Efficient Cost Models for Spatial Queries Using R-Trees

ABSTRACT
Selection and join queries are fundamental operations in Data Base Management Systems (DBMS). Support for nontraditional data, including spatial objects, in an efficient manner is of ongoing interest in database research. Toward this goal, access methods and cost models for spatial queries are necessary tools for spatial query processing and optimization. In this paper, we present analytical models that estimate the cost (in terms of node and disk accesses) of selection and join queries using R-tree-based structures. The proposed formulae need no knowledge of the underlying R-tree structure(s) and are applicable to uniform-like and non-uniform data distributions. In addition, experimental results are presented which show the accuracy of the analytical estimations when compared to actual runs on both synthetic and real data sets.

1. INTRODUCTION
Supporting large volumes of multidimensional (spatial) data is an inherent characteristic of modern
database applications, such as Geographical Information Systems (GIS), Computer Aided Design (CAD),
Image and Multimedia Databases. Such databases need underlying systems with extended features (query
languages, data models, indexing methods) as compared to traditional databases, mainly due to the
complexity of representing and retrieving spatial data. Spatial Data Base Management Systems
(SDBMS), in general, should (i) offer appropriate data types and query language to support spatial data,
and (ii) provide efficient indexing methods and cost models on the execution of specialized spatial
operations, for query processing and optimization purposes [Gut94].
In the particular field of spatial query processing and optimization, during the last two decades several
data structures have been developed for point and non-point multidimensional objects in low-dimensional
space to meet needs in a wide area of applications, including the GIS and CAD domains. Due to the large
number of spatial data structures proposed (an exhaustive survey can be found in [GG95]) active research
in this field has recently turned to the development of analytical models that could make accurate cost
predictions for a wide set of spatial queries. Powerful analytical models are useful in three ways:
(i) structure evaluation: they allow us to better understand the behavior of a data structure under various
input data sets and sizes.
(ii) benchmarking: they can play the role of an objective comparison point when various proposals for
efficient spatial indexing are compared to each other.
(iii) query optimization: they can be used by a query optimizer in order to evaluate the cost of a complex
spatial query and its execution procedure.
Spatial queries addressed by users of SDBMS usually involve selection (point or range) and join
operations. In the literature, most efforts towards the analytical prediction of the performance of spatial
data structures have focused on point and range queries [FSR87, KF93, PSTW93, FK94, TS96] and,
recently, on spatial join queries [Gun93, HJR97, TSS98]. Some proposals support both uniform-like and
non-uniform data distributions, which is an important advantage keeping in mind that modern database
applications handle large amounts of real (usually non-uniform) multidimensional data.
In this paper we focus on the derivation of analytical formulae for range and join queries based on R-trees
[Gut84]; such models support data sets of any distribution (either uniform-like or non-uniform ones)
and make cost prediction based on data properties only. The proposed formulae are shown to be efficient
for several distributions of synthetic and real data sets with the relative error being around 10%-15% for
any kind of distribution used in our experiments.
The rest of the paper is organized as follows: In Section 2, we provide background information about
hierarchical tree structures for spatial data, in particular R-tree-based ones, and related work on cost
analysis of R-tree-based methods. Section 3 presents cost models for the prediction of the R-tree
performance for selection and join queries. In Section 4, comparison results of the proposed models are
presented with respect to efficient R-tree implementations for different data distributions. An extended
survey of related work appears in Section 5, while Section 6 concludes this presentation. Most of the work
in this paper is based on previous work by the authors [TS96, TSS98].
2. BACKGROUND
The first applications of interest involving multidimensional data included geographical databases (GIS,
cadastral applications, etc.), CAD and VLSI design. Recently, spatial data management techniques have
been applied to a wide area of applications, from image and multimedia databases [Fal96] to data mining
and data warehousing [FJS97, RKR97]. An example of a GIS application is illustrated in Figure 1, where
the geographical database consists of several relations 1
about Europe. In particular, Figure 1a (1b)
illustrates European countries (motorways) on a common workspace.
Figure 1: Example of a spatial application: (a) relation countries; (b) relation motorways
The most common queries on such databases include point or range queries on a specified relation (e.g.
"find all countries that contain a user-defined point" or "find all countries that overlap a user-defined
query window") or join queries on pairs of relations (e.g. "find all pairs of countries and motorways that
overlap each other").
2.1. SPATIAL QUERIES AND SPATIAL OPERATORS
The result of the select operation on a relation REL 1 using a query window q contains those tuples of REL 1 whose spatial attribute stands in some relation θ to q. On the other hand, the result of the join operation between a relation REL 1 and a relation REL 2 contains those tuples in the cartesian product REL 1 x REL 2 where the i-th column of REL 1 stands in some relation θ to the j-th column of REL 2.
1 The example considers a relational database [Cod70]. However, in the rest of the paper, our discussion does not depend on the specific underlying model of the SDBMS. Other models, such as object-oriented [Kim95] or object-relational [SM96] ones, are also supported in a straightforward manner.
In conventional (alphanumeric) applications, θ is often equality. When handling multidimensional data, θ is a spatial operator, including topological (e.g., overlap), directional (e.g., north), or distance (e.g., close) relationships between spatial objects (in the example of Figure 2, the answer sets of the operators overlap, north, and close with respect to the query object q are {o5}, {o1, o2}, and {o3, o5}, respectively).
Figure 2: Examples of spatial operators (overlap, north, close) with respect to a query object q
For each spatial operator, with overlap being the most common, the query object's geometry needs to
be combined with each data object's geometry. However, the processing of complex representations, such
as polygons, is very costly. For that reason, a two-step 2 procedure for query processing, illustrated in Figure 3, is usually adopted [Ore89]:
. filter step: an approximation of each object, such as its Minimum Bounding Rectangle (MBR), is used in order to produce a set of candidates (and, possibly, a set of actual answers), which is a superset of the answer set, consisting of actual answers and false hits.
. refinement step: each candidate is then examined with respect to its exact geometry in order to produce the answer set by eliminating false hits (a small code sketch of this two-step flow is given after Figure 3).
2 Brinkhoff et al. [BKSS94] alternatively propose a three-step procedure which inserts a second step of examining more accurate approximations, e.g., the convex hull or minimum m-corner, in order to further reduce the number of false hits.
Figure 3: Two-step spatial query processing: the filter step tests object approximations and produces candidates (hits and false hits); the refinement step tests the exact geometry of the candidates and produces the query result
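The following minimal Python sketch illustrates this two-step flow (this is our own illustration, not part of the original paper; the function names and the exact_predicate callback are assumptions made for the example):

def mbr(geometry):
    """Minimum Bounding Rectangle of a polygon given as a list of (x, y) vertices."""
    xs = [p[0] for p in geometry]
    ys = [p[1] for p in geometry]
    return (min(xs), min(ys), max(xs), max(ys))

def overlap(r1, r2):
    """MBR overlap test: true iff the two rectangles share at least one point."""
    return r1[0] <= r2[2] and r2[0] <= r1[2] and r1[1] <= r2[3] and r2[1] <= r1[3]

def window_query(objects, q, exact_predicate):
    """Two-step processing: filter on MBRs, then refine the surviving candidates on exact geometry."""
    candidates = [(oid, geom) for (oid, geom) in objects if overlap(mbr(geom), q)]   # filter step
    return [oid for (oid, geom) in candidates if exact_predicate(geom, q)]           # refinement step

Only the candidates that survive the filter step pay the expensive exact-geometry test, which is precisely the saving the two-step procedure aims at.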
The filter step is usually based on multidimensional indexes that organize MBR approximations of
spatial objects [Sam90]. In general, the relationship between two MBR approximations cannot guarantee
the relationship between the actual objects; there are only a few operators (mostly directional ones) that make the refinement step unnecessary [PT97].
On the other hand, the refinement step usually includes computational geometry techniques for geometric shape comparison [PS85] and, therefore, it is usually a time-consuming procedure, since the actual geometry of the objects needs to be checked. Although techniques for speeding up this procedure have been studied in the past [BKSS94], the cost of this step cannot be considered as part of the index cost analysis and hence it is not taken into consideration in the following.
Several methods for multidimensional (spatial) indexing have been proposed in the past. They can be
grouped in two main categories: indexing methods for points (also known as point access methods -
PAMs) and indexing methods for non-point objects (also known as spatial access methods - SAMs).
Well-known PAMs include the BANG file [Fre87], and the LSD-tree [HSW89], while, among the
proposed SAMs, the R-tree [Gut84] and its variants (e.g. the R+-tree [SRF87] and the R*-tree [BKSS90])
are the most popular. In the next subsection we describe the R-tree indexing method and its algorithms for
search and join operations.
2.2. THE R-TREE INDEXING METHOD
R-trees were proposed by Guttman [Gut84] as a direct extension of B-trees [Knu73, Com79] to d dimensions. The data structure is a height-balanced tree that consists of intermediate and leaf nodes. A leaf
node is a collection of entries of the form
(oid, R)
where oid is an object identifier, used to refer to an object in the database, and R is the MBR
approximation of the data object. An intermediate node is a collection of entries of the form
(ptr, R)
where ptr is a pointer to a lower level node of the tree and R is a representation of the minimum rectangle
that encloses all MBRs of the lower-level node entries.
Let M be the maximum number of entries in a node and let m ≤ M/2 be a parameter specifying the minimum number of entries in a node. An R-tree satisfies the following properties (a minimal sketch of the corresponding node structures is given after the list):
(i) Every leaf node contains between m and M entries unless it is the root.
(ii) For each entry (oid, R) in a leaf node, R is the smallest rectangle that spatially contains the data
object represented by oid.
(iii) Every intermediate node has between m and M children unless it is the root.
(iv) For each entry (ptr, R) in an intermediate node, R is the smallest rectangle that completely encloses
the rectangles in the child node.
(v) The root node has at least two children unless it is a leaf.
(vi) All leaves appear in the same level.
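To make the entry and node layout concrete, here is a minimal in-memory sketch in Python (our own rendering; the class and field names are illustrative and disk paging is ignored):

from dataclasses import dataclass
from typing import List, Tuple, Union

Rect = Tuple[float, float, float, float]      # 2-d MBR as (x_low, y_low, x_high, y_high)

@dataclass
class LeafEntry:                               # (oid, R): object identifier plus the object's MBR
    oid: int
    rect: Rect

@dataclass
class ChildEntry:                              # (ptr, R): pointer to a lower-level node plus its covering MBR
    ptr: "Node"
    rect: Rect

@dataclass
class Node:                                    # an R-tree node holds either leaf entries or child entries
    is_leaf: bool
    entries: List[Union[LeafEntry, ChildEntry]]

def occupancy_ok(node: Node, m: int, M: int, is_root: bool) -> bool:
    """Properties (i) and (iii): every node except the root holds between m and M entries."""
    low = 1 if is_root else m                  # the root may hold fewer entries
    return low <= len(node.entries) <= M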
After Guttman's proposal, several researchers proposed their own improvements on the basic idea.
Among others, Roussopoulos and Leifker [RL85] proposed the packed R-tree, for the case that data
rectangles are known in advance (i.e., it is applicable only to static databases), Sellis et al. [SRF87]
proposed the R+-tree, a variant of R-trees that guarantees disjointness of nodes by introducing redundancy,
and Beckmann et al. [BKSS90] proposed the R*-tree, an R-tree-based method that uses a rather complex
but more effective grouping algorithm. Gaede and Gunther [GG95] offer an exhaustive survey of
multidimensional access methods including several other variants of the original R-tree technique.
As an example, Figure 4 illustrates a set of data rectangles and the corresponding R-tree built on these rectangles (assuming a given maximum node capacity M).
Figure 4: Some rectangles, organized to form an R-tree, and the corresponding R-tree
The processing of any type of spatial query can be accelerated when a spatial index (e.g. an R-tree)
exists. The selection query, for example, retrieves all objects of a spatial relation REL that overlap a query
window q. It is implemented by performing a traversal of the R-tree index: starting from the root node,
several tree nodes are accessed down to the leaves, with respect to the result of the overlap operation
between q and the corresponding node rectangles. When the search algorithm for spatial selection (called
SS and illustrated in Figure 5) reaches the leaf nodes, all data rectangles that overlap the query window q
are added into the answer set.
SS(R1: R_node, q: rect); /* Spatial Selection Algorithm for R-trees */
01 FOR (all Er1 in R1) DO
02   IF (overlap(Er1.rect, q)) THEN
03   BEGIN
04     IF (R1 is a leaf page) THEN
05       output(Er1.oid)
06     ELSE
07       ReadPage(Er1.ptr);
08       SS(Er1.ptr, q)
09   END
Figure 5: Spatial selection operation using R-trees (algorithm SS)
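A direct Python rendering of algorithm SS follows (our own sketch, reusing the overlap() test and the Node classes from the sketches above; ReadPage is modeled as simply following the child pointer, so node accesses are not counted here):

def ss(node, q, answers):
    """Algorithm SS: collect the oids of all data rectangles that overlap the query window q."""
    for entry in node.entries:
        if overlap(entry.rect, q):          # lines 02-04 of Figure 5
            if node.is_leaf:
                answers.append(entry.oid)   # report a data rectangle (line 05)
            else:
                ss(entry.ptr, q, answers)   # ReadPage(entry.ptr) and recurse (lines 07-08)
    return answers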
On the other hand, the join operation between two spatial relations REL 1 and REL 2 can be implemented
by applying synchronized tree traversals on both R-tree indexes. An algorithm based on this general idea,
called SJ in the following, was originally introduced by Brinkhoff et al. in [BKS93]. Two improvements of this
algorithm were also proposed in the same paper towards the reduction of the CPU- and I/O-cost by taking
into consideration faster main-memory algorithms, and better read schedules for a given LRU-buffer,
respectively. Specifically, for two R-tree indexes rooted by nodes R 1 and R 2 , respectively, the spatial join
procedure (algorithm SJ) is illustrated in Figure 6.
SJ(R1, R2: R_node); /* Spatial Join Algorithm for R-trees */
01 FOR (all Er2 in R2) DO
02   FOR (all Er1 in R1) DO
03   BEGIN
04     IF (overlap(Er1.rect, Er2.rect)) THEN
05       IF (R1 is a leaf page) AND (R2 is a leaf page) THEN
06         output(Er1.oid, Er2.oid)
07       ELSE IF (R1 is a leaf page) THEN
08         ReadPage(Er2.ptr); SJ(R1, Er2.ptr)
09       ELSE IF (R2 is a leaf page) THEN
10         ReadPage(Er1.ptr); SJ(Er1.ptr, R2)
11       ELSE
12       BEGIN
13         /* both nodes are intermediate */
14         ReadPage(Er1.ptr); ReadPage(Er2.ptr);
15         SJ(Er1.ptr, Er2.ptr)
16       END
17   END
Figure 6: Spatial join operation between two R-trees (algorithm SJ)
In other words, a synchronized traversal of both R-trees is executed, with the entries of nodes R 1 and R 2 playing the roles of data and query rectangles, respectively, in a series of range queries.
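In the same spirit, a Python sketch of the synchronized traversal is given below (again our own illustration, reusing the helpers above; the CPU- and I/O-tuning refinements of [BKS93] are deliberately omitted):

def sj(node1, node2, pairs):
    """Algorithm SJ: synchronized traversal of two R-trees, reporting overlapping leaf pairs."""
    for e2 in node2.entries:
        for e1 in node1.entries:
            if overlap(e1.rect, e2.rect):
                if node1.is_leaf and node2.is_leaf:
                    pairs.append((e1.oid, e2.oid))
                elif node1.is_leaf:
                    sj(node1, e2.ptr, pairs)       # descend R2 only
                elif node2.is_leaf:
                    sj(e1.ptr, node2, pairs)       # descend R1 only
                else:
                    sj(e1.ptr, e2.ptr, pairs)      # read both child pages (line 14 of Figure 6)
    return pairs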
For both operations, the total cost is measured by the total number of page accesses in the R-tree index (procedure ReadPage). Procedure ReadPage either performs an actual read operation on the disk or reads the corresponding node information from a memory-resident buffer; thus we distinguish between node and disk accesses 3 in the analysis of Section 3.
3 The distinction between node and disk accesses (denoted by NA and DA, respectively) is a subject related to buffer management. The inequality DA ≤ NA always holds; the equality holds only in the case where no buffering scheme exists.
3. ANALYTICAL COST MODELS FOR SPATIAL QUERIES
Complex queries are usually transformed by DBMS query optimizers to a set of simpler ones and the
execution procedure takes the partial costs into account in order to schedule the execution of the original
query. Thus query optimization tools that estimate access cost and selectivity of a query are
complementary modules together with access methods and indexing techniques. Traditional optimization
techniques usually include heuristic rules, which however are not effective in spatial databases due to the
peculiarity of spatial data sets (multi-dimensionality, lack of total ordering, etc.). Other, more sophisticated, techniques include histograms and cost models. Although research on multi-dimensional
histograms has recently appeared in the literature [PI97], in the spatial database literature, cost models for
selectivity and cost estimation seem to be the most promising solutions. Proposals in this area include
models for selection [KF93, PSTW93] and join [Gun93] queries.
However, most proposals require knowledge of index properties and make a uniformity assumption,
thus rendering them incomplete tools for the purposes of real query optimization. Appropriate extensions
to solve those problems are presented in the rest of the section. Throughout our discussion we use the list
of symbols that appears in Table 1.
3.1. SELECTION QUERIES
Formally, the problem of the R-tree cost analysis for selection queries is defined as follows: Let d be the dimensionality of the data space and [0,1)^d the d-dimensional unit workspace. Let us assume that N_{R1} data rectangles are stored in an R-tree index R_1 and a query asking for all rectangles that overlap a query window q needs to be answered. What is sought is a formula that estimates the average number NA of node accesses using only knowledge about data properties (i.e., without extracting information from the underlying R-tree structure).
Definition: The density D of a set of N rectangles with average size s = (s_1, s_2, ..., s_d) is the average number of rectangles that contain a given point in d-dimensional space. Equivalently, D can be expressed as the ratio of the sum of the areas of all rectangles over the area of the available workspace. If we assume a unit workspace [0,1)^d, with unit area, then the density D(N, s) is given by the following formula:
D(N, s) = N \cdot \prod_{k=1}^{d} s_k    (1)
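For instance (a small Python illustration of Eq. 1 with arbitrary numbers):

def density(N, s):
    """Eq. 1: density of N rectangles with average extent s = (s_1, ..., s_d) in the unit workspace."""
    D = N
    for s_k in s:
        D *= s_k
    return D

# e.g. 10,000 rectangles of average size 0.005 x 0.004 give D = 10000 * 0.005 * 0.004 = 0.2
print(density(10000, (0.005, 0.004)))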
Symbol                                   Definition
d                                        number of dimensions
m                                        minimum R-tree node capacity
M                                        maximum R-tree node capacity
f                                        average R-tree node fanout
c                                        average R-tree node capacity
h_{Ri}                                   height of the R-tree R_i
N_{Ri}                                   number of data rectangles indexed in the R-tree R_i
D_{Ri}                                   density of data rectangles indexed in the R-tree R_i
N_{Ri,l}                                 number of nodes of the R-tree R_i at level l
D_{Ri,l}                                 density of node rectangles of the R-tree R_i at level l
q_k                                      average extent of a query rectangle q on dimension k
s_{Ri,l,k}                               average extent of node rectangles of the R-tree R_i at level l on dimension k
NA(R_1,q,l) / DA(R_1,q,l)                number of node / disk accesses for R-tree R_1 at level l because of the query rectangle q
NA(R_i,R_j,l) / DA(R_i,R_j,l)            number of node / disk accesses for R-tree R_i at level l because of the node rectangles of R_j
NA_total(R_1,q) / DA_total(R_1,q)        number of node / disk accesses for a selection query between an R-tree R_1 and a query rectangle q
NA_total(R_1,R_2) / DA_total(R_1,R_2)    number of node / disk accesses for a join query between two R-trees R_1 and R_2
Table 1: List of symbols and definitions
Assume now an R-tree R_1 of height h_{R1} (the root is assumed to be at level h_{R1} and leaf nodes are assumed to be at level 1). If N_{R1,l} is the number of nodes at level l and s_{R1,l} is their average size, then the expected number NA_total(R_1, q) of node accesses in order to answer a selection query using a query window q is defined as follows (assuming that the root node is stored in main memory):

NA\_total(R_1, q) = \sum_{l=1}^{h_{R1}-1} intsect(N_{R1,l}, s_{R1,l}, q)    (2)

where intsect(N_{R1,l}, s_{R1,l}, q) is a function that returns the number of nodes at level l intersected by the query window q. In other words, Eq. 2 expresses the fact that the expected number of node accesses is equal to the expected number of intersected nodes at each level l (1 ≤ l ≤ h_{R1} - 1).
Lemma: Given a set of N rectangles r_1, ..., r_N with average size s and a rectangle r with size q, the average number of rectangles intersected by r is:

intsect(N, s, q) = N \cdot \prod_{k=1}^{d} (s_k + q_k)    (3)

Proof: The average number of rectangles of a set of N rectangles with average size s that intersect a rectangle r with size q is equal to the number of rectangles of a second set of N rectangles with average size s' (s'_k = s_k + q_k, for all k) that contain a point of the workspace. The latter, by definition, equals the density D' of the second set of rectangles:

intsect(N, s, q) = D'(N, s') = N \cdot \prod_{k=1}^{d} s'_k = N \cdot \prod_{k=1}^{d} (s_k + q_k).
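The lemma can be checked numerically. The following Python sketch (our own; the parameters are arbitrary) compares Eq. 3 with a Monte Carlo simulation of uniformly placed rectangles; the match is only approximate because the simulation clamps all rectangles inside the unit workspace:

import random

def intsect(N, s, q):
    """Eq. 3: expected number of N rectangles of average size s intersected by a rectangle of size q."""
    est = N
    for s_k, q_k in zip(s, q):
        est *= (s_k + q_k)
    return est

def simulate(N, s, q, trials=200):
    """Average, over random placements, of how many uniformly placed rectangles overlap the query."""
    d, total = len(s), 0
    for _ in range(trials):
        qlo = [random.random() * (1 - q[k]) for k in range(d)]
        hits = 0
        for _ in range(N):
            rlo = [random.random() * (1 - s[k]) for k in range(d)]
            if all(rlo[k] <= qlo[k] + q[k] and qlo[k] <= rlo[k] + s[k] for k in range(d)):
                hits += 1
        total += hits
    return total / trials

print(intsect(1000, (0.01, 0.01), (0.05, 0.05)))   # analytical: 1000 * 0.06 * 0.06 = 3.6
print(simulate(1000, (0.01, 0.01), (0.05, 0.05)))  # should be close to 3.6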
Assuming that rectangle r represents a query window q on R_1, we derive NA_total(R_1, q) by combining Eqs. 2 and 3:

NA\_total(R_1, q) = \sum_{l=1}^{h_{R1}-1} \left\{ N_{R1,l} \cdot \prod_{k=1}^{d} \left( s_{R1,l,k} + q_k \right) \right\}    (4)
In order to reach our goal we have to express Eq. 4 as a function of the data properties N_{R1} (number of data rectangles) and D_{R1} (density of data rectangles) or, in other words, to express the R-tree index properties h_{R1}, N_{R1,l}, and s_{R1,l,k} as functions of the data properties N_{R1} and D_{R1}.
The height h_{R1} of an R-tree R_1 with average node capacity (fanout) f that stores N_{R1} data rectangles is given by the following formula [FSR87]:

h_{R1} = 1 + \lceil \log_f (N_{R1} / f) \rceil    (5)

Since a node organizes on the average f rectangles, we can assume that the average number of leaf nodes (i.e., l = 1) is N_{R1} / f and the average number of their parent nodes (i.e., l = 2) is N_{R1} / f^2. In general, the average number of nodes at level l is:

N_{R1,l} = N_{R1} / f^{\,l}    (6)
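For example (a small Python illustration of Eqs. 5 and 6 with arbitrary numbers):

import math

def rtree_height(N, f):
    """Eq. 5: height of an R-tree with N data rectangles and average fanout f."""
    return 1 + math.ceil(math.log(N / f, f))

def nodes_at_level(N, f, l):
    """Eq. 6: average number of nodes at level l (leaves are level 1)."""
    return N / (f ** l)

# e.g. N = 100,000 rectangles with f = 50: height 3, with 2000 leaf nodes and 40 nodes at level 2
print(rtree_height(100000, 50), nodes_at_level(100000, 50, 1), nodes_at_level(100000, 50, 2))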
A second assumption that we make is that the sizes of the node sides are equal (i.e., s_{R1,l,1} = ... = s_{R1,l,d}), for 1 ≤ l ≤ h_{R1} - 1. This squaredness assumption is a reasonable property for a "good" R-tree [KF93]. According to that assumption and Eqs. 1 and 6, the average extent s_{R1,l,k} is a function of the density D_{R1,l} of node rectangles at level l:

s_{R1,l,k} = \left( \frac{D_{R1,l}}{N_{R1,l}} \right)^{1/d} = \left( \frac{D_{R1,l} \cdot f^{\,l}}{N_{R1}} \right)^{1/d}    (7)
What remains is an estimation of D_{R1,l} using the data properties N_{R1} and D_{R1}. Suppose that, at level l, N_{R1,l} nodes with average size (s_{R1,l,k})^d are organized into N_{R1,l+1} parent nodes with average size (s_{R1,l+1,k})^d. Each parent node groups on the average f child nodes, as illustrated in Figure 7 (for d = 2), with f^{1/d} child nodes being responsible for the size of the parent node along each direction. The centers of the projections of the N_{R1,l} rectangles are assumed to be equally distanced, and this distance (denoted by t_{l,k}) depends on the number N_{R1,l}^{1/d} of nodes along each direction.
Figure 7: Grouping f nodes into 1 parent node (t_{l,k} denotes the distance between the centers of two consecutive child nodes' projections)
Hence the average size s_{R1,l+1,k} of a parent node along each direction is given by:

s_{R1,l+1,k} = s_{R1,l,k} + \left( f^{1/d} - 1 \right) \cdot t_{l,k}    (8)

where t_{l,k} is given by:

t_{l,k} = \frac{1}{N_{R1,l}^{1/d}}    (9)

and denotes the distance between the centers of two consecutive rectangles' projections on dimension k. To derive Eq. 9 we divided the (unit) extent of the workspace by the number N_{R1,l}^{1/d} of different node projections on dimension k.
Lemma: Given a set of N_{R1,l} node rectangles with density D_{R1,l}, the density D_{R1,l,k} of their projections on dimension k is given by the following formula:

D_{R1,l,k} = N_{R1,l} \cdot s_{R1,l,k} = N_{R1,l} \cdot \left( \frac{D_{R1,l}}{N_{R1,l}} \right)^{1/d} = \sqrt[d]{D_{R1,l} \cdot N_{R1,l}^{\,d-1}}    (10)
Using Eqs. 8, 9, and 10, the density D_{R1,l+1} of node rectangles at level l+1 is a function of the density D_{R1,l} of node rectangles at level l:

D_{R1,l+1} = \left( 1 + \frac{f^{1/d} - 1}{D_{R1,l}^{1/d}} \right)^{d} \cdot \frac{D_{R1,l}}{f}    (11)
Essentially, by using Eq. 11, the density at each level of the R-tree is calculated as a function of only the density D_{R1} of the data rectangles (which can be denoted by D_{R1,0}).
At this point, the original goal has been reached. By combining Eqs. 4, 5, 6, 7, and 11, the following formula for the expected number NA_total(R_1, q) of node accesses can be derived:

NA\_total(R_1, q) = \sum_{l=1}^{h_{R1}-1} \left\{ \frac{N_{R1}}{f^{\,l}} \cdot \prod_{k=1}^{d} \left( \sqrt[d]{\frac{D_{R1,l} \cdot f^{\,l}}{N_{R1}}} + q_k \right) \right\}, \quad h_{R1} = 1 + \lceil \log_f (N_{R1} / f) \rceil    (12)

Clearly, this formula can be computed by using only the data set properties N_{R1} and D_{R1}, the typical R-tree parameter f, and the query window q.
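Putting Eqs. 5-7, 11, and 12 together, the selection estimate can be coded in a few lines. The following Python sketch is our own rendering of the formulae as reconstructed above, meant as an illustration rather than the authors' implementation (the min(..., 1) clipping reflects the unit-workspace bound discussed in subsection 3.2):

import math

def density_at_level(D0, f, d, l):
    """Eq. 11 applied l times: density of node rectangles at level l, starting from the data density D0."""
    D = D0
    for _ in range(l):
        D = ((1 + (f ** (1 / d) - 1) / (D ** (1 / d))) ** d) * D / f
    return D

def na_total_selection(N, D0, f, d, q):
    """Eq. 12: expected node accesses for a selection query with window extents q = (q_1, ..., q_d)."""
    h = 1 + math.ceil(math.log(N / f, f))          # Eq. 5
    total = 0.0
    for l in range(1, h):                          # levels 1 (leaves) .. h-1 (children of the root)
        N_l = N / (f ** l)                         # Eq. 6
        D_l = density_at_level(D0, f, d, l)        # Eq. 11
        s_l = (D_l / N_l) ** (1 / d)               # Eq. 7 (squaredness assumption)
        cost_l = N_l
        for q_k in q:
            cost_l *= min(s_l + q_k, 1.0)          # intsect factor, clipped to the unit workspace
        total += cost_l
    return total

# e.g. 100,000 data rectangles with density 0.5, f = 50, and a 1% x 1% query window:
print(na_total_selection(100000, 0.5, 50, 2, (0.01, 0.01)))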
3.2. JOIN QUERIES
According to the discussion of subsection 2.2, the processing cost of a join query is equal to the total cost
of a set of appropriate range queries, as the algorithm SJ shown in Figure 6 illustrates. In this subsection
we propose a pair of formulae that estimate the cost of a join query.
Formally, the problem of R-tree cost analysis for join queries is defined as follows: Let d be the dimensionality of the workspace and [0,1)^d the d-dimensional unit workspace. Let us assume two spatial data sets of cardinality N_{R1} and N_{R2}, respectively, with the corresponding MBR approximations being stored in two R-tree indexes R_1 and R_2, respectively. In correspondence with the goal of Section 3.1, in this section, the target of our cost analysis is a formula that would efficiently estimate the average number NA of node accesses needed to process a join query between the two data sets, based on the knowledge of the data properties only and without extracting information from the corresponding R-tree structures.
Suppose that the height of a tree index R_i is equal to h_{Ri} and that the two root nodes are stored in main memory. At each level l_i ≤ h_{Ri} - 1, the tree structure R_i contains N_{Ri,l_i} nodes of average size s_{Ri,l_i}, consisting of sets of entries E_{Ri}. In order to find which pairs of entries are overlapping and to traverse the two tree structures downwards, we compare the entries E_{R1} and E_{R2} (line 04 of the spatial join algorithm SJ in Figure 6).
The cost, in terms of node accesses, of the above comparison at each level is given by the summation of two factors which express the respective costs for the two R-trees, namely NA(R_1, R_2, l_1) and NA(R_2, R_1, l_2). In order to estimate these two factors we consider that the entries of R_1 (R_2) play the role of the data set (of a set of query windows q, respectively); then we apply the function intsect from the R-tree analysis for selection queries in order to estimate the access cost for R_1 (R_2, respectively). No buffering scheme is considered for the analytical estimation of node accesses; hence the access costs for both trees R_1 and R_2 at each level are equal (since an equal number of nodes is accessed, as can be extracted from line 14 of algorithm SJ). The processing of line 04 of the SJ algorithm is repeatedly executed at each level of the two trees down to the leaf level of the shorter tree R_2 (without loss of generality, we assume that h_{R1} ≥ h_{R2}):

NA(R_1, R_2, l_1) = NA(R_2, R_1, l_2) = N_{R2,l_2} \cdot intsect(N_{R1,l_1}, s_{R1,l_1}, s_{R2,l_2}) = N_{R1,l_1} \cdot N_{R2,l_2} \cdot \prod_{k=1}^{d} \left( s_{R1,l_1,k} + s_{R2,l_2,k} \right)    (13)

where l_1 = l_2 + (h_{R1} - h_{R2}) and l_2 ≥ 1.
In [TSS98] we provide a slightly modified formula: in particular, instead of the factor (s_{R1,l_1,k} + s_{R2,l_2,k}) we bound that factor from above by 1 (i.e., we use min{ (s_{R1,l_1,k} + s_{R2,l_2,k}), 1 }), due to the unit workspace, which is assumed during the whole analysis of Section 3 (Eq. 3, Eq. 4, Eq. 12, and Eq. 13). When the leaf nodes of R_2 are being processed, l_2 is fixed to the value 1 (i.e., denoting the leaf level) and the propagation of R_1 continues down to its lower h_{R1} - h_{R2} levels. Formally, the total cost in terms of node accesses is given by Eq. 14:
NA\_total(R_1, R_2) = \sum_{l_1=1}^{h_{R1}-1} \left\{ NA(R_1, R_2, l_1) + NA(R_2, R_1, l_2) \right\}    (14)

where l_2 = l_1 - (h_{R1} - h_{R2}) if l_1 > h_{R1} - h_{R2}, and l_2 = 1 otherwise.
The involved parameters are:
. h_{Ri}, which denotes the height of the tree R_i and is given by Eq. 5,
. N_{Ri,l_i}, which denotes the average number of nodes of the tree R_i at level l_i and is given by Eq. 6, as a function of the actual population N_{Ri} of the data set, and
. s_{Ri,l_i,k}, which denotes the average extent of the nodes of the tree R_i on each dimension k at level l_i and is given by Eq. 7, as a function of the density D_{Ri,l_i} of the node rectangles at level l_i, which, in turn, is given by Eq. 11, as a function of the actual density D_{Ri} of the data set.
Qualitatively, Eq. 14 estimates the cost of a join query between two spatial data sets based on their primitive properties only, namely the number and density of data rectangles, in correspondence with the relevant analysis for range queries (subsection 3.1). Notice that Eq. 14 is symmetric with respect to the two indexes R_1 and R_2. The same conclusion is drawn by studying the algorithm SJ, since the number of node accesses is equal to the number of ReadPage calls (line 14) which, in turn, are the same for both trees. This equivalence of the two indexes no longer holds when a simple path buffer (i.e., a buffer that keeps the most recently visited path for each tree structure) is introduced, as we will discuss in the next subsection.
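Analogously, Eqs. 13 and 14 translate into the following Python sketch (again our own rendering, under the level correspondence reconstructed above; both trees are assumed to have at least two levels, and the inputs are swapped if necessary so that h_{R1} ≥ h_{R2}):

import math

def tree_params(N, D0, f, d):
    """Height (Eq. 5) plus, per level l, the node count (Eq. 6) and node extent (Eq. 7)."""
    h = 1 + math.ceil(math.log(N / f, f))
    levels, D = {}, D0
    for l in range(1, h):
        D = ((1 + (f ** (1 / d) - 1) / (D ** (1 / d))) ** d) * D / f   # Eq. 11
        N_l = N / (f ** l)                                             # Eq. 6
        levels[l] = (N_l, (D / N_l) ** (1 / d))                        # Eq. 7
    return h, levels

def na_total_join(N1, D1, N2, D2, f, d):
    """Eq. 14: expected node accesses of algorithm SJ without any buffer."""
    h1, lev1 = tree_params(N1, D1, f, d)
    h2, lev2 = tree_params(N2, D2, f, d)
    if h1 < h2:                                    # keep the taller tree as R1 (the cost is symmetric)
        return na_total_join(N2, D2, N1, D1, f, d)
    total = 0.0
    for l1 in range(1, h1):
        l2 = max(1, l1 - (h1 - h2))                # level correspondence of Eq. 14
        n1, s1 = lev1[l1]
        n2, s2 = lev2[l2]
        total += 2 * n1 * n2 * (min(s1 + s2, 1.0) ** d)   # Eq. 13, counted once per tree
    return total

# e.g. joining two 2-d data sets of 100,000 and 20,000 rectangles (densities 0.5 and 0.3), f = 50:
print(na_total_join(100000, 0.5, 20000, 0.3, 50, 2))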
3.3. INTRODUCING A PATH BUFFER
Extending previous analysis, we introduce a simple buffering mechanism that maintains a path buffer for
the underlying tree structure(s). The existence of such a buffer mainly affects the performance of the tree
index that plays the role of the query set (namely R 2 ), as will be discussed in detail in this subsection,
since the search procedure, algorithm SJ illustrated in Figure 6, reads R 2 entries less frequently than R 1
entries. With respect to that, we assert that the cost of a selection query in terms of disk accesses, DA_total(R_1, q), is almost equal to NA_total(R_1, q), as formulated in Eq. 12, and we therefore provide no further analysis of path buffering for selection queries. Moreover, the effect of a buffering mechanism (e.g. an LRU buffer)
has been already addressed in the literature [LL98] and, according to experimental results, very low buffer
size (such as that of a path buffer) causes almost zero impact on point and range query performance. On
the other hand, even a simple path buffer scheme highly affects the actual cost of a join query.
As already mentioned, by examining algorithm SJ we conclude that the existence of such a buffering
scheme mainly affects the computation of the cost for R 2 because its entries constitute the outer loop of the
algorithm and hence are less frequently updated. As for R 1 , since its entries constitute the inner loop of the
algorithm, the respective cost computation is not considerably affected by the existence of a path buffer.
These statements are formally explained by the following alternative cases (illustrated in Figure 8):
(i) Suppose that an entry E_{R2} of tree R_2 at level l overlaps with m entries of a node of R_1 at level l+1, with m' entries of a different node of R_1 at level l+1, etc. E_{R2} is kept in main memory during its comparison with all entries of the first node and will be fetched from disk again, hence counted again in the DA(R_2, R_1, l) cost, when its comparison with the entries of the second node starts. As a result, the number of actual disk accesses of the node rooted by E_{R2} is equal to the number of nodes of R_1 at level l+1 (i.e., the parent level) having rectangles intersected by E_{R2}, namely intsect(N_{R1,l+1}, s_{R1,l+1}, s_{R2,l}).
(ii) On the other hand, an entry E_{R1} of tree R_1 at level l is counted again in DA(R_1, R_2, l) each time it overlaps with an entry E_{R2} of tree R_2, with only one exception: E_{R1} being the last member of the intersection set of E_{R2} and, simultaneously, the first member of the intersection set of its consecutive entry E'_{R2}. The above exception rarely happens; moreover, it is hard to model since no order exists among the entries of R-tree nodes.
Hypothesis:
- D2 overlaps with {D1, E1} and {H1, I1} entries of nodes A1 and B1, respectively
- E2 overlaps with {E1, F1} and {H1} entries of nodes A1 and B1, respectively
case (i): Example of DA(R_2, R_1, l) computation:
Rule: 2 hits due to entry D2 (i.e., equal to the number of intersected entries of R_1 at level l+1, namely {A1, B1}).
case (ii): Example of DA(R_1, R_2, l) computation:
Rule: 2 hits due to entry H1 (i.e., equal to the number of intersected entries of R_2 at level l, namely {D2, E2}).
Exception to the rule: 1 hit due to entry E1 (since the overlapping pairs (E1, D2) and (E1, E2) are consecutively checked).
Figure 8: Alternative cases for estimating DA cost for join queries
Since the cost for the tree that plays the role of the query (data) set is affected to a high (low) degree, we distinguish between two different cases: the tree R_1 that plays the role of the data set being taller or shorter than R_2. In the first case (where h_{R1} > h_{R2}), the propagation of R_1 down to its lower levels adds no extra cost (in terms of disk accesses) to the 'query' tree R_2 that has already reached its leaf level. In the second case (where h_{R1} ≤ h_{R2}), each propagation of the 'query' tree R_2 down to its lower levels adds equal cost to the 'data' tree R_1 (denoting that buffer existence does not affect the cost of the 'data' tree R_1).
Hence, with respect to the above discussion, the access cost of each tree at a specific level l i is
calculated according to the following formulae:
DA(R_1, R_2, l_1) = NA(R_1, R_2, l_1) = N_{R1,l_1} \cdot N_{R2,l_2} \cdot \prod_{k=1}^{d} \left( s_{R1,l_1,k} + s_{R2,l_2,k} \right)    (15)

DA(R_2, R_1, l_2) = N_{R2,l_2} \cdot intsect(N_{R1,l_1+1}, s_{R1,l_1+1}, s_{R2,l_2}) = N_{R1,l_1+1} \cdot N_{R2,l_2} \cdot \prod_{k=1}^{d} \left( s_{R1,l_1+1,k} + s_{R2,l_2,k} \right)    (16)
and the total cost is given by Eq. 17, with the level correspondence l_2 = max{1, l_1 - (h_{R1} - h_{R2})} if h_{R1} ≥ h_{R2}, and l_1 = max{1, l_2 - (h_{R2} - h_{R1})} otherwise:

DA\_total(R_1, R_2) = \sum_{l_1=1}^{h_{R1}-1} DA(R_1, R_2, l_1) + \sum_{l_2=1}^{h_{R2}-1} DA(R_2, R_1, l_2),   if h_{R1} ≥ h_{R2}

DA\_total(R_1, R_2) = \sum_{l_2=1}^{h_{R2}-1} \left\{ DA(R_1, R_2, l_1) + DA(R_2, R_1, l_2) \right\},   if h_{R1} < h_{R2}    (17)
Notice that, in contrast to Eq. 14, Eq. 17 is not symmetric with respect to the two indexes R_1 and R_2. The experimental results of Section 4 also strengthen this statement.
In the above analysis we have taken two cases into consideration: adopting (a) no, or (b) a simple path
buffer scheme. A more complex buffering scheme (e.g. an LRU buffer of predefined size) would surely
achieve a lower value for DA_total. However, its effect is beyond the scope of this paper (see [LL98] for
related work on selection queries).
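For completeness, the following Python sketch mirrors the path-buffer case analysis of Eqs. 15-17 as reconstructed above (this is an interpretation of the model, not the authors' code; it reuses tree_params() from the sketch of subsection 3.2, assumes both trees have at least two levels, and charges each R_2 node a single read at the topmost level, where the parent of the R_1 nodes is the memory-resident root):

def da_total_join(N1, D1, N2, D2, f, d):
    """Eq. 17 (as reconstructed): disk accesses of SJ with a path buffer (R1 = 'data' tree, R2 = 'query' tree)."""
    h1, lev1 = tree_params(N1, D1, f, d)
    h2, lev2 = tree_params(N2, D2, f, d)

    def da_r1(l1, l2):                          # Eq. 15: the path buffer barely helps the 'data' tree R1
        n1, s1 = lev1[l1]; n2, s2 = lev2[l2]
        return n1 * n2 * (min(s1 + s2, 1.0) ** d)

    def da_r2(l1, l2):                          # Eq. 16: an R2 node is re-read once per intersected
        n2, s2 = lev2[l2]                       # R1 node at the parent level l1 + 1
        if l1 + 1 in lev1:
            n1p, s1p = lev1[l1 + 1]
            return n1p * n2 * (min(s1p + s2, 1.0) ** d)
        return n2                               # parent is the memory-resident root: one read per R2 node

    if h1 >= h2:
        return (sum(da_r1(l1, max(1, l1 - (h1 - h2))) for l1 in range(1, h1)) +
                sum(da_r2(l2 + (h1 - h2), l2) for l2 in range(1, h2)))
    return sum(da_r1(max(1, l2 - (h2 - h1)), l2) + da_r2(max(1, l2 - (h2 - h1)), l2)
               for l2 in range(1, h2))

print(da_total_join(100000, 0.5, 20000, 0.3, 50, 2))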
3.4. SUPPORT FOR NON-UNIFORM DATA
The proposed analytical model assumes data uniformity in order to compute the density of the R-tree node rectangles at a level l+1 as a function of the density of the child node rectangles at level l (Eq. 11). In particular, in order to derive Eq. 8 and Eq. 9, it was assumed that the centers of the projections of the N_{R1,l} node rectangles were equally distanced. This uniformity assumption leads to a model that could be efficient for uniform-like data distributions but is hardly applicable to non-uniform distributions of data, which are the rule when dealing with real applications.
In order to adapt the model in a way that would efficiently support any type of data sets (uniform or
non-uniform ones) we reduce the global uniformity assumption of the analytical model (i.e., consider the
whole workspace) to a local uniformity assumption (by assuming a small sub-area of the workspace)
according to the following idea: the density of a data set is involved in the cost formulae as a single number D_{Ri}. However, for non-uniform data sets, density is a varying parameter, graphically a surface in
d-dimensional space. Such a surface could show strong deviations from point to point of the workspace,
compared to the average value. For example, in Figure 9, a real data set, called LBeach [Bur91], is
illustrated together with its density surface.
(a) LBeach data set (b) LBeach density surface
Figure 9: A real data set and its density surface
The average density of this data set is D_avg. However, as extracted from Figure 9b, actual density values vary from (almost) zero in sparsely populated areas (such as the upper-left and bottom-right corners) up to much higher values in densely populated areas, with respect to the reference point. It is evident that using the D_avg value
in a cost formula would sometimes lead to inaccurate estimations. On the other hand, a satisfactory image
of the density surface provides more accurate D values, with respect to a specified query window q.
Although we refer to dynamic indexing we assume that we can use some data properties for our
prediction, such as the expected number N and average size s of data, since these properties can be usually
computed using a sample of the data set (efficient sampling algorithms have been proposed, among others,
by Vitter in [Vit84, Vit85]).
Based on the above idea, the proposed cost formulae could efficiently support either uniform or non-uniform data distributions by assuming the following modifications (a small sketch of this procedure follows the list):
(i) the average density D_{Ri} of the data set is replaced by the actual density D'_{Ri} of the data set within the area of the specified query window q.
(ii) the cardinality N_{Ri} of the data set is replaced by a transformation N'_{Ri} of it, computed as N'_{Ri} = N_{Ri} \cdot D'_{Ri} / D_{Ri}.
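A minimal Python sketch of the density-surface idea follows (our own illustration; the grid resolution matches the 40 x 40 grid used in Section 4.2, while the helper names and the cell-index interface of local_parameters are assumptions made for the example):

def density_surface(rects, cells=40):
    """Approximate D(x, y) on a cells x cells grid over the unit workspace: for each cell, the
    average number of rectangles covering it (coverage fraction summed over all rectangles)."""
    grid = [[0.0] * cells for _ in range(cells)]
    step = 1.0 / cells
    for (xl, yl, xh, yh) in rects:
        for i in range(int(xl / step), min(int(xh / step) + 1, cells)):
            for j in range(int(yl / step), min(int(yh / step) + 1, cells)):
                dx = max(0.0, min(xh, (i + 1) * step) - max(xl, i * step))
                dy = max(0.0, min(yh, (j + 1) * step) - max(yl, j * step))
                grid[i][j] += (dx * dy) / (step * step)
    return grid

def local_parameters(grid, N, D_avg, q_region):
    """Modifications (i)-(ii): local density D' inside the query region and the transformed N' = N * D'/D."""
    (il, jl, ih, jh) = q_region                   # cell indices covered by the query window
    cells_in_q = [(i, j) for i in range(il, ih + 1) for j in range(jl, jh + 1)]
    D_local = sum(grid[i][j] for (i, j) in cells_in_q) / len(cells_in_q)
    return D_local, N * D_local / D_avg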
In this section we provided analytical formulae for the cost estimation of selection or join queries on
spatial data sets organized by disk-resident R-tree indexes. The proposed cost models are based on
primitive data properties only, without any knowledge of the corresponding R-trees. In the next section we
evaluate our model by comparing the analytical estimations with experimental results on synthetic and real
data sets in one- and two-dimensional space.
4. EVALUATION OF THE COST MODELS
The evaluation of the analytical formulae proposed in Section 3 was based on a variety of experimental
tests on synthetic and real data sets illustrated in Figure 10. Synthetic one- and two- dimensional data sets
consist of random (Figure 10a) and skewed (Figure 10b) distributions of varying cardinality N (20K ≤ N ≤ 80K) and density D (0.2 ≤ D ≤ 0.8), and have been constructed by using random number generators.
Real two-dimensional data sets are parts of the TIGER database of the U.S. Bureau of Census [Bur91].
In particular, we have used two TIGER data sets:
. LBeach data set: 53,143 line segments (stored as rectangles) indicating roads of Long Beach,
California (Figure 10c).
. MGcounty data set: 39,221 line segments (stored as rectangles) indicating roads of Montgomery
county, Maryland (Figure 10d).
For the experimental tests we built R*-tree indexes [BKSS90] and performed several spatial joins using the data sets presented before. All experiments were run on an HP700 workstation with 256 Mbytes of main memory. The analytical estimations of node accesses for selection queries were based on Eq. 12, and the node (disk) cost estimations for join queries were based on Eq. 14 (Eq. 17, respectively), with the average capacity of the tree indexes being set to the typical value.
4 The average density D_{Ri} of a data set is considered to be always D_{Ri} > 0, even for point data sets. A value of D_{Ri} close to 0 corresponds to zero-populated areas.
(a) synthetic data: random (uniform-like) distribution (b) synthetic data: skewed (Zipf) distribution
(c) real data: LBeach data set (d) real data: MGcounty data set
Figure 10: Two-dimensional data sets used in our experiments
4.1. UNIFORM-LIKE DATA
We present several test results in order to evaluate the cost estimation of Eq. 12 for selection (point and
range) queries. Figure 11 illustrates the results for two random data sets (of different cardinalities, both with density 0.1). The relative error was always below 10% for the two experiments illustrated in Figure 11 as well as for the rest of the experiments with random data sets.
Figure 11: Performance comparison for selection queries on uniform-like data (node accesses vs. query size in % of workspace per axis; log scale)
As a further step, we evaluated the analytical formulae for join query estimation, presented in Section
3, on various R-tree combinations. Figure 12 illustrates the experimental and analytical results of node and
disk accesses (denoted by NA and DA) for (a) one- and (b) two-dimensional random data sets, respectively, for all N_{R1} / N_{R2} combinations.
The non-linearity of the plots in Figure 12b is due to the fact that the R-tree indexes are not all of equal height h; the height of the two-dimensional indexes of cardinality 20K ≤ N ≤ 40K differs from that of the indexes of cardinality 60K ≤ N ≤ 80K, while the height of all one-dimensional indexes is equal to 3. According to our experiments it also turns out that the cost formulae for the estimation of disk accesses DA are non-symmetric with respect to the trees R_1 and R_2, a fact that has already been mentioned during the presentation of the cost models in Section 3. The comparison results confirm that, for tree indexes of equal height, the choice of the smaller (larger) index to play the role of the 'query' ('data') tree is the best choice for the effectiveness of the SJ algorithm, which however is not a general rule for trees of different height, as illustrated in Figure 13 (all areas but AREA 2 and AREA 3 in the two-dimensional case follow the rule).
Figure 12: Experimental vs. analytical NA and DA costs for join queries on uniform-like data, for all N_{R1} / N_{R2} combinations: (a) one-dimensional data; (b) two-dimensional data
Figure 13: Analytical DA costs for join queries on uniform-like data for varying cardinality N_{R1} or N_{R2}: (a) one-dimensional data; (b) two-dimensional data
Summarizing the results for join queries on random (uniform-like) data sets, we conclude the following:
(i) When no buffering scheme is adopted (i.e., the estimated number of node accesses NA is evaluated), the estimation is very accurate, since the relative error never exceeds 10%.
(ii) When a path buffer is adopted, the estimated cost of R_2 is always very close to the actual cost (relative error usually below 5%), while the estimated cost of R_1 is usually within 10%-15% of the experimental result. The accuracy of the estimation concerning R_2 (i.e., the tree that plays the role of the query set) is expected, since the existence of the buffer has been taken into account in Eq. 16, while Eq. 15 assumes that the buffer existence does not affect R_1 (i.e., the tree that plays the role of the data set), an assumption that lowers the accuracy of the estimation for the access cost of R_1. However, as already mentioned in subsection 3.3, the exception to the rule is hard to model.
4.2. NON-UNIFORM DATA
As explained in Section 3, a transformation of the actual density of each non-uniform data set is necessary
in order to reduce the impact of the uniformity assumption of the underlying analytical model from global
(i.e., assuming the global workspace) to local (i.e., assuming a small sub-area of the workspace). In other
words, instead of considering the average density D avg of a data set, the cost formulae (Eq. 12, Eq. 14, and
Eq. 17) consider the values of the density surface D(x,y) that correspond to the appropriate areas of the
workspace. For experimentation purposes, we extracted a density surface for each non-uniform data set
using a grid of 40 x 40 cells, i.e., a step of 2.5% of the workspace per axis.
Figure 14 illustrates average results for selection queries on (a) skewed and (b) real data sets. The analytical results are plotted with dotted lines and the experimental results for R*-trees with solid lines. The relative error is usually around 10%-15%, and this was the rule for all data sets that we tested.
Figure 14: Performance comparison for selection queries on non-uniform data (node accesses vs. query size in % of workspace per axis): (a) skewed data; (b) real data: LBeach (upper pair) and MGcounty (lower pair) (solid lines: experimental results, dotted lines: analytical results)
The flexibility of the proposed analytical model on non-uniform distributions of data, using the "density surface", is also evident from the results of our experiments. Figure 15 illustrates the results for typical point and range queries around nine representative points of the workspace on skewed and real data sets. The analytical results are plotted with dotted lines while the experimental results using R*-trees are plotted with solid lines. Note that the plotted irregularities are similar for both the analytical and the experimental values.
Figure 15: Performance comparison for selection queries around nine representative points (x_y) of the workspace (solid lines: experimental results, dotted lines: analytical results): (a) skewed data, point queries; (b) skewed data, range queries; (c) real data, point queries; (d) real data, range queries
The evaluation of the model for join queries also includes a wide set of experiments. Figure 16a
illustrates weighted average 5 costs (denoted by w.NA and w.DA) on two-dimensional skewed data sets for
varying density D. Apart from synthetic data sets we also used real ones from the TIGER database
[Bur91].
Figure 16b illustrates the corresponding experimental and analytical results. The labels lb and mg (lb' and mg') denote the actual (mirrored with respect to the x- and y- axes) LBeach and MGcounty data sets, respectively. In general, a relative error below 20% appears for all non-uniform data combinations.
5 The weighted average number of disk (and node) accesses is computed by multiplying each cost by a factor inversely proportional to the corresponding cardinality: w.DA = \sum_i w_i \cdot DA_i, where w_i = K / N_{Ri} and K is a normalizing constant, in order to achieve fair portions for both low- and high-populated indexes.
Figure 16: Experimental vs. analytical NA and DA costs for join queries on non-uniform data: (a) skewed data; (b) real data combinations (lb / mg, mg / lb, lb / lb', mg / mg')
Summarizing the results of our tests, we list in Table 2 the average relative errors of the actual results
compared to the predictions of our model.
Data sets      point queries    range queries    join queries
Random data    0%-10%           0%-5%            0%-10%
Skewed data    0%-15%           0%-10%           0%-20%
Real data      0%-15%           0%-20%           0%-20%
Table 2: Average relative error in estimating the access cost for selection and join queries
4.3. THE BENEFIT WHEN USING A PATH BUFFER
As discussed in subsection 3.3, the larger the buffer size in an actual database system, the lower the access
cost for a selection or join query. However, the benefit for spatial selection queries by using a simple path
buffer is not clearly measurable; according to a related work [LL98], when the buffer size is close to zero
then no significant performance gain is achieved. On the other hand, a path buffer clearly affects the
performance of join queries, as the gaps between the lines that represent NA and DA in Figures 12 and 16 indicate. This gap is illustrated in Figure 17, with NA values being fixed to 100% and hence DA values showing the relative performance gain.
A significant saving of 10%-30% appears for one-dimensional data. Recalling that all one-dimensional data sets of our experiments generated trees of equal height, one can observe that the smaller the 'query' tree, the higher the gain becomes. For two-dimensional data sets, the performance gain increases up to a 50% level. The above conclusion also stands in this case showing, however, a less uniform behavior, which is due to the different index heights.
Figure 17: Relative performance gain (DA relative to NA = 100%) when using a path buffer: (a) one-dimensional data; (b) two-dimensional data
5. RELATED WORK
In the survey of this section we present previous work on analytical performance studies for spatial queries
using R-trees. Several conclusions from those proposals have been used as starting points for consequent
studies and our analysis as well.
The earliest attempt to provide an analysis for R-tree-based structures appeared in [FSR87]. Faloutsos et al. proposed a model that estimates the performance of R-trees and R+-trees for selection queries, assuming, however, uniform distribution of data and packed trees (i.e., all the nodes of the tree are full of data). The formulae for the height h_{R1} of an R-tree R_1 as a function of its cardinality N_{R1} and fanout f (Eq. 5) and for the average size s_{R1,l+1,k} of a parent node as a function of the average size s_{R1,l,k} of the child nodes and the average distance t_{l,k} between two consecutive child nodes' projections (Eq. 8) were originally proposed in [FSR87].
Later, Kamel and Faloutsos [KF93] and Pagel et al. [PSTW93] independently presented a formula
(actually a variation of Eq. 2) that calculates the average number of page accesses in an R-tree index R 1
accessed by a query window q as a function of the average node sizes s_{R1,l,k} and the query window size q_k.
That formula assumes that the R-tree has been built and that the MBR of each node of the R-tree R 1 can be
measured. In other words, the proposed formula is qualitative, i.e., it does not really predict the average
number of disk accesses but, intuitively, presents the effect of three parameters, namely area, perimeter,
and number of objects, on the R-tree performance. In those papers, the influence of the node perimeters
was revealed, thus helping one to understand the efficiency of the R*-tree, which was the first R-tree
variant to take the node perimeter into consideration during the index construction procedure [PSTW93].
Faloutsos and Kamel [FK94] extended the previous formula to actually predict the number of disk
accesses using a property of the data set, called the fractal dimension. The fractal dimension fd of a data
set (consisting of points) can be mathematically computed and constitutes a simple way to describe non-uniform
data sets, using just a single number. The estimation of the number of disk accesses DA(R 1 , q, 1)
at level 1 (i.e., the leaf level) according to the model proposed in [FK94] (f is the average capacity - fanout
- of the R-tree nodes) is given by:
DA(R_1, q, 1) = \frac{N_{R1}}{f} \cdot \prod_{k=1}^{d} \left( s + q_k \right),  where  s = \left( \frac{f}{N_{R1}} \right)^{1/fd}
The formula constitutes the first attempt to model R-tree performance for non-uniform distributions of
data (including the uniform distribution as a special case: fd = d) superseding the analysis in [FSR87] that
assumed uniformity. However the model is applicable to point data sets only, which are not the majority in
real spatial applications.
Extending the work of [PSTW93], Pagel et al. [PSW95] proposed an optimal algorithm that establishes
a lower bound result for static R-tree performance. They have also shown by experimental results that the
best known static and dynamic R-tree variants, the packed R-tree [KF93] and the R*-tree respectively,
perform about 10%-20% worse than the lower bound. The impact of the three parameters (area,
perimeter, and number of objects) was further discussed in [PS96], where performance formulae for
various kinds of range queries, such as intersection, containment, and enclosure queries, were derived.
Since previous work used the number of nodes visited (NA in our analysis) as a metric of query
performance, the effect of an underlying buffering mechanism has been neglected, although it is a real cost
parameter in query optimization. Towards this direction, Leutenegger and Lopez [LL98] modified the cost
formula of [KF93] introducing the size of an LRU buffer. Comparison results on three different R-tree
algorithms [Gut84, RL85, KF93] showed that the analytical estimations were very close to the
experimental cost measures. A discussion on the appropriate number of R-tree levels to be pinned argued
that pinning may mostly benefit point queries, and even then only under special conditions.
Apart from cost estimation proposals for selection queries, Gunther's proposal [Gun93] was the earliest attempt to provide an analytical model for estimating the cost of spatial joins. Abstractions of tree indexes, called "generalization trees", were modeled in support of θ-joins. Implementation algorithms for general θ-joins were presented and evaluated for various probability distributions.
Later, Aref and Samet [AS94] proposed analytical formulae for the execution cost and the selectivity
of spatial joins, based on the R-tree analysis of [KF93]. The basic idea of that work was the consideration
of one data set as the underlying database and of the other data set as a source of query windows in order
to estimate the cost of a spatial join query based on the cost of range queries. Experimental results
showing the accuracy of the selectivity estimation formula were presented in that paper.
Huang et al. [HJR97] recently proposed a cost model for spatial joins using R-trees. Independently of [TSS98], it is the first attempt to provide an efficient formula for join performance by distinguishing two cases: zero and non-zero buffer management. Using the analysis of [KF93, PSTW93] as a starting point, it provides two formulae, one for each of the above cases. The efficiency of the proposed model was shown by comparing analytical estimations with experimental results for varying buffer size (with the relative error being around 10%-20%). However, contrary to [TSS98], the model proposed in [HJR97] assumes knowledge of R-tree properties, as [KF93, PSTW93] do.
Compared to related work, our model provides robust analytical formulae for selection and join cost estimation using R-trees, which:
(i) do not need knowledge of the underlying R-tree structure(s), since they are only based on primitive data properties (cardinality N and density D of the data set), and
(ii) are shown to be accurate by a wide set of experiments on both uniform-like and non-uniform data sets consisting of either point or non-point objects.
6. CONCLUSION
Selection and join queries are the fundamental operations supported by a DBMS. In the spatial database
literature, there exist several access methods for the efficient implementation of both operations mainly
using the R-tree spatial data structure. However, for query optimization purposes, efficient cost models
should be also available in order to make accurate cost estimations under various data distributions
(uniform and non-uniform ones).
In this paper, we presented a model that predicts the performance of R-tree-based structures for selection (point or range) queries and extended this model to support join queries. The proposed cost formulae are functions of data properties only, namely, their number N and density D in the workspace, and, therefore, can be used without any knowledge of the R-tree index properties. They are applicable to point or non-point data sets and, although they make use of the uniformity assumption, they are also adaptive to non-uniform (e.g. skewed) distributions, which usually appear in real applications.
Experimental results on synthetic and real [Bur91] data sets showed that the proposed analytical model
is very accurate, with the relative error being usually around 10%-15% when the analytical estimate is
compared to cost measures using the R*-tree, one of the most efficient R-tree variants. In addition, for join
query processing, a path buffer was considered and the analytical formula was adapted to support it. The
performance saving due to the existence of such a buffering mechanism was highly affected by the sizes
(and height) of the underlying indexes and reached up to 50% for two-dimensional data sets. The proposed
formulae and guidelines could be useful tools for spatial query processing and optimization purposes,
especially when complex spatial queries are involved.
In this work we focused on the overlap operator. Any spatial operator could be used instead: for instance, a topological operator (e.g. meet, covers, contains, etc.) defined by Egenhofer and Franzosa in [EF91], any of the 13^n possible directional operators between two n-dimensional objects [All83, PT97], or a distance operator (close, far, etc.), perhaps involving fuzzy information. We have already
adapted the model for selection queries in order to estimate the cost of (a) direction relations between
spatial objects in GIS applications [TPSS98] and (b) spatiotemporal relations between objects in large
Multimedia applications [TVS96], by handling such relations as range queries with an appropriate
transformation of the query window q. We are currently working on appropriate modifications in order to
support join queries as well.
A second issue that arises is whether the overlap operator is representative for the accuracy of a cost
model. Recent research [PS96] has shown that range (window) queries can be widely regarded as
representative for other (e.g., enclosure or containment) queries for a wide range of region sizes. By
considering that work on range queries as a background we could also study the case of join queries and a
wide set of spatial operators.
REFERENCES
"Maintaining Knowledge about Temporal Intervals"
"A Cost Model for Query Optimization Using R-Trees"
"Efficient Processing of Spatial Joins Using R-trees"
"The R * -tree: an efficient and robust access method for points and rectangles"
"Multi-Step Processing of Spatial Joins"
Bureau of the Census
"A Relational Model of Data for Large Shared Data Banks"
"The Ubiquitous B-Tree"
"Point Set Topological Relations"
Searching Multimedia Databases by Content
"Recovering Information from Summary Data"
"Beyond Uniformity and Independence: Analysis of R-trees Using the Concept of Fractal Dimension"
"The BANG file: a new kind of grid file"
"Analysis of Object Oriented Spatial Access Methods"
"Multidimensional Access Methods"
"Efficient Computations of Spatial Joins"
"R-trees: a dynamic index structure for spatial searching"
"An Introduction to Spatial Database Systems"
"A Cost Model for Estimating the Performance of Spatial Joins Using R-trees"
"The LSD tree: spatial access to multidimensional point and non point objects"
"On Packing R-trees"
Modern Database Systems: The Object Model
The Art of Computer Programming
"The Effect of Buffering on the Performance of R-Trees"
"Redundancy in Spatial Databases"
Computational Geometry
"Are Window Queries Representative for Arbitrary Range Queries?"
"Towards an Analysis of Range Query Performance"
"Window Query-Optimal Clustering of Spatial Objects"
"Spatial Relations, Minimum Bounding Rectangles, and Spatial Data Structures"
"Cubetree: Organization of and Bulk Updates on the Data Cube"
"Direct Spatial Search on Pictorial Databases Using Packed R-trees"
The Design and Analysis of Spatial Data Structures
The Next Wave
"The R+-tree: a dynamic index for multidimensional objects"
"Direction Relations and Two-Dimensional Range Queries: Optimisation Techniques"
"A Model for the Prediction of R-tree Performance"
"Cost Models for Join Queries in Spatial Databases"
"Spatio-Temporal Indexing for Large Multimedia Applications"
"Faster Methods for Random Sampling"
"Random Sampling with Reservoir"
Yufei Tao , Jimeng Sun , Dimitris Papadias, Analysis of predictive spatio-temporal queries, ACM Transactions on Database Systems (TODS), v.28 n.4, p.295-336, December
Hanan Samet, Decoupling partitioning and grouping: Overcoming shortcomings of spatial indexing with bucketing, ACM Transactions on Database Systems (TODS), v.29 n.4, December 2004
Hanan Samet, Object-based and image-based object representations, ACM Computing Surveys (CSUR), v.36 n.2, p.159-217, June 2004 | query optimization;spatial databases;access methods;r-trees;cost models |
628041 | The Effect of Buffering on the Performance of R-Trees. | AbstractPast R-tree studies have focused on the number of nodes visited as a metric of query performance. Since database systems usually include a buffering mechanism, we propose that the number of disk accesses is a more realistic measure of performance. We develop a buffer model to analyze the number of disk accesses required for spatial queries using R-trees. The model can be used to evaluate the quality of R-tree update operations, such as various node splitting and tree restructuring policies, as measured by query performance on the resulting tree. We use our model to study the performance of three well-known R-tree loading algorithms. We show that ignoring buffer behavior and using number of nodes accessed as a performance metric can lead to incorrect conclusions, not only quantitatively, but also qualitatively. In addition, we consider the problem of how many levels of the R-tree should be pinned in the buffer. | Introduction
R-trees [3] are a common indexing technique for spatial data and are widely used in spatial and multi-dimensional
databases. Typical applications include computer-aided design, geographic information
systems, computer vision and robotics, multi-keyed indexing for traditional databases, temporal
databases, and scientific databases.
A significant amount of effort has been devoted to the development of better R-tree construction
algorithms [9, 1, 8, 4, 5, 6]. Most of this work has focused on proposing and comparing a new
algorithm to previous ones, but little work has been done on methodology for comparing algorithms.
Notable exceptions are the work of Kamel and Faloutsos [4] and Theodoridis and Sellis [10].
In [4] the authors develop an analytical model for prediction of query performance. The model
provides good insight into the problem, especially by establishing a quantitative relationship between
performance and the total area and perimeter of the minimum bounding rectangles (MBRs) of the
tree nodes, but suffers from one major drawback: the primary objective function for comparison is
the number of nodes visited. In real databases some portion of the tree is buffered in main memory.
This buffering of portions of the tree can significantly affect performance. Consequently, we propose
that a better metric is the average number of disk accesses required to satisfy a query. The work of
Theodoridis and Sellis [10] provides a fully analytical model not requiring the minimum bounding
rectangles of the R-tree nodes as an input. This work also focuses on the number of nodes visited,
but our model could be coupled with their model as we have done with the model of Kamel and
Faloutsos.
In order to derive a model based on the number of disk accesses we borrow techniques developed
by Bhide et al [2] (in the context of modeling databases with uniform access within each of several
partitions) and develop a new buffer model. Our model can be used to evaluate the quality of any
R-tree update operation, such as node splitting policies [3] or packing algorithms [8, 4] as measured
by query performance of the resulting tree. The model is very accurate and simple to understand,
making it easy for researchers to integrate it into their studies. Furthermore, it is not only applicable
to R-trees, but can easily be modified to model B-tree performance.
The main contributions of this paper are: first, the buffer model methodology; second, a demonstration
of the importance of considering a buffer; third, insight into how large a buffer should be used; and fourth, a study of the effect of pinning the top levels of the R-tree in the buffer. We emphasize
that the inclusion of a buffer not only changes the quantitative performance of the algorithms, but
in some cases changes the qualitative ordering of the algorithms.
Our buffer model is an analytic model based on simple probability theory, but our overall
performance model is a hybrid model. We start by using a packing algorithm to create actual R-trees
for various data sets. Then, we compute the minimum bounding rectangles of tree nodes and
use these as input to our buffer model.
The rest of the paper is organized as follows. In Section 2 we provide background information on
R-trees and describe three loading algorithms that were chosen to test our results. Since this choice
is by no means exhaustive, we emphasize that the purpose of this paper is not to draw irrevocable
conclusions about the best loading algorithm (although it allows comparisons between the chosen
ones), but rather to demonstrate the need and utility of the buffer model. In Section 3 we present
the query performance models of Kamel and Faloutsos, our modification of the region query model,
and our new buffer model. In Section 4 we present the validation of our model. Section 5 contains
results from our model experiments and Section 6 concludes.
2 Overview of R-tree and Loading Algorithms
In this section we provide a brief overview of the R-tree and loading algorithms. A detailed understanding
of the loading algorithms is useful but not necessary for the rest of this paper. The reader
interested in a more detailed description is referred to the articles listed in the bibliography.
2.1 R-trees
An R-tree is a hierarchical data structure derived from the B-tree and designed for efficient execution
of intersection queries. An R-tree stores a collection of rectangles which can change in time through
insertion and deletion of rectangles. Arbitrary geometric objects can also be handled by representing
each object by the smallest upright rectangle enclosing it. R-trees generalize easily to dimensions
higher than two. For notational simplicity we describe only the 2D case.
Each node of the R-tree stores a maximum of n entries. Each entry consists of a rectangle R
and a pointer P . At the leaf level, R is the bounding box of an actual object pointed to by P . At
internal nodes, R is the minimum bounding rectangle (MBR) of all rectangles stored in the subtree
pointed to by P . Note that a path along the tree corresponds to a sequence of nested rectangles, the
last of which is an actual data object. Note also that rectangles at any level may overlap and that the R-tree for a set of rectangles is by no means unique.

Figure 1: A sample R-tree. Input rectangles are shown solid.
Figure 1 illustrates a 3-level R-tree assuming that a maximum of 4 rectangles fit per node. We
assume that the levels are numbered 0 (root), 1, and 2 (leaf level). There are 64 rectangles represented
by the small dark boxes. The 64 rectangles are grouped into 16 leaf level pages, numbered 1 to 16.
Note that the MBR enclosing each leaf node is the smallest box that fully contains the rectangles
within the node. These MBRs serve as rectangles to be stored at the next level of the tree. For
example, leaf level nodes 1 through 4 are placed in node 17 in level 1. The MBR of node 17 (and
nodes 18,19,20) is purposely drawn slightly larger than needed for clarity. The root node contains
the 4 level 1 nodes: 17, 18, 19, and 20.
To perform a query Q, all rectangles (internal or not) that intersect the query region must be
retrieved. This is accomplished with a simple recursive procedure that starts at the root and possibly
follows several paths along the tree. A node is processed by first retrieving all rectangles stored at
that node that intersect Q. If the node is an internal node the subtrees pointed to by the retrieved
rectangles (if any) are processed recursively. Otherwise, the node is a leaf node and the retrieved
rectangles are simply reported. For illustration, consider the query Q in the example of Figure 1.
After processing the root node, we determine that nodes 19 and 20 of level 1 must be searched. The
search then proceeds in both of these nodes. In both cases it is determined that the query region does
not intersect any rectangles within the two nodes and each of these two subqueries are terminated.
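The recursive search just described can be sketched in a few lines. The following Python sketch is illustrative only; the node layout and all names are assumptions, not code from an actual R-tree implementation.

class Node:
    def __init__(self, entries, is_leaf):
        self.entries = entries    # list of (MBR, child Node) or (MBR, data object) pairs
        self.is_leaf = is_leaf

def intersects(r, q):
    # True if axis-aligned rectangles r and q overlap; each is ((x1, y1), (x2, y2)).
    (rx1, ry1), (rx2, ry2) = r
    (qx1, qy1), (qx2, qy2) = q
    return rx1 <= qx2 and qx1 <= rx2 and ry1 <= qy2 and qy1 <= ry2

def search(node, q, result):
    # Report all data objects whose MBR intersects the query rectangle q.
    for mbr, entry in node.entries:
        if intersects(mbr, q):
            if node.is_leaf:
                result.append(entry)         # entry is a data object
            else:
                search(entry, q, result)     # entry is a child node; follow the subtree
    return result

leaf = Node([(((0.1, 0.1), (0.2, 0.2)), "obj1")], is_leaf=True)
root = Node([(((0.0, 0.0), (0.5, 0.5)), leaf)], is_leaf=False)
print(search(root, ((0.15, 0.15), (0.3, 0.3)), []))   # -> ['obj1']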
The R-tree shown in Figure 1 is fairly well structured. Inserting these same rectangles into an
R-tree based on the insertion algorithms of Guttman [3] would likely result in a less well structured
tree. Algorithms to create well structured trees have been developed and are described in Section 2.2.
These algorithms attempt to cluster rectangles so as to minimize the number of nodes visited while
processing a query.
For the rest of the paper we assume that exactly one node fits per page, and hereafter we use
the two terms interchangeably.
2.2 Loading Algorithms
In this section we describe three loading algorithms. The first is based on inserting a tuple at a time,
using one of the insertion algorithms proposed by Guttman [3].
Tuple-At-a-Time
This algorithm just inserts one tuple at a time into the R-tree using the quadratic split heuristic
of Guttman [3]. Note that the resultant R-tree has worse space utilization and structure relative
to the two algorithms discussed next. Thus, more node and disk accesses may be necessary to
satisfy a query.
The following two packing algorithms use a similar framework, based on preprocessing the entire
data file so as to determine how to group rectangles into nodes. In the following we assume that
a data file of R rectangles will be stored in a tree with up to n rectangles per node. The whole
processes is similar to building a B-tree out of a collection of keys from the leaf level up [7].
General Algorithm:
1. Preprocess the data file so that the R rectangles are ordered in dR=ne consecutive groups of n
rectangles, where each group of n are intended to be placed in the same leaf level node. Note
that the last group may contain less than n rectangles.
2. Load the dR=ne groups of rectangles into pages and output the tuples (MBR, page-number)
for each leaf level page into a temporary file. The page-numbers are needed for setting child
pointers in the nodes one level up.
3. Recursively pack these MBRs into nodes at the next level and up until only the root node
remains.
The algorithms differ only in how the rectangles at each level are ordered.
Nearest-X
This algorithm was proposed in [8]. The rectangles are sorted by x-coordinate. No details
are given in the paper so we assume that x-coordinate of the rectangle's center is used. The
rectangles are then packed into the nodes using this ordering.
Hilbert Sort (HS):
A fractal based algorithm was proposed in [4]. The algorithm orders the rectangles using the
Hilbert (fractal) space filling curve. The center points of the rectangles are sorted based on
their distance from the origin as measured along the Hilbert Curve. This determines the order
in which the rectangles are placed into the nodes of the R-Tree.
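Both packing algorithms fit the general framework above and differ only in the sort key applied at each level. The Python sketch below is illustrative (all names are assumptions); it shows the Nearest-X key, and a Hilbert Sort variant would substitute a key returning the Hilbert index of the rectangle's center (the Hilbert computation is omitted here). Only node MBRs are tracked, since those are what the analytical models take as input.

def center(rect):
    (x1, y1), (x2, y2) = rect
    return ((x1 + x2) / 2.0, (y1 + y2) / 2.0)

def nearest_x_key(rect):
    return center(rect)[0]                 # NX: order by x-coordinate of the center

def mbr(rects):
    # Minimum bounding rectangle of a group of rectangles.
    xs1, ys1 = zip(*(r[0] for r in rects))
    xs2, ys2 = zip(*(r[1] for r in rects))
    return ((min(xs1), min(ys1)), (max(xs2), max(ys2)))

def pack_level(rects, n, key):
    # Order the rectangles, place every n consecutive ones in a node, and
    # return the MBRs of the resulting nodes (the rectangles one level up).
    ordered = sorted(rects, key=key)
    return [mbr(ordered[i:i + n]) for i in range(0, len(ordered), n)]

def pack_tree(rects, n, key=nearest_x_key):
    # Pack levels bottom-up until a single root MBR remains;
    # returns the list of node MBRs at every level, leaf level first.
    levels = []
    current = list(rects)
    while len(current) > 1:
        current = pack_level(current, n, key)
        levels.append(current)
    return levels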
3 Model Description
Kamel and Faloutsos [4] introduced an analytical model for computing the average response time of
a query as a function of the geometric characteristics of the R-tree. Their model does not consider
the effect of a buffer and performance is measured by the number of R-tree nodes accessed. In
practice, query performance is mainly affected by the time required to retrieve nodes touched by
the query which do not reside in the buffer, as the CPU time required to retrieve and process buffer
resident nodes is usually negligible. We start by describing the model of Kamel and Faloutsos. We
then modify this model to better fit our definition for uniformly distributed queries of a fixed size.
Finally, we extend the model to take into account the existence of a buffer, including how to handle
the pinning in the buffer of the top few levels of the R-tree.
In this section we consider a 2-dimensional data set consisting of rectangles to be stored in an R-tree
T with H+1 levels, labeled 0 through H. All input rectangles have been normalized to fit within the unit square. We assume that queries are rectangles Q of size q_x × q_y uniformly distributed over the unit square. Note that a point query corresponds to the case q_x = q_y = 0. Our description concentrates on 2D. Generalizations to higher dimensions are straightforward.
Throughout this section we will use the following notation:
- number of pages at the ith level of T
- total number of pages in T, i.e., the sum of the page counts over all levels
- R_ij: the jth rectangle at the ith level of T
- x_ij, y_ij: the x-extent and y-extent of R_ij
- A_ij: the area of R_ij, i.e., A_ij = x_ij · y_ij
- A^Q_ij: the probability that R_ij is accessed by query Q
- B_ij: the number of accesses to R_ij
- the sum of the areas of all MBRs in T
- the sum of the x-extents of all MBRs in T
- the sum of the y-extents of all MBRs in T
- N: the number of queries performed so far
- the expected number of queries required to fill the buffer
- D(N): the number of distinct pages (at all levels) accessed in N queries
- the expected number of pages (buffer resident or not) of T accessed while performing a query of size q_x × q_y
- the expected number of disk accesses while performing a query of size q_x × q_y
3.1 The Model of Kamel and Faloutsos
Kamel and Faloutsos consider a bufferless model where performance is measured by number of nodes
accessed (independent of whether they currently reside in the buffer). They observe that for uniform
point queries the probability of accessing R ij is just the area of R ij , namely, A ij . They point out
that the level of T in which R ij resides is immaterial as all rectangles containing Q (and only those)
must be retrieved. Accordingly, for a point query the expected number of nodes retrieved, as derived by Kamel and Faloutsos, is

    Σ_i Σ_j A_ij        (1)

which is the sum of the areas of all rectangles (both leaf level MBRs as well as MBRs of internal nodes). (We have modified the notation of [4] to make it consistent with the notation used in this paper.)
We now turn our attention to region queries. Let ⟨(a, b), (c, d)⟩ denote an upright rectangle with bottom left and top right corners (a, b) and (c, d), respectively. Consider a rectangular query Q of size q_x × q_y and let Q_tr denote its top right corner. Q intersects a rectangle R = ⟨(x1, y1), (x2, y2)⟩ if and only if Q_tr is inside the extended rectangle R' = ⟨(x1, y1), (x2 + q_x, y2 + q_y)⟩, as illustrated in Figure 2.

Figure 2: (a) Two data rectangles and region query Q. (b) Corresponding extended rectangles and equivalent point query Q_tr.
Kamel and Faloutsos infer that the probability of accessing R while performing Q is the area of R', as the region query Q is equivalent to a point query Q_tr where all rectangles in T have been extended as outlined above. Thus, the expected number of nodes retrieved (as derived in [4]) is:

    Σ_i Σ_j (x_ij + q_x)(y_ij + q_y)        (2)

Equation 2 illustrates the fact that a good packing algorithm should cluster rectangles so as to minimize both the total area and total perimeter of the rectangles in internal nodes of the R-tree. For point queries, on the other hand, q_x = q_y = 0, so minimizing the total area is enough.
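For concreteness, both estimates can be computed directly from the list of all node MBRs. The sketch below (Python, names assumed, not code from [4]) evaluates Equation 2, which reduces to Equation 1 when q_x = q_y = 0.

# Expected node accesses under the Kamel-Faloutsos model (Equations 1 and 2).
# mbrs: list of ((x1, y1), (x2, y2)) for every node of the tree, at all levels.

def expected_node_accesses(mbrs, qx=0.0, qy=0.0):
    total = 0.0
    for (x1, y1), (x2, y2) in mbrs:
        total += (x2 - x1 + qx) * (y2 - y1 + qy)   # reduces to the node area when qx = qy = 0
    return total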
3.2 A Modification to the Model for Region Queries
Consider the set of uniformly distributed region queries of size q_x × q_y. Such a query is a rectangle of the prescribed dimensions that fits entirely within the unit square (as otherwise, a rectangle would be equivalent to a possibly much smaller rectangle).
There are two potential problems with using Equation 2 for analyzing the performance of queries
of a fixed size:
1. For uniformly distributed rectangular queries of size q_x × q_y the top right corner, Q_tr, of the query region Q, cannot be an arbitrary point inside the unit square if the entire query region is to fit within the unit square. Rather, Q_tr must be inside the box U' = ⟨(q_x, q_y), (1, 1)⟩.

2. The probability of accessing a rectangle is not the area of the extended rectangle R'_ij, since this value can be bigger than one. Rather, the access probability is the percentage of U' covered by the rectangle R'_ij:

    A^Q_ij = Area(R'_ij ∩ U') / Area(U')        (3)

Figure 3: (a) The domain of Q_tr for a query of size 0.3 × 0.3 is U' (the area not shaded). (b) Three data rectangles. The probability of accessing R_i is the area of R'_i ∩ U' divided by the area of U'. Formula (2) uses the area of R'_i (shown dashed) instead.
The difference between Formulas 2 and 3 is small if Q is small but becomes much larger as the
size of Q increases. This is illustrated in Figure 3, where a region query of size 0.3 × 0.3 is considered. Note that Q_tr must fall inside U', i.e., outside the shaded area of Figure 3a. Three rectangles (R_1, R_2, and R_3) of sizes 0.1 × 0.2, 0.1 × 0.1, and 0.05 × 0.05, respectively, are shown in Figure 3b. Only for R_3 does Equation 2 compute the same probability as Equation 3. The probabilities that Q will access R_1 and R_2 under our modified model differ from the 20% and 16%, respectively, obtained with the original model of Kamel and Faloutsos.
The difference is, arguably, a matter of interpretation, but we point out that the original model
predicts that the probability of accessing rectangle ⟨(0.1, 0.1), (0.9, 0.9)⟩ with a query of size 0.3 × 0.3 is 121%! With our modification the probability of accessing this rectangle is 100%. We point out that under our model the probability of access of input rectangles is no longer uniform: rectangles within the shaded region of Figure 3a are less likely to be accessed than rectangles inside U'. An alternative that guarantees uniform access probability is to allow Q_tr to fall uniformly inside the entire unit square (which still requires a correction to the model of Kamel and Faloutsos, namely clipping the extended rectangles to the unit square so that no access probability exceeds one). In that case, however, query windows could partially lie outside the area of interest, thus effectively reducing the actual query size. Since we want the model
to measure performance as a function of a specific query size, in the rest of the paper we use our
former interpretation, even though this means that data along the edges of the data set might be
accessed less frequently.
In both our buffer model as well as in the experiments performed to validate it, access probabilities
were computed by using Equation 3.
From now on we use A^Q_ij to denote the probability that rectangle R_ij will be accessed while performing query Q. This probability is computed by applying Formula 3. Note that if Q is a point, A^Q_ij is simply A_ij, the area of R_ij.
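The modified access probability of Equation 3 amounts to clipping the extended rectangle against U' and normalizing by the area of U'. The following Python sketch is illustrative only (names assumed, not the authors' code).

# Access probability of one node under the modified model (Equation 3).
# The node MBR is ((x1, y1), (x2, y2)); the query has size qx x qy.

def access_probability(mbr, qx, qy):
    (x1, y1), (x2, y2) = mbr
    # Extended rectangle R': the set of top-right query corners that hit the MBR.
    ex1, ey1, ex2, ey2 = x1, y1, x2 + qx, y2 + qy
    # Domain U' of the top-right corner: [qx, 1] x [qy, 1].
    ux1, uy1, ux2, uy2 = qx, qy, 1.0, 1.0
    # Area of R' clipped to U', divided by the area of U'.
    w = max(0.0, min(ex2, ux2) - max(ex1, ux1))
    h = max(0.0, min(ey2, uy2) - max(ey1, uy1))
    u_area = (ux2 - ux1) * (uy2 - uy1)
    return (w * h) / u_area if u_area > 0 else 1.0

# For a point query (qx = qy = 0) this is simply the area of the node MBR.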
3.3 A Buffer Model
Bhide et al [2] analyze the LRU buffer replacement policy for databases consisting of a number of
partitions with uniform page access within each partition. While modeling performance during buffer
warm-up they observed that the buffer hit probability at the end of the warm-up period is a good
estimator of the steady state buffer hit probability.
We conjectured that similar behavior would occur in the context of R-trees when applying
the LRU replacement policy, namely, that the steady state buffer hit probability is virtually the
same as the buffer hit probability when the buffer first becomes full. Our conjecture was verified
experimentally as discussed in Section 4.
Under uniformly distributed point queries, the probability of accessing rectangle R ij while performing
a query is A_ij. Accordingly, the probability that R_ij is not accessed during the next N queries is (1 - A_ij)^N, and the expected number of distinct pages accessed in N queries is

    D(N) = Σ_i Σ_j (1 - (1 - A_ij)^N).

Note that as N grows, D(N) approaches the total number of pages in T (which may or may not be bigger than B). The buffer, which is initially empty, first becomes full after performing N queries, where N is the smallest integer that satisfies D(N) >= B. The value of N can be determined by a simple binary search:
ComputeN()
    high <- 1;
    while D(high) < B do
        high <- 2 * high;
    low <- high / 2;
    while high - low > 1 do
        N <- (low + high) / 2;
        if (D(N) < B) then
            low <- N;
        else
            high <- N;
    N <- high;
    return N;
Now, while the buffer is not full the probability that R_ij is in the buffer is equal to P[B_ij >= 1] = 1 - (1 - A_ij)^N. The probability that a random query requires a disk access for R_ij is therefore A_ij · P[B_ij = 0] = A_ij (1 - A_ij)^N. Since the steady state buffer hit probability is approximately the same as the buffer hit probability after N queries, the expected number of disk accesses for a point query at steady state is

    Σ_i Σ_j A_ij (1 - A_ij)^N.

The above derivation also holds for region queries provided that A^Q_ij is used instead of A_ij (see Formula 3).

We summarize the results of this section below:

The expected number of disk accesses required to satisfy a uniformly distributed random query Q of size q_x × q_y is:

    Σ_i Σ_j A^Q_ij (1 - A^Q_ij)^N

where A^Q_ij is the probability that Q intersects R_ij as given by Equation 3.
Finally, we point out that it is easy to extend the above results to model a buffer management
policy that pins the top few levels of the R-tree in the buffer: simply reduce the number of buffer
pages by the number of pages in these pinned levels and omit the top levels from the model.
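Taken together, the model can be evaluated with a short program: given the per-page access probabilities from Equation 3, find the number of queries that fills the buffer (mirroring ComputeN above) and sum the per-page miss terms. The Python sketch below is illustrative only (all names are assumptions); the pinning variant is handled by excluding the pinned pages and shrinking the buffer.

# Sketch of the buffer model. probs holds one access probability per R-tree page
# (computed, e.g., with the Equation 3 sketch); buf_size is the buffer size in pages.
# Assumes every page has a nonzero access probability.

def distinct_pages(probs, n_queries):
    # D(N): expected number of distinct pages touched by n_queries queries.
    return sum(1.0 - (1.0 - p) ** n_queries for p in probs)

def queries_to_fill_buffer(probs, buf_size):
    # Smallest N with D(N) >= buf_size: doubling phase followed by binary search.
    high = 1
    while distinct_pages(probs, high) < buf_size:
        high *= 2
    low = high // 2
    while high - low > 1:
        mid = (low + high) // 2
        if distinct_pages(probs, mid) < buf_size:
            low = mid
        else:
            high = mid
    return high

def expected_disk_accesses(probs, buf_size, pinned=0):
    # pinned pages stay in the buffer; their probabilities must not appear in probs.
    buf = buf_size - pinned
    if buf <= 0:
        return sum(probs)              # no buffer left for the unpinned pages
    if len(probs) <= buf:
        return 0.0                     # the unpinned pages all fit: no steady-state misses
    n = queries_to_fill_buffer(probs, buf)
    return sum(p * (1.0 - p) ** n for p in probs)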
4 Model Validation
We validated the model by comparing it with simulation. The simulation models an LRU buffer
and, like the model, takes as input the list of the MBRs for all nodes at all levels. It then generates
random point queries in the unit square and checks each node's MBR to see if it contains the point. If the MBR does contain the point the node is requested from the buffer pool. If the node is not in the buffer pool, the least recently used node in the buffer is pushed out and the new node put on the top of the LRU stack. Note that the simulation is accurate in the sense that an R-tree implementation retrieves all and only those rectangles (internal or not) that intersect the region query. Confidence intervals were collected using batch means with 20 batches of 1,000,000 queries each, resulting in confidence intervals of less than 3% at a 90% confidence level.

bufsize |  simul   model  % dif |  simul   model  % dif |  simul   model  % dif
     50 | 2.3120  2.2682   1.89 | 2.2060  2.1651   1.85 | 1.5892  1.5807   0.53
    100 | 1.7008  1.6862   0.86 | 1.6288  1.6166   0.75 | 1.2336  1.2275   0.49
    200 | 1.1627  1.1599   0.24 | 1.1194  1.1172   0.20 | 0.8720  0.8710   0.11

Table 1: Validation: average number of disk accesses per query for model and simulation (one column group per R-tree).
We ran comparisons for three different R-trees and 6 different buffer sizes for each tree. Each
of the R-trees has 1668 nodes, but the MBRs of the nodes are different, as produced by the three
different packing algorithms. In Table 1 we report the average number of pages required from disk
per point query as predicted by the simulation, model, and the percent difference (relative to the
simulation). All of the results are within 2%, which is less than the confidence intervals returned
from the simulation. Other comparison experiments not shown resulted in agreement within 2% or
less. Simulation of region queries gave similar results.
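For reference, an LRU simulation of the kind used for validation can be written compactly. The Python sketch below is illustrative (names assumed) and simplifies the order in which pages are requested; as noted above, the set of pages touched by a query is exactly the set of nodes whose MBRs contain the query point, so a flat scan over the node MBRs touches the same pages as a tree traversal.

# Sketch of the validation simulation: LRU buffer fed by random point queries.
import random
from collections import OrderedDict

def simulate_point_queries(mbrs, buf_size, n_queries, seed=0):
    rng = random.Random(seed)
    lru = OrderedDict()                # page id -> None, most recently used last
    misses = 0
    for _ in range(n_queries):
        px, py = rng.random(), rng.random()
        for page_id, ((x1, y1), (x2, y2)) in enumerate(mbrs):
            if x1 <= px <= x2 and y1 <= py <= y2:      # node MBR contains the query point
                if page_id in lru:
                    lru.move_to_end(page_id)           # buffer hit
                else:
                    misses += 1                        # buffer miss: fetch the page
                    lru[page_id] = None
                    if len(lru) > buf_size:
                        lru.popitem(last=False)        # evict least recently used page
    return misses / n_queries                          # average disk accesses per query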
5 Model Results
In this section we present performance results as predicted by our model. We consider two types
of queries: point queries and region queries. A point query specifies a point in the unit square and
finds all rectangles that contain the point.
Region queries specify a region of a given size and find all rectangles that intersect with this
region. Each query is a rectangle of size q x \Theta q y whose upper right corner is contained in region U 0
as described in Section 3.2.
5.1 Need For Consideration of Buffer Impact
Figure 4 plots the number of disk accesses versus buffer size for the Long Beach data set extracted
from the TIGER system of the U. S. Bureau of Census, assuming R-trees with 100 rectangles per
node. The left plot is for point queries, the right for region queries. Consider the top two curves of the
right plot. For small buffer sizes the TAT algorithm requires fewer disk access than the NX algorithm.
At a buffer size of 200 the performance of the two algorithms crosses and the NX algorithm becomes
the better algorithm. Hence, ignoring buffering would result in the incorrect conclusion that TAT is
better than NX, thus underscoring the importance of including buffer effects in comparison studies.
A second experiment shows that not considering the buffer impact can introduce significant
errors. In Figure 5 we present results for synthetic data. Uniformly distributed data sets were
created containing between 10,000 and 300,000 rectangles. For each rectangle the lower left corner
was uniformly distributed over the unit square. All rectangles are squares, where the length is chosen
uniformly between 0 and epsilon. The value of epsilon is fixed for all data set sizes and is equal to 2 * sqrt(0.25/10000).
Thus, for a 10,000 rectangle data set the sum of the rectangle areas is equal to 0.25 of the unit
square, for 100,000 rectangles the total area equals 2.5 times the unit square. This is similar to the
experimental methodology used in [4].
The top left graph plots the number of nodes visited (i.e. no buffering considered) versus
the number of rectangles in the data set. The top right graph plots the number of disk accesses
versus data set size for a buffer size of 10, and the bottom graph for a buffer size of 300. Ignoring
buffer impact (top left) leads to the conclusion that querying an R-tree of 300,000 rectangles is no
more expensive than querying an R-tree of 25,000 rectangles. This could cause a query optimizer
to produce a poor query plan. Once buffering is considered, the fact that larger trees are more
expensive is evident.
5.2 Choosing Buffer Size for R-trees
In this section we study the reduction in disk accesses obtained from increasing the buffer size. Since
main memory is a valuable resource, some insight into the gains from allocating more buffer space is
needed. Consider Figure 4 again where the number of disk accesses versus buffer size is plotted for
both point and region queries run against the Long Beach tiger data set. The data set has 53,145
rectangles (actually line segments). A node size of 100 rectangles is assumed resulting in the HS and
NX algorithms having 532 pages at the leaf level, 6 pages at level 1, and 1 root node. Buffer size is
varied from 2 to 500 pages (from 0.4% to 92.8% of the R-tree).
The left plot is for point queries. The curves plotted from top to bottom are TAT, NX, and HS.
The R-tree produced by the TAT algorithm is poorly structured, and, as a result, seems to benefit
significantly from each increase in buffer size. The HS tree is much better structured and, while
experiencing a halving of the number of required disk accesses for a buffer size of 10%, additional
buffer increases only help modestly. The NX algorithm does not experience as much of a reduction
in needed disk accesses for small buffer sizes as the HS algorithm. We hypothesize that the better
the R-tree structure, the more capable it is to capitalize on small amounts of buffer for point queries,
whereas the worse the R-tree structure, the more linear the reduction in disk accesses.
data size | level 1 (root) | level 2 | level 3 | level 4

Table 2: Number of nodes per level

The right plot is for 1% region queries. Note that for region queries none of the curves have a well defined "knee". Thus, based on this experiment it appears it is possible that when executing
region queries reductions in disk accesses for increasing the buffer size is more linear than for point
queries.
5.3 Choosing the Number of Levels to Be Pinned
Past buffer management studies have shown that for B-trees the root and maybe the first level should
be pinned in the buffer pool. Pinning nodes decreases the buffer size available for other pages, but
guarantees that the pinned pages will be in the buffer. We present experiments to investigate what
gains in performance that can be expected from pinning R-tree nodes, and how many levels should
be pinned.
For the experiments in this section we wanted to get deeper R-trees while keeping the experiment
time reasonable. To do this we created synthetic data sets with 40,000 - 250,000 rectangles and used
R-tree nodes of size 25. This resulted in 4 level R-trees with numbers of nodes per level as shown in
Table
2.
In Figure 6 we plot the number of disk accesses for point queries versus the data size for three
buffer sizes and for trees created by the HS algorithm. Results for the other algorithms were similar.
The plot on the top left is for a buffer of 500 pages; that on the top right is for a buffer of 1000 pages;
and the bottom graph is for a buffer of 2000 pages. The number of disk access remains roughly the
same for no pinning, pinning the first (root) level, and pinning the first two levels. All three scenarios
are plotted as the single top line in all three graphs. The bottom line is for pinning the third level
in addition to the first two levels.
As a general rule of thumb, pinning a level makes a big difference if the total number of pages
pinned is within a factor of two of the buffer size, but when the total number of pages pinned is
less than one third of the buffer size, only marginal benefits are seen. This is because for a smaller
relative number of pinned pages the LRU replacement policy succeeds in keeping the top levels in
the buffer without explicit pinning. For example, for a buffer size of 500 and a data size of 250,000
rectangles, pinning three levels pins 417 pages. This results in 53% fewer disk accesses. When the
data size is 80,000 rectangles, 135 pages are pinned and the saving is only 4%. For a buffer size
of 2000, pinning of the first three levels makes almost no difference since the number of pages
pinned is less than one fourth of the buffer size.
Thus, for point queries pinning is advantageous, but only when the total number of nodes pinned
is within a factor of 2 of the buffer size. Note that pinning never hurts performance and hence for a
fixed size buffer dedicated to an R-tree application as many levels as possible should be pinned. If
the buffer is shared among many applications, the benefit of pinning must be compared to the other
applications. Our buffer model can be used to predict the benefit accrued by pinning a number of
R-tree levels.
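As a usage example of the buffer-model sketch given earlier, the benefit of pinning can be estimated by sweeping over the number of pinned levels. All names below are assumptions; the function reuses expected_disk_accesses() from the earlier sketch.

def pinning_benefit(level_probs, buf_size):
    # level_probs: one list of access probabilities per tree level, root level first
    # (e.g., computed with the Equation 3 sketch). Evaluates pinning the top k levels.
    results = {}
    for k in range(len(level_probs) + 1):
        pinned_pages = sum(len(lp) for lp in level_probs[:k])
        if pinned_pages > buf_size:
            break                                      # the pinned levels no longer fit
        remaining = [p for lp in level_probs[k:] for p in lp]
        results[k] = expected_disk_accesses(remaining, buf_size, pinned=pinned_pages)
    return results                                     # disk accesses per query for each k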
For region queries all experiments to date (not shown) have resulted in only a modest (2-5%)
improvement from pinning. Thus, it appears that pinning may mostly benefit point queries, and
even then only under special scenarios.
6 Conclusions
We have developed a new buffer model to predict the number of disk accesses required per query for
a given input R-tree (specified by the minimum bounding rectangles of each node in the tree). Our
model has been shown experimentally to agree with simulation within 2% for numerous test cases.
The model is both simple to implement and quick to solve, thus providing a useful methodology for
further studies.
In order to develop our model, we first modified an earlier R-tree region query model. This
modified query model is used as a component of our overall model to predict the number of disk
accesses required per query.
With our model, we have demonstrated that using the number of nodes accessed per query as a
performance metric is not sufficient, because it ignores buffer effects. We show that once buffer effects
are taken into account not only does quantitative predicted performance change, but the qualitative
predictions can change as well. In particular, the actual ordering of policies can differ when buffering
is not considered versus when it is considered. Thus, it is essential to consider buffer effects in a
performance study of R-trees and use number of disk accesses as the primary metric.
In addition, we used our model to determine that for the data considered, small amounts of
buffer can super-linearly improve performance of well structured R-trees for point queries, but that
poorly structured trees experience only a more linear improvement in performance due to the increase
in buffer space. In addition, we find that for region queries even well structured R-trees result in a
more linear improvement as buffer is added.
Finally, we show that in most cases pinning the top levels of the R-tree has little effect on
performance relative to just using an LRU buffer. For point queries we did find certain scenarios
where if the buffer size is a about the same or a few times larger than the number of nodes at a level,
pinning that level does significantly improve performance.
Acknowledgments. We would like to thank Jeff Edgington for developing the R-tree packing
algorithm code. We would also like to thank Ken Sevcik for useful discussions about a preliminary
version of this work.
--R
A simple analysis of LRU buffer replacement policy and its relationship to buffer warm-up transient
R-trees: a dynamic index structure for spatial searching.
On packing r-trees
Hilbert r-tree: an improved r-tree using fractals
A simple and efficient algorithm for r-tree packing
Time and space optimality in b-trees
Direct spatial search on pictorial databases using packed r-trees
A model for the prediction of r-tree performance
--TR
--CTR
Shu-Ching Chen , Xinran Wang , Naphtali Rishe , Mark Allen Weiss, A web-based spatial data access system using semantic R-trees, Information SciencesInformatics and Computer Science: An International Journal, v.167 n.1-4, p.41-61, 2 December 2004
Yong-Jin Choi , Jun-Ki Min , Chin-Wan Chung, A cost model for spatio-temporal queries using the TPR-tree, Journal of Systems and Software, v.73 n.1, p.101-112, September 2004
Yufei Tao , Dimitris Papadias , Jun Zhang, Cost models for overlapping and multiversion structures, ACM Transactions on Database Systems (TODS), v.27 n.3, p.299-342, September 2002
Jignesh M. Patel , Yun Chen , V. Prasad Chakka, STRIPES: an efficient index for predicted trajectories, Proceedings of the 2004 ACM SIGMOD international conference on Management of data, June 13-18, 2004, Paris, France
Victor Teixeira De Almeida , Ralf Hartmut Gting, Indexing the Trajectories of Moving Objects in Networks*, Geoinformatica, v.9 n.1, p.33-60, March 2005
Byunggu Yu , Thomas Bailey, Processing partially specified queries over high-dimensional databases, Data & Knowledge Engineering, v.62 n.1, p.177-197, July, 2007
Antonio Corral , Yannis Manolopoulos , Yannis Theodoridis , Michael Vassilakopoulos, Cost models for distance joins queries using R-trees, Data & Knowledge Engineering, v.57 n.1, p.1-36, April 2006 | performance evaluation;analytical model;r-tree;buffer model;multidimensional indexing |
628045 | Dynamically Negotiated Resource Management for Data Intensive Application Suites. | AbstractIn contemporary computers and networks of computers, various application domains are making increasing demands on the system to move data from one place to another, particularly under some form of soft real-time constraint. A brute force technique for implementing applications in this type of domain demands excessive system resources, even though the actual requirements by different parts of the application vary according to the way it is being used at the moment. A more sophisticated approach is to provide applications with the ability to dynamically adjust resource requirements according to their precise needs, as well as the availability of system resources. This paper describes a set of principles for designing systems to provide support for soft real-time applications using dynamic negotiation. Next, the execution level abstraction is introduced as a specific mechanism for implementing the principles. The utility of the principles and the execution level abstraction is then shown in the design of three resource managers that facilitate dynamic application adaptation: Gryphon, EPA/RT-PCIP, and the DQM architectures. | Introduction
There is an emerging class of application programs, stimulated by the rapid evolution of computer
hardware and networks. These distributed applications are data intensive, requiring that diverse
data types (beyond the traditional numerical and character types) such as audio and video streams
be moved from one computer to another. Because of the time-sensitive nature of moving stream data
(and because of the similarity of the target applications with traditional hard real-time applications),
these applications are often referred to as soft real-time applications. In contrast to hard real-time systems, not every deadline must be met for these applications to be considered a success (although most
deadlines should be met).
A distributed virtual environment (DVE) is one example of soft real-time applications: a DVE supports
a logical world containing various shared entities; users interact with the entities in the world
using a multimedia workstation. DVEs are data-intensive, since shared information must be disseminated
throughout the network of user machines. There are many other classes of applications that
exhibit similar data movement characteristics, including multimedia systems, image-based information
systems, image processing systems, video conferencing, and virtual reality.
The purpose of an operating system is to manage the resources and facilities used by processes,
which distribute the data among the objects. Traditionally, general purpose operating systems are
designed with built-in policies; such resource managers provide their best effort to satisfy all resource
requests according to a relatively inflexible, but generally fair, policy. Best-effort resource management
policies typically do not include support for deadline management, particularly where the precise
nature of the deadline management depends on the application semantics.
In soft real-time applications, the importance of any specific data movement task depends on the
importance of the two entities involved in the transfer. At one moment, it may be very important to
the system's user to move data from location A to location B because the user is directing the computer
to perform some task depending on A and B, whereas a few minutes later, interactions between A and
B may be far less important since the user has shifted the focus to another set of computational entities.
For example, suppose that an object containing a video clip is shared between two users, Joe and Betty,
on two different workstations interconnected by a network; while Joe and Betty are discussing the
video clip, it might be important that both workstations have a high fidelity moving image on their
screens. As the video continues to run, suppose Joe and Betty shift their attention to a different set
of objects; it is no longer necessary to support the data transfer rate required for high fidelity video,
particularly if the maintaining the data transfer for the video clip uses resources that are needed for Joe
and Betty's new activity. Resource management policies that support such applications are sometimes
called user-centric [13]. These types of policies stress that when the system cannot satisfy all of the
resource requests of all running processes, the resources should be allocated to best satisfy the desires
of the user.
For the past few years we have focused on resource management techniques for supporting soft
real-time data movement. Early experimentation indicated that the operating system performance
was inadequate to support this type of data movement. However, it was observed that the effect of
resource bottlenecks could be minimized if the OS employed an allocation policy in which resources
were directed at applications (or the parts of the application) that needed them at the moment, yet
which could be changed as the user's activities changed - a resource management policy was needed
that was sensitive to the dynamic needs of the users in the context of their applications. Therefore, the
effort was focused on defining and experimenting with ways for the system to provide more effective
support for these applications (as originally reported in the conference paper from which this paper is
derived [21]).
There are two primary contributions in this paper. The first is the identification of a set of three principles
to guide the development of soft real-time applications, along with the explanation of a mechanism
execution levels - to implement the three principles. The principles are based on the requirements
of data-intensive, soft real-time applications that may collectively require more resources than
are available. These principles define guidelines for programmers to develop soft real-time applica-
tions, and serve as requirements for a system that can provide the accompanying support. Informally,
the execution level abstraction maps resource usage to goodness, and defines a unique mechanism
both to design applications and to manage real-time performance. The execution level abstraction is
one concrete mechanism for implementing software that embraces the principles.
The second contribution of the paper is the presentation and evaluation of three specific aspects of
soft real-time support that illustrate the utility of the principles and execution levels:
. Many contemporary applications in the target domain are written as object-oriented programs,
requiring support for distributed objects. Object management policies are crucial to the overall
data movement performance. The Gryphon distributed object system provides a means by which
applications can influence the system's object placement, caching, and consistency policies [10].
Gryphon uses execution levels to tradeoff shared object consistency versus network bandwidth.
Analysis and experiments show that by managing the policies to match the application strategy,
remote object reference performance can be improved by several orders of magnitude.
. Applications frequently need to move large amounts of stream data from one device to another,
e.g., from the network to a display. As data moves between devices, it sometimes needs to be
filtered (e.g., to compress/decompress the data, or to extrapolate missing sections). The Real-Time
Parametrically-Controlled In-kernel Pipe (RT-PCIP) mechanism provides a means to insert/remove kernel-level filters used when moving data between devices. The Execution Performance
Agent (EPA) employs another form of execution levels, where a level is determined by the
confidence and reliability of the application's pipeline service request estimates. Once the EPA
has selected a level of execution (based on the application service estimates), it configures the
pipeline, then controls the RT-PCIP operation by adjusting filter priorities. The EPA/RT-PCIP
facility provides a form of soft real-time control not previously available using in-kernel pipe
mechanisms.
. Generic soft real-time applications need certain resource levels in order to meet their deadlines.
Such applications can be written to dynamically modify their processing according to the availability
of resources such as CPU, network bandwidth, etc. The Dynamic Quality of Service Resource
Manager (DQM) uses execution levels to allow applications to dynamically negotiate for
CPU allocation. As a result, these applications can implement a broad range of soft real-time
strategies not possible in other systems.
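As a rough illustration of this style of CPU negotiation, a manager might repeatedly degrade the application whose next level frees the most CPU until total demand fits the available capacity. The Python sketch below is generic and hypothetical; it is not the DQM algorithm, and all names and the degradation policy are assumptions.

# Generic sketch of a level-adjustment loop for an oversubscribed CPU (illustrative only).

def adjust_levels(apps, cpu_capacity):
    # apps: dict name -> list of CPU fractions, one per level (level 0 = best quality).
    # Returns name -> chosen level, degrading applications until the total demand fits.
    chosen = {name: 0 for name in apps}                      # start everyone at the best level
    def total():
        return sum(apps[name][lvl] for name, lvl in chosen.items())
    while total() > cpu_capacity:
        # Degrade the application whose next level frees the most CPU (one possible policy).
        candidates = [(apps[n][chosen[n]] - apps[n][chosen[n] + 1], n)
                      for n in chosen if chosen[n] + 1 < len(apps[n])]
        if not candidates:
            break                                            # nothing left to degrade
        _, victim = max(candidates)
        chosen[victim] += 1
    return chosen

print(adjust_levels({"video": [0.6, 0.4, 0.2], "render": [0.5, 0.3]}, 0.8))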
Section 2 explains the system design principles and shows how execution levels provide a mechanism
for implementing them. Section 3 introduces the Gryphon distributed object system, explains
how it uses execution levels, then discusses the performance of the approach. Section 4 presents the
EPA/RT-PCIP mechanism and shows how it allows applications to provide one approach to soft real-time
to control in-kernel filter mechanisms. Section 5 explains how the DQM mechanism uses execution
levels, then discusses several aspects of its behavior. Section 6 is the summary and conclusion.
Various researchers have addressed different aspects of soft real-time support (e.g., see [6, 7, 8, 9, 12, 14,
17, 19, 23, 25]). One problem with many of these studies is in the inherent definition of the term "soft
real-time:" It can mean that applications almost always meet deadlines for computation, that priorities
can be elevated if the deadline miss-rate is too high, that an application's period can be lengthened if
it misses too many deadlines, etc. One conclusion to draw from this diversity of perspectives is that
all of the characteristics are important in one context or another. Unfortunately, it would be difficult
to design an OS to behave properly on an case-by-case basis. It then follows that one might consider a
meta approach in which there is a framework for the way applications make their requirements known
to the OS; the responsibility for casting the specific soft real-time requirements into the framework is
then the responsibility of the application designer. This study is based on that meta approach. It is
best explained by considering a set of underlying principles that have been derived from studying
various soft real-time applications, particularly a DVE, and by carefully considering the types of OS
support these applications require. This section explains the rationale for the principles, the principles
themselves, and the execution level realization of the principles. The remainder of the paper shows
how the principles have been applied to three different aspects of system support.
2.1 Motivation for the Principles
The target applications have substantial data movement between units, where a unit is an application,
an object, a compound object such as one intended to represent a person in a DVE, a thread, etc. In these
applications, any unit needs to be able to change the importance of its interactions with other units
according to information that can only be known at runtime (e.g. the focus of the user's attention).
Based on this information, the relative importance of the units can be changed dynamically to reflect
the best interests of the users.
As a representative of the target class of applications, we built a prototype virtual planning room
[22]. The VPR is a multiperson DVE supporting free-form communication in a manner similar
to electronic meeting rooms and other distributed virtual environments. A VPR world is a collection of
objects, with VRML representations and behaviors of varying complexity. A compound object representing
a human participant (an "avatar") becomes a part of the world when the user enters the VPR.
The basic role of the VPR is to provide real-time audio and video support across the network, to render
objects on each user's screen (according to the user's avatar orientation), and to provide an environment
in which one can add domain-specific extensions. The VPR is a client-server system where each
person uses a workstation to implement the human-computer interface. Hence, the client machine
must render each visible artifact from the VRML description of appropriate objects in the world, and
cause behaviors (such as modifications to objects) to be reflected in all other appropriate clients. Once
the VPR had been developed, it was used to begin exploring system software design and organization
that might be well-suited to soft real-time applications.
Programmers in such an environment quickly learn to design objects so that they use different
strategies for performing work, according to the perceived importance of that work to the user:
. If a graphic object or video stream is not the focus of the user's attention (i.e., the user has not
oriented the avatar's eye directly at the video stream object), considerable system resources will
be used to render the image(s) when the user does not really care about it.
. If three people are using the VPR and two of them are engaged in high frequency manipulation of
a complex, shared object, the third person probably does not want to use inordinate workstation
resources tracking minor changes to the shared object.
. If a user is engaged with a video stream for which the system is unable to deliver a full 24 frames
per second, the user will often choose to run the video playback at 12 frames per second rather
than having it fluctuate between 16 and 24 frames per second.
. If the network bandwidth into a workstation is relatively underutilized and local resources are
oversubscribed, then local resources are momentarily more important than network bandwidth
and the processing should be changed accordingly. For example, application throughput (and
therefore end user satisfaction) would be improved if the workstation received an uncompressed
data stream from a remote site rather than decompressing the stream locally as it is received.
2.2 System Design Principles
Based on the experience with the VPR and various underlying systems, we developed a relatively
straightforward set of principles to direct our ongoing system research. Though the principles are sim-
ple, they highlight characteristics of the soft real-time application domain that most current operating
systems do not address.
Real-Time. Many of the aspects of the target application domain involve periodic computation in
which some processing must be done repeatedly and according to a regularly occurring deadline. For
example, display frame update must occur at least 24 times per second (many systems are designed
to support per second). However, users are often able to tolerate certain failures to meet
the deadline, especially if it means that another aspect of the user's work will receive higher quality
service. The failure mode (softening of the deadline requirement) can vary according to the exact nature
of the application mix: Occasional missed deadlines are acceptable, provided they are not regular;
consistently missing the deadline by a small amount of time may be acceptable; the application may
be able to scale back its service time requirement to make it possible for the system to make the dead-
line; some other application may be able to scale back its resource usage so that all units meet their
deadlines; etc.
Principle 1: Resource managers should support diverse definitions of application failure
and provide sufficient mechanisms to react to missed deadlines.
Application Knowledge. In an oversubscribed, multithreaded system, the resource managers must
block some threads while they allocate their resources to others. A best effort resource manager has a
built-in policy to guide the way it allocates resources. In an environment such as a DVE, the relative
importance of each thread changes according to the nature of the relevant objects and the attention of
the user. The resource manager needs additional information to perceive the situation - information
that is only available at runtime (via the application).
Principle 2: Applications should provide more information than a single number (e.g.,
priority) to represent their resource utilization needs, and the resource managers should be
designed to use this knowledge to influence the allocation strategy.
Dynamic Negotiation. In a conventional environment, when an application makes a request for re-
sources, it assumes that the resources will be available and that the application will, in turn, provide its
best service. In a more flexible environment, the application might request an amount of resource, K 0 ,
that the resource manager is unable to satisfy. The resource manager could respond to the application
by saying that it can offer K 1 units of the resource based on periodic usage - a form of admission control borrowed from hard real-time or quality of service (QoS) technologies. The application could respond by saying
that it would be willing to change, say, its period, and would need K 2 units of the resource, etc. That
is, the application and the resource manager potentially enter a negotiation whenever the application
asks for resources, or for an assurance of resource availability in the case of periodic computing. The
nature of soft real-time computing makes it very difficult for the application to provide an "optimized"
request, since it does not know the state of the system's resources. In a hard real-time system (and in
many soft real-time systems), the request is made using the worst case estimate; unfortunately, worst
case requests tend to tie up resources during times when they are not really needed, aggravating the
oversubscription problem.
Real-time systems determine their behavior at admission time by requiring that each application
unequivocally determine the maximum amount of resource that it will ever need. If a hard real-time
system supports dynamic admission, it must analyze each new request in the context of all extant
resource commitments. In the target domain, applications may enter and leave the system at any time,
and threads/objects may frequently change their resource needs: This suggests that resource requests
should change, and that the nature of each individual negotiation might change whenever any unit
makes a resource request.
Principle 3: The resource management interface should be designed so that the level of
allocation can be negotiated between the two parties, with negotiation initiated by either
party at any time.
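To make the request/counter-offer exchange of Principle 3 concrete, it can be pictured as a small interface in which the manager returns an offer that the application may accept or counter. The Python sketch below is purely illustrative; the types, method names, and admission policy are assumptions, not an API of the systems described later.

# Illustrative sketch of a negotiation-style resource interface (not a real API).
from dataclasses import dataclass

@dataclass
class Request:
    amount: float       # units of the resource requested per period
    period_ms: float    # period over which the amount is needed

@dataclass
class Offer:
    granted: float      # units the manager can commit to per period
    firm: bool          # True if the grant is guaranteed, False if best effort

class ResourceManager:
    def __init__(self, capacity):
        self.capacity = capacity
        self.committed = 0.0

    def negotiate(self, req: Request) -> Offer:
        available = self.capacity - self.committed
        granted = min(req.amount, available)
        return Offer(granted=granted, firm=(granted == req.amount))

    def accept(self, offer: Offer):
        self.committed += offer.granted

# Either side may reopen negotiation at any time; here the application counters
# with a longer period (and a smaller amount) when the first offer is not firm.
mgr = ResourceManager(capacity=100.0)
offer = mgr.negotiate(Request(amount=120.0, period_ms=33.0))
if not offer.firm:
    offer = mgr.negotiate(Request(amount=90.0, period_ms=66.0))
mgr.accept(offer)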
2.3 Execution Levels: A Mechanism for Dynamic Negotiation
These principles discuss some responsibilities of soft real-time applications and resource managers
and the communication interface between them. The first principle focuses on application behavior
- what the application should do to address different forms of soft real-time, and how it should deal
with missed deadlines. The second principle addresses the interface for the application and resource
manager to interact with one another. The third principle is concerned with the resource manager's
obligation in the negotiated policy.
The principles suggest that a resource management philosophy is needed where the application
can assume part of the responsibility for the resource allocation strategy (Principles 1 and 2), yet which
fits within the general framework of a multiprogrammed operating system. This requires that the
nature of the application-system programming interface be enhanced to address the principles. Others
have also recognized that this kind of shift in the interface could substantially improve overall system
performance for "single-application, multiprogrammed" domains e.g., see [20, 23, 25].
Principle 3 suggests that there is a framework in which applications and resource managers can
pose resource allocation scenarios to one another. This can be accomplished by providing a language
for interaction, then ensuring that the two parties are prepared to interact with one another.
The principles can be realized using a software abstraction called execution levels. An application's
execution levels are defined during the design and implementation of the software for the application.
Execution levels do not provide a mechanism for designing or selecting a soft real-time strategy; this
is still the responsibility of the application developer. However, once a strategy is selected, execution
levels allow an application to specify the resource requirements that are consistent with the strategy
that it implements.
A set of execution levels represents varying strata of resource allocation under which an application
is able to operate. In the simplest case, the execution levels for an application are a set of resource requirements R j,i , for i = 1, ..., n different levels and j = 1, ..., m different resource types. The application unit can provide its
highest quality service if it is able to acquire R j,1 units of the j resource types. The application writer can
also design a first alternative strategy that uses R j,2 units of the j resource types, providing degraded
service, e.g., graphics figures may not be rendered as well, a period may need to be longer, or the frame
update rate may be lower.
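A set of execution levels can be captured as an ordered table of per-resource requirements, one row per level with the highest-quality level first; the sketch below is illustrative only and its numbers are made up.

```python
# Illustrative execution-level table: level i -> requirements for the m resource types.
# R j,i in the text corresponds to levels[i - 1][resource_j]; all values are hypothetical.
levels = [
    {"cpu": 0.75, "net_kbps": 800},   # level 1: full service
    {"cpu": 0.50, "net_kbps": 500},   # level 2: degraded rendering / longer period
    {"cpu": 0.30, "net_kbps": 200},   # level 3: minimal service
]

def requirements(level: int, resource: str) -> float:
    """Return R[resource, level] (levels are numbered from 1)."""
    return levels[level - 1][resource]

assert requirements(1, "cpu") > requirements(3, "cpu")  # requirements shrink as level number grows
```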
Applications written to run in oversubscribed environments commonly use a form of execution
levels as a matter of course (without any system support). For example, graphics programs frequently
use "wireframes" to represent geometric solids during certain phases of graphic editing. Table 1 shows
a set of execution levels related to this kind of graphic programming technique used in the VPR (the
table represents the amount of CPU time used for various rendering options in the VPR).

Rendering   Lights  Polygons  Frames per second  % of Max
smooth      1       2X        3.19               100.0%
wireframe   1       2X        4.45               71.7%
smooth      0       2X        4.76               67.0%
smooth      1       1X        5.87               54.3%
wireframe   0       2X        7.70               41.4%
smooth      0       1X        7.97               40.0%
wireframe   1       1X        8.94               35.7%
wireframe   0       1X        12.74              25.0%
Table 1: Varying Resource Usage in the VPR

A simple moving object changes its processing time over a 4:1 range in 12 execution levels by varying only 3 parameters: rendering mode (wireframe, flat shading, or smooth shading), number of specific light sources (0 or 1), and number of polygons (those marked 2X used twice as many polygons as those marked 1X). The table shows frames per second generated and CPU time used as a percentage of the
highest level. The OpenGL Performance Characterization Organization [24] has similar performance
measurements showing applications that exhibit 10 different execution levels with CPU requirements
varying by as much as a factor of 10. See [11] for further justification for using execution levels in
applications.
Execution levels define a total order over a resource vector for all system resource types. While this
is the underlying theory for the approach, none of the projects described in this paper currently address
more than one resource type. Thus, it is relatively easy to define the total order so that as the level
increases (i.e., the quality of the solution decreases), the resource requirements also decrease; this is the
fundamental constraint the approach makes on the application in defining its notion of soft real-time.
Even though the resource requirement versus level is monotonic, in practical applications it would
not normally be linear. As the level increases (the quality of the application's service decreases), there
is usually some point above which the application provides no service. For example, once the video
frame rate of a streaming video application falls below 5 frames per second, its quality is effectively
zero.
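Given that ordering, a resource manager (or the application itself) can pick the highest-quality level whose requirements fit the current allocation, refusing levels past the point where the service becomes useless; the sketch below assumes a hypothetical cutoff level and made-up requirements.

```python
def best_feasible_level(levels, available, cutoff):
    """Return the smallest (highest-quality) level number whose requirements fit
    within 'available' for every resource type, or None if only levels past the
    cutoff (where quality is effectively zero) would fit."""
    for i, req in enumerate(levels, start=1):
        if i > cutoff:
            break
        if all(req[r] <= available.get(r, 0.0) for r in req):
            return i
    return None

demo_levels = [{"cpu": 0.75}, {"cpu": 0.50}, {"cpu": 0.30}, {"cpu": 0.10}]
print(best_feasible_level(demo_levels, {"cpu": 0.4}, cutoff=3))   # -> 3
print(best_feasible_level(demo_levels, {"cpu": 0.05}, cutoff=3))  # -> None
```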
Execution levels are a mechanism that enables applications and resource managers to implement
soft real-time consistent with the principles described above. Figure 1(a) represents the conventional relationship among threads in an application and the resources they use (the undirected, solid lines).

Figure 1: Execution Level API for Resource Manager. (a) Conventional; (b) Execution Levels.

The dashed lines in the figure represent control flow among the resource, the resource manager, and
the application. In best effort approaches, the resource management policy is built into the resource
manager at the time it is designed; there is no interaction between the policy module and the set of
applications.
Figure 1(b) shows a new framework with a logical component, called a negotiation mechanism, that interacts with the application using execution levels. Conceptually, the negotiation mechanism appears to resource managers as a conventional application, and to the applications as a resource manager capable of dynamic negotiation. In the framework, applications
are able to create their own tactics for defining and managing soft real-time, then for expressing their
resource needs to a resource manager (the negotiation mechanism) using execution levels; this supports
Principles 1 and 2. The negotiation mechanism is an extension of the work done by conventional resource managers - it provides a module to do dynamic negotiation; this supports Principles 2 and 3.

Figure 2: An ORB with Gryphon
Execution levels are a sufficient language for supporting dynamic negotiation, though the problem
now shifts to how the application and negotiation modules should be designed. The distributed object
manager study, the real-time pipe control study, and the dynamic QoS manager study all employ
different approaches for designing and implementing these modules. In the remainder of the paper
we consider each of these studies in detail.
3 The Gryphon Distributed Object Manager
Network bandwidth is the limited resource being managed and shared in this aspect of the work.
For object oriented systems, it is natural to study distributed object managers as a means of addressing
network bandwidth performance (e.g., see [15, 16, 18, 26]). The Gryphon is an enhancement to a
conventional distributed object manager (such as a CORBA ORB). The purpose of the Gryphon is to
support dynamic negotiation of object placement policies according to the needs of the application and
the state of the system resources. Figure 2 describes the general architecture of the Gryphon approach
(in the context of Figure 1). Each application uses the CORBA IDL interface for normal object refer-
ences, and a supplementary language for describing the object placement and caching policy preferred
by the application; the supplementary parts of the language are called policy hints. A set of hints constitutes
one execution level, e.g., the distributed view of the object may use a strong consistency model
at one level and a weaker form of consistency at a lower level. The negotiation mechanism defines the
execution level order for the application, then uses that definition in directing the Gryphon according
to observed performance of the system. If the negotiation mechanism detects a shortage of network
bandwidth, it lowers the execution level of the applications to free up bandwidth. The Gryphon has
been designed to analyze the hint information it receives from all applications, then to set policies in
the ORB.
The Gryphon system supports the fundamental principles described in Section 2 as follows:
Principle 1: Soft Real-Time Using execution levels, an application can have variable object coherence, object placement, and various timeliness update policies.
Principle 2: Application Knowledge The application supplies information regarding location and caching
(for each level) in the form of per-user, per-object update granularity and consistency requirements.
Principle 3: Dynamic Negotiation The application dynamically modifies the object coherence, place-
ment, and timeliness policies. For example, users change their focus from one set of objects to
another.
3.1 Representing Execution Levels
Soft real-time applications can have a wide variety of object reference patterns. For example, each of
the following scenarios represent one recurring class of VPR applications (there are also other types,
though these three will illustrate the approach). Each of these scenarios generates a radically different
set of requirements on the object manager:
Scenario A: A Learning Laboratory The DVE is used as a laboratory in which a student or small
group of students can conduct various experiments. A student may browse through different
experiments without communicating with other students, or join a group to work with other
people. The laboratory has a number of static objects with complex VRML specifications, e.g.,
lab apparatus and documentation. Avatars move infrequently, but most other objects do not
move at all.
Scenario B: Collaboratively Flying an Unoccupied Air Vehicle Siewert built an unoccupied air vehi-
cle, FLOATERS, to test various parts of the EPA/RT-PCIP work (see Section 4). The VPR has been
used to "fly" FLOATERS, i.e., one can navigate FLOATERS by manipulating its virtualization in
the VPR. This scenario is for a group of people to navigate FLOATERS as collaborative work
from within the VPR. Here avatars are in the virtual space together and they can see each other
and other objects in the room.
Scenario C: A Weather Modeling Application Weather modeling is highly data and computation in-
tensive, with the end result being weather information displayed in a DVE. Weather data is partitioned
into small regional subsets, then intense processing is performed on each partition. After
the first phase of processing is complete, data at the fringes of the subsets are distributed to other
processes and then computation continues.
If any of these scenarios are implemented in a system like the VPR, objects will be shared across
many workstations, causing substantial network traffic. Conventional distributed object managers
provide location transparency, though the experience with the VPR shows that the distributed components
needed to have substantial influence over object location policies (without this flexibility, the
applications could not make performance tradeoffs based on access demands).
There are several techniques that can reduce network traffic due to remote object reference:
Placement If an object is stored on host X and is frequently referenced (only) from host Y, the traffic
and delay to the application could be reduced by storing the object at host Y.
Caching If an object is stored on host X, and is frequently read from hosts Y and Z, then keeping copies
of the object at hosts Y and Z can reduce network traffic.
Consistency If an object is stored at host X and is rapidly being changed by arbitrary hosts but is being
read by host Y, traffic and processing overhead at host Y can be reduced by allowing Y to keep
an out-of-date copy of the object (as opposed to updating Y's copy each time X's copy changes).
The Gryphon approach is based on the idea that the applications are the only component capable
of choosing among these techniques, since application behavior is a critical factor in the benefit of each
technique. Table 2 represents the relationship between the application hints and levels for Scenarios A and B (which differ primarily in number of objects and their movement).

Level  Hints                                  Semantics
1      -                                      Centralized location, strong consistency
2      -                                      Distributed location, caching
3      Location                               Best location, caching
4      Location; Cache strong                 Best location, strong consistency
5      Location; Cache sequential             Best location, sequential consistency
6      Location; App directed caching         Best location, app-directed caching
7-N    Location; Caching; Update frequency    Best location, some updates not propagated
Table 2: Execution Levels for Scenarios A and B

An application implementing
Scenario A or B would use decreasing amounts of network bandwidth with decreasing level
(higher level number). In these scenarios, an application would be inspired to run at a lower (higher-numbered) level if some of its components were missing soft deadlines. That is, levels 1-6 all produce
the same behavior, though the application has to provide more information in level i than it does in
level i-1, and the overall system benefits due to reduced network traffic. The distinction between levels
1 and 2 is related to using a single object storage location versus distributing the objects to multiple
storage servers; while this would be an unusual application option, it is included to emphasize that
centralized servers cause higher network traffic. In levels 7 to N, the application's fidelity erodes since
at level 7, a host machine allows changing objects to become inconsistent due to lack of update propagation. (The differences among these levels are in the number of updates a host is willing to miss.)
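Operationally, Table 2 can be viewed as a mapping from level number to the set of policy hints handed to the Gryphon, with the negotiation mechanism simply moving an application to a higher-numbered entry when it detects a bandwidth shortage. The encoding below is hypothetical and does not reflect the Gryphon's actual interface.

```python
# Hypothetical encoding of Table 2: level number -> hints passed to the object manager.
GRYPHON_LEVELS = {
    3: {"location": "best"},
    4: {"location": "best", "cache": "strong"},
    5: {"location": "best", "cache": "sequential"},
    6: {"location": "best", "cache": "app-directed"},
    7: {"location": "best", "cache": "app-directed", "propagate_every_nth_update": 2},
}

def on_bandwidth_shortage(current_level: int) -> int:
    """Lower the application's fidelity (raise the level number) to free bandwidth."""
    return min(current_level + 1, max(GRYPHON_LEVELS))

level = on_bandwidth_shortage(4)      # -> 5: weaker consistency, less network traffic
hints = GRYPHON_LEVELS[level]
```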
Scenario C (Table 3) has a different set of data reference patterns than do the other two scenarios
(and consequently it can operate under a different set of resource allocation criteria than Scenarios A
and B). Data are distributed to host machines that perform localized computation - this is the "best
location, strong consistency" case. Again, lower (higher-numbered) levels represent situations that use
less network bandwidth, meaning that the application must do more work to achieve the same result,
but that there will be more bandwidth available to the system.
Level  Hints                            Semantics
1      -                                Centralized location, strong consistency
2      -                                Distributed location, strong consistency
3      Location                         Best location, sequential consistency
4      Location                         Best location, caching
5      Location; App directed caching   Best location, app-directed caching
6-N    Location; Caching                Best location, app-directed caching, different algorithms
Table 3: Execution Levels for Scenario C
3.2 Performance Analysis
To analyze the performance of a Gryphon system implementation, models based on the scenarios (and
others not discussed here [10]) were used to characterize traffic patterns resulting from different object
managers. In the VPR, object state changes when the object moves (it may also change due to other
behaviors, though this simplification is sufficient for this analysis). Assuming that a single message is
used to move an object, and that all messages are small and fit into one network data packet, Table 4
shows the parameters to characterize message traffic, and Table 5 shows the values to represent the
three scenarios. Scenario A is notable for its large number of objects, many of them moving; there are also many processes using objects; finally, video fidelity is required to be very good in this scenario. Scenario B has many of the same characteristics as Scenario A, except that the number of objects is greatly reduced. Scenario C represents a different kind of application where there are many moving objects, many objects being updated at a time, and relatively high update rates; however, the frame update rate is zero.
These parameters are used to derive equations for three metrics:
T_VPR: Amount of network traffic to all VPR processes, in messages per second
T_app: Amount of network traffic to all non-VPR processes, in messages per second
T_total: Total traffic in the network, in messages per second

N  Number of moving objects
M  Number of objects being modified at each process
U  Update rate for each of the moving objects
L  Number of processes using the object
V  Number of VPR processes
S  Number of static (not moving) objects
F  Update rate of display frames
R  Ratio of updates that get propagated
Table 4: Parameters used to Model Network Traffic

C: Weather Modeling 10,000 1,000 1,000
Table 5: Characteristics for Scenarios used to Evaluate the Gryphon System
Using the model and the scenarios, Gryphon system performance can be compared with centralized
and distributed CORBA object managers:
System 1 (ORB centralized CORBA) This object manager is a centralized ORB. There is a single server
that stores all objects, so any reference to an object requires a remote reference. In addition, since
the ORB has no special knowledge of the application, a send and a receive message is required
to determine the state of an object. Because the ORB is centralized and because of the amount of
traffic, the server will likely be a bottleneck.
System 2 (ORB distributed CORBA) The object manager is a distributed configuration of an ORB. All
objects are randomly and equally distributed among the processes. The ORB is not centralized
and local objects do not result in message traffic. Now, accesses that would have gone to the
central ORB now go to the process where the object is located (distribution addresses the implicit
bottleneck due to centralized configurations). In T VPR the first part of the expression represents
read operations by the local client and the second part represents reads by external clients to the
data stored on the local server. Note that the expression includes references due to frame updates
(a DVE needs to render objects, it would implicitly read each object at the frame update rate).
System 3 (Gryphon with location policy) The object manager includes a Gryphon capable of acting
only on location hints. Like System 2, objects are evenly distributed across processes but in this
case they are assumed to be located on the client making the modifications.
System 4 (Gryphon with location and caching policies) The object manager includes a Gryphon capable
of acting on location and caching hints. The model reflects the fact that data are pushed
to the clients instead of being pulled via request messages (i.e., we remove the 2X multiplier to
reflect the absence of a send message).
System 5 (Gryphon with location, caching, and consistency policies) This configuration is the full Gryphon
system.
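The closed-form expressions for T_VPR, T_app, and T_total under each configuration are developed in [10]. Purely to illustrate how the Table 4 parameters might feed such a model, the sketch below computes one plausible term, the pull-versus-push read traffic distinction noted for Systems 1 and 4; it is not the model from [10], and its numbers are made up.

```python
# Illustrative only: a skeleton showing how parameters like those in Table 4 could
# feed a traffic estimate; the actual expressions for Systems 1-5 are not reproduced here.
def remote_read_traffic(objects, readers, frame_rate, request_reply=True):
    """Messages/second for readers polling a remote store at the frame rate.
    With request/reply (pull), each read costs a send plus a receive (the 2X
    multiplier mentioned in the text); pushing updates removes that factor."""
    per_read = 2 if request_reply else 1
    return per_read * objects * readers * frame_rate

# e.g., 1,000 objects rendered by 10 VPR processes at 30 frames per second:
print(remote_read_traffic(1_000, 10, 30))          # pull: 600,000 messages/second
print(remote_read_traffic(1_000, 10, 30, False))   # push: 300,000 messages/second
```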
Models    Scenario A   Scenario B   Scenario C
System 1  528,004      5,764        2,000,000
System 2  1,054,950    10,952       3,600,000
System 3  1,054,940    10,944       0
System 4  1,998        38           9,000,000
System 5  20           19           900
T_VPR Comparison

System    Scenario A   Scenario B   Scenario C
System 2  528,008      5,768        3,600,000
System 3  528,000      5,760        0
System 4  1,998        38           9,000,000
System 5  20           19           900
T_app Comparison

System    Scenario A   Scenario B   Scenario C
System 1  528,004,000  115,280      20,000,000
System 2  527,476,000  109,516      18,000,000
System 3  527,472,000  109,440      0
System 4  1,998,000    760          90,000,000
System 5  19,980       380          9,000
T_total Comparison

Table 6: Gryphon System Performance Comparison

Table 6 summarizes network message traffic using the load generated by the three scenarios, and
supported by the five different object management configurations. Some highlights of the results are:
. System 1 (centralized CORBA) and System 2 (distributed CORBA) are subject to substantially
more traffic than the others in almost every scenario due to their requirement to support location
transparency (and not supporting caching).
. Application-favored object placement has a substantial impact in Scenario C, and consequently
Gryphon performs much better than the other systems with location transparency.
. Caching (System 4) results in large performance gains in Scenario B, but results in unnecessary cache consistency updating in Scenarios A and C.
. The network traffic for Scenario C with System 3 is zero since the analysis does not account for infrequent reads of small portions of data; in any case, that load is negligible.
In general, the table shows that the Gryphon approach significantly reduces the message traffic rate
compared to the other approaches; the total message traffic, T total , for the Gryphon system is only a
fraction of a percent of centralized and distributed CORBA systems for all three scenarios. This work
illustrates how object policies can be cast as execution levels to support the principles, and also shows
the relative performance at the different levels. Further experiments and results are reported in [10].
4 In-Kernel Pipeline Module Thread Control
This aspect of the work focuses on support for continuous media flow between devices. Several studies
have shown how to embed application-specific code in a kernel so that it can perform operations specific
to the data stream (e.g., see [6, 7, 9]). However, these approaches do not allow the application to
influence the way resources are allocated to the components to address application-specific tradeoffs.
The Real-time, Parametrically Controlled In-kernel Pipe (RT-PCIP) facility provides a means for insert-
ing/deleting filter programs into/from a logical stream between two devices, where each filter can be
parametrically controlled by the negotiation mechanism (see Figure 3). The Execution Performance
Agent (EPA) is the negotiation mechanism that interacts with the user-space part of the application to
dynamically adjust scheduling priorities to achieve soft real-time control over the pipeline.
The EPA/RT-PCIP supports the principles for soft real-time as follows:
Principle 1: Soft Real-Time Diverse forms of computation can be expressed in terms of deadline confidence
and execution time reliability (analogous to execution levels).
Principle 2: Application Knowledge The application provides desired deadline confidence and reliability
in the expected execution time rather than a simple priority to the EPA.
Principle 3: Dynamic Negotiation The EPA supports negotiated management of pipelines through
an admission policy based on relative execution time reliabilities and requested deadline con-
fidences. Requested confidence in deadlines may be renegotiated online.
The EPA and the policy for admission and confidence negotiation are derived from an extension
to the deadline monotonic scheduling and admission policy, which relaxes the hard real-time requirement that worst-case deterministic execution time be provided.

Figure 3: The EPA/RT-PCIP Architecture

The EPA provides the confidence and
reliability semantics and the online monitoring of actual reliability and admission testing. In order to
describe how the EPA is able to provide execution control in terms of deadline confidence and execution
time reliability, we will review deadline monotonic hard real-time scheduling and show how it
can be extended to implement the EPA.
In this aspect of the work, the goal is to dynamically negotiate the policy for allocating resources
used to move data from one node to another, or within one node, from one device such as a disk to
another device such as the sound card. The RT-PCIP architecture uses existing techniques for creating
modules to be embedded in kernel space as extensions of device drivers (c.f. [3, 7]). Each device has
an interface module that can be connected to an arbitrary pipe-stage filter; a pipeline is dynamically
configured by inserting filters between a source and sink device interface. An application in user-space
monitors summary information from the kernel in order to control the movement of data between the
source and sink devices. The purpose of the EPA is to interact with the user-space application and with
the modules in the pipeline. Specifically, it provides status information to the user-space program, and
accepts parameters that control the behavior of the filter modules, and ensures that data flows through
the pipeline according to real-time constraints and estimated module execution times.
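As a purely hypothetical illustration (the RT-PCIP programming interface itself is not shown in this paper), a pipeline of this kind could be assembled roughly as follows.

```python
# Hypothetical sketch of assembling a device-to-device pipeline: a source device
# interface, an ordered list of filters, and a sink device interface.
class Pipeline:
    def __init__(self, source, sink):
        self.source, self.sink, self.filters = source, sink, []

    def insert_filter(self, index, filter_fn):
        self.filters.insert(index, filter_fn)

    def remove_filter(self, index):
        return self.filters.pop(index)

    def push(self, block):
        """Move one block of data from source to sink through the filters."""
        data = self.source(block)
        for f in self.filters:
            data = f(data)
        return self.sink(data)

# Example: disk -> "decompress" -> sound card (all three are stand-ins here).
pipe = Pipeline(source=lambda b: b, sink=lambda b: len(b))
pipe.insert_filter(0, lambda b: b * 2)       # stand-in "decompression" filter
print(pipe.push(b"audio"))                   # -> 10
```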
4.1 Soft Real-Time Pipeline Control
In a thread-based operating system environment, pipe module execution is controlled by a kernel
thread scheduler-typically a best-effort scheduler. As long as the system does not become overloaded,
the pipe facility will provide satisfactory service. In overload conditions the EPA dynamically computes
new priorities for the threads executing the modules, then provides them to the scheduler so
that it can allocate the CPU to threads with imminent deadlines.
Since hard real-time systems guarantees that each task admitted to the system can be completed
prior to a prespecified deadline, they are, of necessity, conservative. Processing time estimates are
expressed in terms of the worst case execution time (WCET), admission is based on the assumption
that every task uses its maximum amount of resources, and the schedule ensures that all admitted tasks
execute by their deadline. Continuous media applications have softened deadline requirements: The
threads in a continuous media pipe must usually meet deadlines, but it is acceptable to occasionally
miss one. When the system is overloaded-the frequency of missed deadlines is too high-the EPA
reduces the loading conditions by reconfiguring the pipeline, e.g., by removing a compression filter
(trading off network bandwidth for CPU bandwidth).
The EPA design is driven by experience and practicality: Rather than using WCET for computing
the schedule, it uses a range of values with an associated confidence level to specify the execution time.
The additional requirement on the application is to provide execution time estimates with a range and
a confidence; this is only a slightly more complex approach than is described in the use of Rialto [14].
An application that loads pipeline stages must specify the following parameters (one way to bundle them is sketched in code after the list):
. Service type common to all modules in a single pipeline: guaranteed, reliable, or best-effort
. Computation time: WCET for guaranteed service; expected execution time (with a specification of the distribution, such as a normal distribution with a given mean, based on a specified number of samples) for reliable service; or none for best-effort service
. Input source or device interface designation
. Input and output block sizes
. Desired termination and soft deadlines with confidence for reliable service (D term , D soft ,
confidence term , and confidence soft )
. Minimum, R min , and optimal, R opt , time for output response
. Release period (expected minimum interarrival time for aperiodics) and I/O periods
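A minimal sketch of how the parameters listed above might be bundled when a stage is loaded is given below; the field names and values are hypothetical and do not correspond to the actual EPA interface.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class StageSpec:
    service: str                           # "guaranteed", "reliable", or "best-effort"
    wcet_ms: Optional[float] = None        # guaranteed service only
    mean_ms: Optional[float] = None        # reliable service: expected execution time
    stddev_ms: Optional[float] = None
    samples: Optional[int] = None          # number of offline timing samples
    in_block: int = 4096                   # input block size (bytes)
    out_block: int = 4096                  # output block size (bytes)
    d_soft_ms: Optional[float] = None      # soft deadline
    d_term_ms: Optional[float] = None      # termination deadline
    conf_soft: Optional[float] = None      # desired deadline confidences
    conf_term: Optional[float] = None
    r_min_ms: Optional[float] = None       # minimum time for output response
    r_opt_ms: Optional[float] = None       # optimal time for output response
    period_ms: Optional[float] = None      # release period / minimum interarrival

decoder = StageSpec(service="reliable", mean_ms=3.2, stddev_ms=0.4, samples=500,
                    d_soft_ms=30, d_term_ms=33, conf_soft=0.999, conf_term=0.9998,
                    r_min_ms=25, r_opt_ms=30, period_ms=33)
```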
4.2 EPA-DM Approach to Thread Scheduling
The approach for scheduling RT-PCIP thread execution is based on a branch of hard real-time scheduling
theory called Deadline Monotonic (DM) [1]. DM consists of fixed-priority scheduling in which
threads are periodic in nature and are assigned priorities in inverse relation to their deadlines. For
example, the thread with the smallest deadline is assigned the highest priority. DM has been proven
to be an optimal scheduling policy for a set of periodic threads in which the deadline of every thread
is less than or equal to the period of the thread.
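Deadline monotonic priority assignment itself is simple to state in code: sort threads by deadline and give shorter deadlines higher priority. The sketch below, with made-up thread descriptors, illustrates the rule.

```python
# Deadline monotonic assignment: the shorter the deadline, the higher the priority.
def assign_dm_priorities(threads):
    """threads: list of dicts with a 'deadline' key (deadline <= period assumed).
    Adds an integer 'priority' to each dict (0 = highest) and returns the list."""
    for prio, t in enumerate(sorted(threads, key=lambda t: t["deadline"])):
        t["priority"] = prio
    return threads

tasks = [{"name": "audio", "deadline": 33}, {"name": "video", "deadline": 40},
         {"name": "logging", "deadline": 500}]
print(assign_dm_priorities(tasks))   # audio gets priority 0, logging the lowest
```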
In addition, the concept of EPA-DM thread scheduling for pipeline stages is based on a definition
of soft and termination deadlines in terms of utility and potential damage to the system controlled by
the application (see Figure 4 and [5]). Figure 4 shows response time utility and damage in relation to
soft and termination deadlines as well as early responses. The EPA signals the controlling application
when either deadline is missed, and specifically will abort any thread not completed by its termination
deadline. Likewise, the EPA will buffer early responses for later release at R opt , or at R min worst
case. Signaled controlling applications can handle deadline misses according to specific performance
goals, using the EPA interface for renegotiation of service. For applications where missed termination
deadline damage is catastrophic (i.e. the termination deadline is a "hard deadline"), the pipeline must
be configured for guaranteed service rather than reliable service.
The DM theories do not apply directly to the in-kernel pipeline mechanism, because DM is appropriate
only for hard real-time systems. The EPA-DM schedulability test eases restriction on the
DM admission requirements to allow threads to be admitted with expected execution times (in terms
of an execution confidence interval), rather than requiring deterministic WCET. The expected time is
determined using offline estimates of the execution time based on confidence intervals. Knowledge of
expected time can be refined online by the EPA each time a thread is run. By relaxing the WCET admission
requirement, more complex processing can be incorporated, and pessimistic WCET with conservative
assumptions (e.g. cache misses and pipeline stalls) need not reduce utility of performance-oriented
pipelines which can tolerate occasional missed deadlines (especially if the probability of a
deadline miss can be quantified beforehand).
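For example, the C_low and C_high bounds used by the EPA-DM test can be estimated from offline timing samples and normal quantiles; the sketch below shows one plausible computation (upper quantiles at the two requested confidences) and is not taken from the EPA implementation.

```python
from statistics import NormalDist, mean, stdev

def execution_bounds(samples_ms, conf_soft=0.999, conf_term=0.9998):
    """Estimate execution-time bounds from offline timing samples, assuming an
    (approximately) normal execution-time distribution: each bound is the upper
    quantile at the requested confidence."""
    mu, sigma = mean(samples_ms), stdev(samples_ms)
    z_soft = NormalDist().inv_cdf(conf_soft)
    z_term = NormalDist().inv_cdf(conf_term)
    return mu + z_soft * sigma, mu + z_term * sigma

c_low, c_high = execution_bounds([3.1, 3.3, 3.0, 3.4, 3.2, 3.6, 2.9, 3.3])
print(round(c_low, 2), round(c_high, 2))   # c_high > c_low; both far below a 50 ms deadline
```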
Figure 4: Execution Events Showing Utility and Desired Response
The evaluation of the EPA-DM schedulability test based on an execution duration described by
confidence intervals results in probabilistic performance predictions on a per-thread basis, in terms of
the expected number of missed soft and termination deadlines. For simplification in the formulas, all
other threads are assumed to contribute the maximum amount of "interference", which can be loosely
defined as the amount of time spent executing threads other than the one in question. The confidence
in the number of missed soft and termination deadlines is largely a function of the confidence the EPA
user has in the execution time. For example, if a thread has an execution time confidence of 99.9% and
passes the admission test, then it is expected to miss its associated deadline 0.1% of the time or less.
The sufficient (but not necessary) schedulability test for DM is used in part to determine schedulability
in the EPA-DM scheduling policy shown in Figure 5; here it is assumed that computation time
is expressed as a normal distribution (the normal distribution assumption is not required, but greatly
reduces the number of offline samples needed compared to assuming no distribution). I max (i) is the
interference time by higher priority threads which preempt and execute a number of
times during the period in which thread i runs. The number of times that thread k executes during a
period of thread i is based on the period and execution time of thread k. C low (i) is the shortest execution
duration of thread i, C high (i) is the longest execution duration of thread i, and T j is the period
of thread j. Z p low (i) and Z p high (i) are the unit normal distribution quantiles for the execution time of thread i.

Eq. 1 (from probability theory for a normal distribution):
    C low or high (i) = mean(i) + Z p low or high (i) * sigma(i)
Eq. 2 (EPA-DM admission test):
    C low or high (i) / D soft or term (i) + I max (i) / D soft or term (i) <= 1.0 ?
where the interference term I max (i) bounds the time taken from thread i by higher-priority threads before its deadline, each such thread j contributing a number of executions determined by its period T j.
Figure 5: Schedulability Formulas for EPA-DM Policy
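A compact rendering of the admission test (Eq. 2) in code is given below; it assumes the classical deadline monotonic interference bound, in which each higher-priority thread j contributes ceil(D/T_j) preemptions of up to C_high(j). That bound may differ in detail from the EPA-DM interference term, and all thread values here are made up.

```python
import math

def interference(i, threads, horizon):
    """Classical DM-style bound on preemption of thread i by higher-priority
    (shorter-deadline) threads over 'horizon' time units."""
    return sum(math.ceil(horizon / t["period"]) * t["c_high"]
               for j, t in enumerate(threads)
               if j != i and t["d_term"] < threads[i]["d_term"])

def admissible(i, threads, use_high=True):
    """Eq. 2: (C(i) + I_max(i)) / D(i) <= 1.0 for the chosen bound."""
    t = threads[i]
    c, d = (t["c_high"], t["d_term"]) if use_high else (t["c_low"], t["d_soft"])
    return (c + interference(i, threads, d)) / d <= 1.0

threads = [
    {"c_low": 9.0, "c_high": 10.0, "d_soft": 45, "d_term": 50, "period": 100},
    {"c_low": 60.0, "c_high": 70.0, "d_soft": 380, "d_term": 400, "period": 500},
]
print(admissible(0, threads), admissible(1, threads))   # both True -> schedulable
```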
An example illustrates the use of the EPA-DM scheduling theory. Assuming that there are two
threads that have a normal distribution of execution times, and that the worst-case execution time,
WCET(i), is known for comparison, the attributes of the threads are shown in Table 7. If these threads
can be scheduled based on the EPA-DM scheduling admission test, then thread 1 has a probability
of completing execution before D soft of at least 99.9%, expressed P(C low < D soft) >= 0.999. Similarly, P(C high < D term) >= 0.9998. Likewise, thread 2 has its own respective confidences for P(C low < D soft) and P(C high < D term).

Thread  C expected  sigma  N trials  Z p low  conf soft  Z p high  conf term  WCET  D soft  D term  T
Table 7: Parameters for Example Threads
The equations in Figure 5 are used to determine the schedulability of the two threads using execution
time confidence and desired D soft and D term confidence.
Thread 1
Using Eq. 1, C high (1) and C low (1) are computed from the thread's expected execution time and its two quantiles. Because Thread 1 has the shorter deadline of the two threads, it is assigned the highest priority. Therefore, the interference term, I max (1), is zero, which simplifies the schedulability test for Thread 1. In this case, Equation 2, as applied to Thread 1, becomes:
    C low or high (i) / D soft or term (i) <= 1.0
The use of C high (1) in this formula gives 48.72/50 <= 1.0, while the use of C low (1) gives 49.86/50 <= 1.0, so this thread is schedulable.
Thread 2
Using Eq. 1, C high (2) and C low (2) are computed in the same way. Using Eq. 2:
    C low or high (2) / D soft or term (2) + I max (2) / D soft or term (2) <= 1.0 ?
The meaning of I max (2) is that Thread 2 can be interrupted twice during its period by Thread 1, and that in each case Thread 1 might execute until it is terminated by the EPA at D term (2). Evaluating with C high (the 400 case) and with C low (the 420 case) both satisfy the inequality. Because both of these formulas are satisfied, Thread 2 is schedulable.
The example shows how the EPA-DM scheduling approach supports soft real-time computation in
which it is not necessary to guarantee that every instance of a periodic computation completes execution by its deadline. In fact, although it is not shown here, the use of WCET in the basic DM formulas results in Thread 2 being unschedulable. WCET is a statistical extreme, and cannot be guaranteed.
In general, the RT-PCIP mechanism, in conjunction with the EPA-DM scheduling approach, offers
new, flexible support for device-to-device processing such as needed by DVEs. Threads can be created,
executed, and monitored in order to deliver predictable, quantifiable performance according to the
application's needs. Operating system overhead is kept to a minimum, as the amount of dynamic
interaction between application code and the operating system is low.
5 Dynamically Negotiated Scheduling
There are also more general cases where there is a need to control the soft real-time execution of applications
in a DVE. Soft real-time applications must be able to modify their resource consumption (and
consequently the quality of their output) at any given time based on the relative importance of the data
to the user, the amount of physical resources available to the applications, and the importance of other
concurrently executing applications.
This section discusses our work in applying dynamic negotiation to CPU scheduling in support of
soft real-time application execution in which a middleware Dynamic QoS Manager (DQM) allocates a
CPU to individual applications according to dynamic application need and corresponding user satis-
faction. Applications are able to trade off individual performance for overall user satisfaction, cooperating
to maximize user satisfaction by selectively reducing or increasing resource consumption as
available resources and requirements change.
A Quality of Service (QoS) [2] approach can be applied to scheduling to provide operating system
support for soft real-time application execution. A QoS system allows an application to reserve
a certain amount of resources at initialization time (subject to resource availability), and guarantees
that these resources will be available to the application for the duration of its execution. Applied to
scheduling, this means that each application can reserve a fixed percentage of the CPU for its sole use.
Once the available CPU cycles have been committed, no new applications can begin executing until
other applications have finished, freeing up enough CPU for the new applications' requests to be met.
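In code, this style of CPU reservation amounts to a simple admission check: the sum of reserved fractions may not exceed the capacity. The sketch below is generic and not tied to any of the systems cited.

```python
class CpuReservations:
    """Generic QoS-style CPU reservation: admit only while the total share <= 1.0."""
    def __init__(self):
        self.shares = {}

    def reserve(self, app: str, share: float) -> bool:
        if sum(self.shares.values()) + share <= 1.0:
            self.shares[app] = share
            return True
        return False                       # must wait until other applications finish

    def release(self, app: str) -> None:
        self.shares.pop(app, None)

qos = CpuReservations()
print(qos.reserve("video", 0.6), qos.reserve("audio", 0.3), qos.reserve("game", 0.2))
# -> True True False: the third request must wait until CPU is freed
```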
In a soft real-time environment, the application needs a reasonable assurance (rather than an absolute
assurance) that resources will be available on request. In both QoS and hard real-time environ-
ments, the system makes strict guarantees of service, and requires that each application make a strict
statement of its resource needs. As a result, applications in these environment must use worst case
estimates of resource need. In soft real-time systems, applications make more optimistic estimates of
their resource needs, expecting that the operating system will generally be able to meet those needs on
demand and will inform the applications when it is unable to do so.
Several operating systems designers have created designs and interfaces to support some form of
soft real-time operation. These new operating systems interfaces allow a process to either (1) negotiate
with the operating system for a specific amount of resources as in RT Mach [17] and Rialto [14]; (2)
specify a range of resource allocations as in MMOSS [8]; or (3) specify a measure of application importance
that can be used to compute a fair resource allocation as in SMART [19]. These systems all
provide a mechanism that can be used to reduce the resource allotment granted to the running appli-
cations. Even though the system is able to allocate resources more aggressively, the hypothesis is that
soft real-time applications will still perform acceptably. Since their average case resource requirements
may be significantly lower than the worst-case estimates, resources can be allocated so that the benefit
is amortized over the set of executing applications.
In creating resource management mechanisms, operating systems developers have assumed that it
is possible for applications to adjust their behavior according to the availability of resources, but without
providing a general model of application development for such an environment. In the extreme,
the applications may be forced to dynamically adapt to a strategy in which the resource allocation is
less than that required for average-case execution. Mercer et al. suggest that a dynamic resource manager could be created to deal with situations of processor overload [17]. In Rialto, the researchers have
used the mechanism to develop an application repertoire (though there was apparently no attempt to
define a general model for its use).
In the DQM framework (see Figure 6) applications are constructed to take advantage of such mechanisms
without having to participate directly in a detailed negotiation protocol. The framework is
based on execution levels; each application program is constructed using a set of strategies for achieving
its goals where the strategies are ordered by their relative resource usage and the relative quality of
their output. The DQM interprets resource usage information from the operating system and execution
level information from the community of applications to balance the system load, overall user satis-
faction, and available resources across the collection of applications. Section 5.3 describes experiments
conducted to evaluate the approach.
The DQM framework supports the fundamental principles asserted in this paper for soft real-time
Thread
Thread
Thread
Application
Execution Levels
CPU
Scheduler
Figure
The DQM Architecture
processes as follows:
Principle 1: Soft Real-Time The DQM supports generic softening of real-time processes. Through execution
levels, applications may modify their period and algorithms to implement any arbitrary
soft real-time policy.
Principle 2: Application Knowledge The application provides three pieces of information per execution
level: Resources needed, benefit provided, and period.
Principle 3: Dynamic Negotiation The DQM dynamically adjusts application levels at runtime based
on application deadline misses and current system state.
5.1 Execution Levels in the DQM
For this aspect of the work, each application execution is characterized by a set of quadruples: {Level, Resource usage, Benefit, Period}.
At runtime, each application specifies its maximum CPU requirements, maximum benefit, and
its set of quadruples (Level, Resource usage, Benefit, Period) to the DQM. Level 1 represents the highest level and provides the maximum benefit using the maximum amount of resources, and lower execution levels are represented with larger numbers. For example, an application might provide information such as shown in Table 8, which indicates that the maximum amount of CPU that the application will require is 75% of the CPU, when running at its maximum level, and that at this level it will provide a user-specified benefit of 6. The table further shows that the application can run with relatively high benefit (80%) with 65% of its maximum resource allocation, but that if the level of allocation is reduced to 40%, the quality of the result will be substantially less (25%).

Maximum benefit: 6
Maximum CPU usage: 0.75
Number of execution levels: 6
Level  CPU  Benefit  Period (mS)
Table 8: Quadruples for an Example Application
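Information such as that in Table 8 might be handed to the DQM as a simple structure like the one sketched below; the two non-trivial level rows echo the figures quoted in the text, while the level-1 row and the periods are placeholders.

```python
# One way to convey Table 8 to the DQM: (level, relative CPU, relative benefit, period).
app_spec = {
    "max_benefit": 6,
    "max_cpu": 0.75,
    "num_levels": 6,          # only three example rows are filled in below
    "levels": [
        (1, 1.00, 1.00, 100),
        (2, 0.65, 0.80, 100),   # "high benefit (80%) with 65% of max allocation"
        (3, 0.40, 0.25, 100),   # "reduced to 40% ... substantially less (25%)"
    ],
}

def absolute_cpu(spec, level):
    """Convert a level's relative CPU figure into an absolute CPU fraction."""
    relative = dict((l, c) for l, c, b, p in spec["levels"])[level]
    return relative * spec["max_cpu"]

print(absolute_cpu(app_spec, 2))   # about 0.49 of the CPU
```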
5.2 Dynamic QoS Manager (DQM)
The DQM dynamically determines a specific allocation profile that best suits the needs of the applications
and the user while conforming to the requirements imposed by resource availability, as delivered
by the operating system. At runtime, applications monitor themselves to determine when deadlines
have been missed and notify the DQM in such an event. In response, the DQM informs each application
of the level at which it should execute. A modification of execution level causes the application
to internally change the algorithm used to execute. This allows the DQM to leverage the mechanisms
provided by systems such as RT Mach, Rialto, and SMART in order to provide CPU availability to
applications.
The DQM dynamically determines a level for the running applications based on the available resources
and benefit. Resource availability can be determined in a few different ways. CPU overload
is determined by the incidence of deadline misses in the running applications. CPU underutilization
is determined by CPU idle time. In the current DQM this is done by reading the CPU usage of a low
priority application. In situations of CPU overload (and consequently missed deadlines), levels are
selected that reduce overall CPU usage while maintaining adequate performance over the set of running
applications. Similarly, in situations of CPU underutilization, levels are selected so as to increase
overall CPU usage.
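The adjustment cycle just described can be written as a small control loop; the sketch below is a schematic of that behavior (with a simple heaviest-user heuristic), not the DQM implementation.

```python
def dqm_adjust(apps, deadline_misses, idle_fraction, idle_threshold=0.1):
    """One DQM decision step: on misses, lower someone's level (raise the level
    number); on sustained idle time, raise the level of some application."""
    if deadline_misses:
        victim = max(apps, key=lambda a: a["cpu"])      # heaviest CPU user (one heuristic)
        victim["level"] = min(victim["level"] + 1, victim["num_levels"])
    elif idle_fraction > idle_threshold:
        beneficiary = min(apps, key=lambda a: a["cpu"])
        beneficiary["level"] = max(beneficiary["level"] - 1, 1)
    return apps

apps = [{"name": "render", "cpu": 0.55, "level": 1, "num_levels": 6},
        {"name": "audio", "cpu": 0.20, "level": 1, "num_levels": 4}]
dqm_adjust(apps, deadline_misses=True, idle_fraction=0.0)
print([(a["name"], a["level"]) for a in apps])   # render drops to level 2
```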
Four resource allocation policies have been examined for use with the DQM:
Distributed. When an application misses a deadline, the application autonomously selects the next
lower level. A variation of this policy allows applications to raise their level when they have
successfully met N consecutive deadlines, where N is application-specific. This policy could be
used in conjunction with RT Mach reserves, MMOSS, and SMART.
Fair. This policy has an even and a proportional option: In the event of a deadline miss, the even option
reduces the level of the application that is currently using the most CPU. It assumes that all
applications are equally important and therefore attempts to distribute the CPU resource fairly
among the running applications. In the event of underutilization, this policy raises the level
of the application that is currently using the least CPU time. The proportional option uses the
benefit parameter and raises or lowers the level of the application with the highest or lowest
benefit/CPU ratio. This policy approximates the scheduling used in the SMART system.
Optimal. This policy uses each application's user-specified benefit (i.e., importance, utility, or priority)
and application-specified maximum CPU usage, as well as the relative CPU usage and benefit information
specified for each level to determine a QoS allocation of CPU resources that maximizes
overall user benefit. This policy performs well for initial QoS allocations, but our experiments
have shown that execution level choice can fluctuate wildly. As a result, a second option was implemented
that restricts the change in level to at most 1. This policy is similar to the value-based
approach proposed for the Alpha kernel [12].
Hybrid. This policy uses Optimal to specify the initial QoS allocations, and then uses different algorithms
to decide which levels to modify dynamically as resource availability changes. The two
options we have implemented use absolute benefit and benefit density (benefit/incremental CPU
usage) to determine execution level changes.
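To illustrate how the options differ, the proportional variant of the Fair policy picks its victim (or beneficiary) by the benefit/CPU ratio rather than by raw CPU share; a schematic version, with made-up applications, follows.

```python
def fair_proportional_pick(apps, overload: bool):
    """Pick which application's level to change under the Fair (proportional)
    option: lower the one with the lowest benefit/CPU ratio on overload,
    raise the one with the highest ratio when the CPU is underutilized."""
    ratio = lambda a: a["benefit"] / max(a["cpu"], 1e-9)
    return min(apps, key=ratio) if overload else max(apps, key=ratio)

apps = [{"name": "render", "benefit": 8, "cpu": 0.42},
        {"name": "sensor", "benefit": 2, "cpu": 0.30}]
print(fair_proportional_pick(apps, overload=True)["name"])   # -> "sensor"
```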
Application 1 - Maximum benefit: 8, Max CPU usage: 0.42, No. of levels: 9
Application 2 - Maximum benefit: 4, Max CPU usage: 0.77, No. of levels: 6
Application 3 - Maximum benefit: 5, Max CPU usage: 0.22, No. of levels: 8
Application 4 - Maximum benefit: 2, Max CPU usage: 0.62, No. of levels: 4
Table 9: Synthetic Program Characteristics (each application also specifies CPU and benefit values for each of its levels)
5.3 DQM Experiments
For the experiments presented in this section we represented VPR applications with synthetic appli-
cations. The synthetic applications consume CPU cycles and attempt to meet deadlines in accordance
with their specified execution levels, without performing any useful work. The synthetic applications
are generated as random programs that meet the desired general criteria-random total QoS require-
ment, absolute benefit, number of execution levels, and relative QoS requirements and benefit for
each level. The synthetic applications are periodic in nature, with a constant period of 0.1 second-
applications must perform some work every period. While this does not reflect the complete variability
of real applications, it simplifies the analysis of the resulting data.
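The synthetic applications are essentially periodic busy-loops parameterized by their quadruples; a stripped-down version of the idea is sketched below (the timing values and the miss-notification hook are placeholders).

```python
import time

def synthetic_app(levels, dqm_notify_miss, period_s=0.1, iterations=100):
    """Busy-wait for the CPU share of the current level each period and report
    deadline misses; 'levels' maps level -> fraction of the period to consume."""
    level = 1
    for _ in range(iterations):
        start = time.monotonic()
        deadline = start + period_s
        while time.monotonic() - start < levels[level] * period_s:
            pass                                  # synthetic "work"
        if time.monotonic() > deadline:
            level = dqm_notify_miss(level)        # DQM may hand back a new level
        time.sleep(max(0.0, deadline - time.monotonic()))

# e.g. synthetic_app({1: 0.6, 2: 0.4, 3: 0.2}, lambda lvl: min(lvl + 1, 3))
```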
For a given set of applications, data was generated by running the applications and the DQM
and recording 100 samples of the current level, expected CPU usage, and actual CPU usage for each
application, as well as the total CPU usage, total benefit over all applications, and current system idle
time. The applications ran for a total of 10 seconds (100 periods). Our results indicate that this is
adequate for observing the performance of the policies at steady state.
The experiments can be run with 1-9 applications each having between 2 and 9 levels. For simplifying
the comparison presented here, a single representative set of synthetic applications was used.
The execution level information for the application set is shown in Table 9. There are 4 applications,
each having between 4 and 9 levels with associated benefit and CPU usage numbers.
Figure 7(a) shows the execution levels that result for the given application set when running the DQM with the Distributed policy with a skip value of 0.

Figure 7: Performance of Distributed (skip=0). (a) Execution Levels; (b) CPU Usage.

The skip value indicates the number of missed deadlines that must occur in succession before the application reduces its execution level. The skip
value of 0 means that the application reacts instantly in lowering its level, regardless of the transient
nature of the overload situation. The execution levels can be seen to change rapidly at the beginning,
because the system is started in a state of CPU overload, i.e. the combined QoS requirement for the
complete set of applications running at the highest level (level 1) is approximately 200% of the CPU.
By the 10th sample, the applications have stabilized at levels that can operate within the available
CPU resources. There is an additional level adjustment of application 3 at the 38th sample due to
an additional missed deadline probably resulting from transient CPU load generated by some non-QoS
application. The lack of changes at the very beginning and the wild fluctuations at the end of
each graph are a result of the start-up and termination of the applications at the beginning and end
of each experiment, combined with a slightly longer than 1/10 second data recording sample interval.
Figure 7(b) shows the CPU usage for the applications in the same experiment. The total requested CPU
usage (designated Sum) starts out at approximately twice the available CPU, and then drops down to 1
as the applications are adjusted to stable levels. Note also the same adjustment at sample 38, lowering
the total CPU usage to approximately 80%.
Figure 8(a) shows the CPU usage for the Distributed policy, with a skip value of 2. Using a larger
skip value desensitizes the algorithm to deadline misses such that a level adjustment is only made
for every 3rd deadline miss, rather than for each one. This can result in a longer initial period before
stability is reached, but will result in less overshoot as it gives the applications time to stabilize after level adjustments.

Figure 8: CPU Usage. (a) Distributed (skip=2); (b) Fair (proportional).

Stability is not reached until about sample 16, and there are two small adjustments
at samples 24 and 49. However, the overall CPU usage stays very close to 100% for the duration of the
experiment with essentially no overshoot as is observed in Figure 7(b).
The results of running the applications with the Fair policy using the even option are not shown.
This centralized policy makes decisions in an attempt to give all applications an equal share of the
CPU. This policy generally produces results nearly identical to the Distributed policy, as it did with
this set of applications. Figure 8(b) shows the results of running the applications with the Fair policy
using the proportional option. This version of the policy attempts to distribute shares of the available
CPU cycles to each application proportional to that application's benefit. Under the previous policies,
the CPU percentage used by all applications was approximately the same. With this policy, the CPU-usage/benefit ratio is approximately the same for all applications. In fact, the ratio is as close to equal as can be reached given the execution levels defined for each application.
Figure 9 shows the CPU usage for the applications running with the Optimal policy. This policy
reaches steady state operation immediately, as the applications enter the system at a level that uses no
more than the available CPU cycles. This policy optimizes the CPU allocation so as to maximize the
total benefit for the set of applications, producing an overall benefit number of 14.88 as compared with
13.02 for the other policies. Note also that because this policy optimizes for benefit and not necessarily
for utilization as in the other policies shown, it can result in a more stable steady state, yielding no
additional deadline misses and requiring no corrections. However, this policy is the least stable given changing CPU resources, such as those caused by other applications entering or leaving the system.

Figure 9: CPU Usage with Optimal

Figure 10: Performance of Four Policies. (a) Application 2; (b) Sum.
Figure 10(a) shows the plots for application 2 with the four different policies. Figure 10(b) shows
the summed CPU usage for the same four policies shown in Figure 10(a). This graph gives an indication
of the time required for all applications to reach steady state, along with the CPU utilization
resulting from the allocations. Figure 10(a), in particular, summarizes the differences between the various
policies. The Optimal policy selects a feasible value immediately and so the level of the application
is unchanged for the duration of the experiment. The Distributed and Fair (even) policies reach steady
state at the same value, although they take different amounts of time to reach that state, the Distributed
policy taking slightly longer. The Fair (proportional) policy reaches steady state at about the same time
as the Distributed and Fair policies, although its allocation is slightly less in this case.
In general, these experiments show that given a set of level-based applications, it is possible to create
a DQM that dynamically adjusts application execution levels to maximize user satisfaction within
available resources, even in the absence of any underlying QoS or other soft real-time scheduling mech-
anisms. Four DQM decision policies demonstrate the range of possibilities inherent in this model. In [4]
the DQM is extended to support more extensive use of execution levels.
6 Summary and Conclusion
Next-generation multimedia applications require the timely delivery of complex data across and within
nodes in dynamic computing environments. User requirements can change frequently; writing and
executing applications that deliver and manage the data according to these rapidly-changing requirements
requires new support from the operating system and development tools.
This paper has introduced a set of principles that have evolved from our work in designing a DVE
and resource managers to support it. The principles address soft real-time, communicating application
knowledge, and support for dynamic negotiation. The paper also presents a framework and execution
levels to implement the principles. Execution levels have been used in three different contexts -
the Gryphon distributed object manager, the EPA/RT-PCIP facility, and the DQM - to support soft
real-time applications. In the Gryphon project the paper has shown how the approach can be used to
adjust object storage strategies to compensate for scarce network bandwidth. The EPA/RT-PCIP study
describes how execution levels can be used to provide real-time support to kernel pipelines, using
confidence and reliability as parameters in the execution levels. The DQM study demonstrates how
execution levels can be used to implement very general dynamic negotiation on top of a best-effort
operating system, including measurements to reflect the behavior of the system.
Acknowledgment
Authors Nutt, Brandt, and Griff were partially supported by NSF Grant No. IRI-9307619. Jim Mankovich
has designed and almost single-handedly built three different versions of the VPR. Several graduate
students, particularly Chris Gantz, have helped us through their participation in group discussions of
virtual environments, human-computer interfaces, real-time, soft real-time, and performance of our
prototype system.
References
Hard real-time scheduling: The deadline monotonic approach
A survey of QoS architectures.
Extensibility, safety and performance in the spin operating system.
real-time application execution with dynamic quality of service assurance
Scheduling hard real-time systems: A review
The design of a QoS controlled ATM based communication system.
Exploiting in-kernel data paths to improve I/O throughput and CPU availability
Evaluations of soft real-time handling methods in a soft real-time framework
Scheduling and IPC mechanisms for continuous media
Tailorable location policies for distributed object systems
Dynamic quality of service resource management for multimedia applications on general purpose operating systems.
III. Support for user-centric modular real-time resource management in the Rialto operating system
CPU reservations and time constraints: Efficient
Object caching in a CORBA compliant system.
Constructing reliable distributed communication systems with CORBA.
Processor capacity reserves: Operating system support for multimedia applications.
A highly available
The design
Agile applications-aware adaptation for mobility
Resource management for a virtual planning room.
Software support for a virtual planning room.
A resource centric approach to multimedia operating systems.
WWW page at http://www.
A resource allocation model for QoS management.
The architectural design of globe: A wide-area distributed system
Keywords: dynamic resource negotiation; multimedia support; execution levels; confidence-based scheduling; tailorable distributed object policies; soft real-time
628048 | Natural Language Grammatical Inference with Recurrent Neural Networks. | Abstract: This paper examines the inductive inference of a complex grammar with neural networks - specifically, the task considered is that of training a network to classify natural language sentences as grammatical or ungrammatical, thereby exhibiting the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or Government-and-Binding theory. Neural networks are trained, without the division into learned vs. innate components assumed by Chomsky, in an attempt to produce the same judgments as native speakers on sharply grammatical/ungrammatical data. How a recurrent neural network could possess linguistic capability and the properties of various common recurrent neural network architectures are discussed. The problem exhibits training behavior which is often not present with smaller grammars and training was initially difficult. However, after implementing several techniques aimed at improving the convergence of the gradient descent backpropagation-through-time training algorithm, significant learning was possible. It was found that certain architectures are better able to learn an appropriate grammar. The operation of the networks and their training is analyzed. Finally, the extraction of rules in the form of deterministic finite state automata is investigated. | Introduction
This paper considers the task of classifying natural language sentences as grammatical or ungrammatical.
We attempt to train neural networks, without the bifurcation into learned vs. innate components assumed by
Chomsky, to produce the same judgments as native speakers on sharply grammatical/ungrammatical data.
Only recurrent neural networks are investigated for computational reasons. Computationally, recurrent neural
networks are more powerful than feedforward networks and some recurrent architectures have been
shown to be at least Turing equivalent [53, 54]. We investigate the properties of various popular recurrent
neural network architectures, in particular Elman, Narendra & Parthasarathy (N&P) and Williams & Zipser (W&Z)
recurrent networks, and also Frasconi-Gori-Soda (FGS) locally recurrent networks. We find that
both Elman and W&Z recurrent neural networks are able to learn an appropriate grammar after implementing
techniques for improving the convergence of the gradient descent based backpropagation-through-time
training algorithm. We analyze the operation of the networks and investigate a rule approximation of what
the recurrent network has learned - specifically, the extraction of rules in the form of deterministic finite
state automata.
Previous work [38] has compared neural networks with other machine learning paradigms on this problem
- this work focuses on recurrent neural networks, investigates additional networks, analyzes the operation
of the networks and the training algorithm, and investigates rule extraction.
This paper is organized as follows: section 2 provides the motivation for the task attempted. Section 3
provides a brief introduction to formal grammars and grammatical inference and describes the data. Section
4 lists the recurrent neural network models investigated and provides details of the data encoding for the
networks. Section 5 presents the results of investigation into various training heuristics, and investigation of
training with simulated annealing. Section 6 presents the main results and simulation details, and investigates
the operation of the networks. The extraction of rules in the form of deterministic finite state automata
is investigated in section 7, and section 8 presents a discussion of the results and conclusions.
2 Motivation
2.1 Representational Power
Natural language has traditionally been handled using symbolic computation and recursive processes. The
most successful stochastic language models have been based on finite-state descriptions such as n-grams or
hidden Markov models. However, finite-state models cannot represent hierarchical structures as found in
natural language 1 [48].
1 The inside-outside re-estimation algorithm is an extension of hidden Markov models intended to be useful for learning hierarchical
systems. The algorithm is currently only practical for relatively small grammars [48].
In the past few years several recurrent neural network architectures have emerged
which have been used for grammatical inference [9, 21, 19, 20, 68]. Recurrent neural networks have been
used for several smaller natural language problems, e.g. papers using the Elman network for natural language
tasks include: [1, 12, 24, 58, 59]. Neural network models have been shown to be able to account
for a variety of phenomena in phonology [23, 61, 62, 18, 22], morphology [51, 41, 40] and role assignment
[42, 58]. Induction of simpler grammars has been addressed often - e.g. [64, 65, 19] on learning Tomita
languages [60]. The task considered here differs from these in that the grammar is more complex. The
recurrent neural networks investigated in this paper constitute complex, dynamical systems - it has been
shown that recurrent networks have the representational power required for hierarchical solutions [13], and
that they are Turing equivalent.
2.2 Language and Its Acquisition
Certainly one of the most important questions for the study of human language is: How do people unfailingly
manage to acquire such a complex rule system? A system so complex that it has resisted the efforts
of linguists to date to adequately describe in a formal system [8]. A couple of examples of the kind of
knowledge native speakers often take for granted are provided in this section.
For instance, any native speaker of English knows that the adjective eager obligatorily takes a complementizer
for with a sentential complement that contains an overt subject, but that the verb believe cannot.
Moreover, eager may take a sentential complement with a non-overt, i.e. an implied or understood, subject,
but believe cannot:
*I am eager John to be here              I believe John to be here
I am eager for John to be here           *I believe for John to be here
I am eager to be here                    *I believe to be here
2 As is conventional, an asterisk is used to indicate ungrammaticality.
Such grammaticality judgments are sometimes subtle but unarguably form part of the native speaker's language
competence. In other cases, judgment falls not on acceptability but on other aspects of language
competence such as interpretation. Consider the reference of the embedded subject of the predicate to talk
to in the following examples:
John is too stubborn for Mary to talk to
John is too stubborn to talk to
John is too stubborn to talk to Bill
In the first sentence, it is clear that Mary is the subject of the embedded predicate. As every native speaker
knows, there is a strong contrast in the co-reference options for the understood subject in the second and
third sentences despite their surface similarity. In the third sentence, John must be the implied subject of the
predicate to talk to. By contrast, John is understood as the object of the predicate in the second sentence,
the subject here having arbitrary reference; in other words, the sentence can be read as John is too stubborn
for some arbitrary person to talk to John. The point to emphasize here is that the language faculty has
impressive discriminatory power, in the sense that a single word, as seen in the examples above, can result
in sharp differences in acceptability or alter the interpretation of a sentence considerably. Furthermore, the
judgments shown above are robust in the sense that virtually all native speakers agree with the data.
In the light of such examples and the fact that such contrasts crop up not just in English but in other languages
(for example, the stubborn contrast also holds in Dutch), some linguists (chiefly Chomsky [7]) have
hypothesized that it is only reasonable that such knowledge is only partially acquired: the lack of variation
found across speakers, and indeed, languages for certain classes of data suggests that there exists a fixed
component of the language system. In other words, there is an innate component of the language faculty of
the human mind that governs language processing. All languages obey these so-called universal principles.
Since languages do differ with regard to things like subject-object-verb order, these principles are subject to
parameters encoding systematic variations found in particular languages. Under the innateness hypothesis,
only the language parameters plus the language-specific lexicon are acquired by the speaker; in particular,
the principles are not learned. Based on these assumptions, the study of these language-independent principles
has become known as the Principles-and-Parameters framework, or Government-and-Binding (GB)
theory.
This paper investigates whether a neural network can be made to exhibit the same kind of discriminatory
power on the sort of data GB-linguists have examined. More precisely, the goal is to train a neural network
from scratch, i.e. without the division into learned vs. innate components assumed by Chomsky, to produce
the same judgments as native speakers on the grammatical/ungrammatical pairs of the sort discussed
above. Instead of using innate knowledge, positive and negative examples are used (a second argument for
innateness is that it is not possible to learn the grammar without negative examples).
3 Data
We first provide a brief introduction to formal grammars, grammatical inference, and natural language; for
a thorough introduction see Harrison [25] and Fu [17]. We then detail the dataset which we have used in our
experiments.
3.1 Formal Grammars and Grammatical Inference
Briefly, a grammar G is a four tuple {N, T, P, S}, where N and T are sets of terminals and nonterminals
comprising the alphabet of the grammar, P is a set of production rules, and S is the start symbol. For every
grammar G there exists a language L(G), a set of strings of the terminal symbols, that the grammar generates
or recognizes. There also exist automata that recognize and generate the grammar. Grammatical inference
is concerned mainly with the procedures that can be used to infer the syntactic or production rules of an
unknown grammar G based on a finite set of strings I from L(G), the language generated by G, and
possibly also on a finite set of strings from the complement of L(G) [17]. This paper considers replacing
the inference algorithm with a neural network and the grammar is that of the English language. The simple
grammar used by Elman [13] shown in table 1 contains some of the structures in the complete English
grammar: e.g. agreement, verb argument structure, interactions with relative clauses, and recursion.
Table 1. A simple grammar encompassing a subset of the English language (from [13]).
In the Chomsky hierarchy of phrase structured grammars, the simplest grammar and its associated automata
are regular grammars and finite-state-automata (FSA). However, it has been firmly established [6] that the
syntactic structures of natural language cannot be parsimoniously described by regular languages. Certain
phenomena (e.g. center embedding) are more compactly described by context-free grammars which are
recognized by push-down automata, while others (e.g. crossed-serial dependencies and agreement) are
better described by context-sensitive grammars which are recognized by linear bounded automata [50].
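To make the definitions above concrete, the sketch below represents a toy right-linear (regular) grammar as a four
tuple and enumerates the strings of its language by breadth-first expansion. The particular grammar is invented for
illustration only and is unrelated to the grammar of table 1 or to the dataset used in this paper.

```python
from collections import deque

# A toy right-linear (regular) grammar G = (N, T, P, S): nonterminals, terminals,
# productions and start symbol. Each production alternative is a tuple of symbols.
N = {"S", "A", "B"}
T = {"the", "cat", "dog", "sleeps", "barks"}
P = {
    "S": [("the", "A")],
    "A": [("cat", "B"), ("dog", "B")],
    "B": [("sleeps",), ("barks",)],
}
S = "S"

def generate(max_strings=10):
    """Enumerate strings of L(G) by breadth-first expansion of sentential forms."""
    results, queue = [], deque([(S,)])
    while queue and len(results) < max_strings:
        form = queue.popleft()
        idx = next((i for i, sym in enumerate(form) if sym in N), None)  # leftmost nonterminal
        if idx is None:
            results.append(" ".join(form))
            continue
        for alt in P[form[idx]]:
            queue.append(form[:idx] + alt + form[idx + 1:])
    return results

print(generate())  # ['the cat sleeps', 'the cat barks', 'the dog sleeps', 'the dog barks']
```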
3.2 Data
The data used in this work consists of 552 English positive and negative examples taken from an introductory
GB-linguistics textbook by Lasnik and Uriagereka [37]. Most of these examples are organized into minimal
pairs like the example I am eager for John to win/*I am eager John to win above. The minimal nature of the
changes involved suggests that the dataset may represent an especially difficult task for the models. Due to
the small sample size, the raw data, namely words, were first converted (using an existing parser) into the
major syntactic categories assumed under GB-theory. Table 2 summarizes the parts of speech that were
used.
The part-of-speech tagging represents the sole grammatical information supplied to the models about particular
sentences in addition to the grammaticality status. An important refinement that was implemented
Category             Examples
Nouns (N)            John, book and destruction
Verbs (V)            hit, be and sleep
Adjectives (A)       eager, old and happy
Prepositions (P)     without and from
Complementizer (C)   that or for as in I thought that ... or I am eager for ...
Determiner (D)       the or each as in the man or each man
Adverb (Adv)         sincerely or why as in I sincerely believe ... or Why did John want ...
Marker (Mrkr)        possessive 's, of, or to as in John's mother, the destruction of ..., or I want to help ...
Table 2. Parts of speech
was to include sub-categorization information for the major predicates, namely nouns, verbs, adjectives and
prepositions. Experiments showed that adding sub-categorization to the bare category information improved
the performance of the models. For example, an intransitive verb such as sleep would be placed into a different
class from the obligatorily transitive verb hit. Similarly, verbs that take sentential complements or
double objects such as seem, give or persuade would be representative of other classes 3 . Fleshing out the
sub-categorization requirements along these lines for lexical items in the training set resulted in 9 classes for
verbs, 4 for nouns and adjectives, and 2 for prepositions. Examples of the input data are shown in table 3.
Sentence                          Encoding                Grammatical Status
I am eager for John to be here    n4 v2 a2 c n4 v2 adv    1
I am eager John to be here        n4 v2 a2 n4 v2 adv      0
I am eager to be here             n4 v2 a2 v2 adv         1
Table 3. Examples of the part-of-speech tagging
Tagging was done in a completely context-free manner. Obviously, a word, e.g. to, may be part of more than
one part-of-speech. The tagging resulted in several contradictory and duplicated sentences. Various methods
were tested to deal with these cases, however they were removed altogether for the results reported here.
3 Following classical GB theory, these classes are synthesized from the theta-grids of individual predicates via the Canonical
Structural Realization (CSR) mechanism of Pesetsky [49].
In addition, the number of positive and negative examples was equalized (by randomly removing examples
from the higher frequency class) in all training and test sets in order to reduce any effects due to differing a
priori class probabilities (when the number of samples per class varies between classes there may be a bias
towards predicting the more common class [3, 2]).
4 Neural Network Models and Data Encoding
The following architectures were investigated. Architectures 1 to 3 are topological restrictions of 4 when
the number of hidden nodes is equal and in this sense may not have the representational capability of model
4. It is expected that the Frasconi-Gori-Soda (FGS) architecture will be unable to perform the task and it
has been included primarily as a control case.
1. Frasconi-Gori-Soda locally recurrent networks [16]. A multilayer perceptron augmented with
local feedback around each hidden node. The local-output version has been used. The FGS network has
also been studied by [43] - the network is called FGS in this paper in line with [63].
2. Narendra and Parthasarathy [44]. A recurrent network with feedback connections from each output
node to all hidden nodes. The N&P network architecture has also been studied by Jordan [33, 34] - the
network is called N&P in this paper in line with [30].
3. Elman [13]. A recurrent network with feedback from each hidden node to all hidden nodes. When
training the Elman network backpropagation-through-time is used rather than the truncated version used
by Elman, i.e. in this paper "Elman network" refers to the architecture used by Elman but not the training
algorithm.
4. Williams and Zipser [67]. A recurrent network where all nodes are connected to all other nodes.
Diagrams of these architectures are shown in figures 1 to 4.
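For concreteness, the following is a minimal sketch of a forward pass through an Elman-style network of the kind
listed above: the hidden activations are fed back, together with the current input window, into the hidden layer at
the next time step, and the final hidden state drives the two outputs. The layer sizes, the small uniform weight
initialization and the use of numpy are illustrative assumptions; training (backpropagation-through-time) is
discussed in section 5.

```python
import numpy as np

def logistic(x):
    return 1.0 / (1.0 + np.exp(-x))

class ElmanNet:
    """Minimal Elman-style recurrent network: every hidden node feeds back to all hidden nodes."""
    def __init__(self, n_in, n_hidden, n_out, seed=0):
        rng = np.random.default_rng(seed)
        self.W_in  = rng.uniform(-0.1, 0.1, (n_hidden, n_in))      # input -> hidden
        self.W_rec = rng.uniform(-0.1, 0.1, (n_hidden, n_hidden))  # hidden(t-1) -> hidden(t)
        self.W_out = rng.uniform(-0.1, 0.1, (n_out, n_hidden))     # hidden -> output
        self.b_h, self.b_o = np.zeros(n_hidden), np.zeros(n_out)

    def forward(self, sequence):
        """Run the network over a sequence of input vectors and return the final output."""
        h = np.zeros(self.W_rec.shape[0])                # context units start at zero
        for x in sequence:
            h = logistic(self.W_in @ x + self.W_rec @ h + self.b_h)
        return logistic(self.W_out @ h + self.b_o)       # two outputs: grammatical / ungrammatical

# Example: three input windows of 16 values (two words x eight syntactic categories).
rng = np.random.default_rng(1)
net = ElmanNet(n_in=16, n_hidden=20, n_out=2)
print(net.forward([rng.random(16) for _ in range(3)]))
```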
For input to the neural networks, the data was encoded into a fixed length window made up of segments containing
eight separate inputs, corresponding to the classifications noun, verb, adjective, etc. Sub-categories
of the classes were linearly encoded into each input in a manner demonstrated by the specific values for the
noun input: Not a noun = 0, and noun classes 1 to 4 were assigned linearly increasing values up to
1. The linear order was defined according to the similarity between the various sub-categories 4 . Two
outputs were used in the neural networks, corresponding to grammatical and ungrammatical classifications.
The data was input to the neural networks with a window which is passed over the sentence in temporal
order from the beginning to the end of the sentence (see figure 5).
4 A fixed length window made up of segments containing 23 separate inputs, corresponding to the classifications noun class 1,
noun class 2, verb class 1, etc. was also tested but proved inferior.
Figure 1. A Frasconi-Gori-Soda locally recurrent network. Not all connections are shown fully.
Figure 2. A Narendra & Parthasarathy recurrent network. Not all connections are shown fully.
Figure 3. An Elman recurrent network. Not all connections are shown fully.
Figure 4. A Williams & Zipser fully recurrent network. Not all connections are shown fully.
The size of the window was variable from
one word to the length of the longest sentence. We note that the case where the input window is small is
of greater interest - the larger the input window, the greater the capability of a network to correctly classify
the training data without forming a grammar. For example, if the input window is equal to the longest
sentence, then the network does not have to store any information - it can simply map the inputs directly
to the classification. However, if the input window is relatively small, then the network must learn to store
information. As will be shown later these networks implement a grammar, and a deterministic finite state
automaton which recognizes this grammar can be extracted from the network. Thus, we are most interested
in the small input window case, where the networks are required to form a grammar in order to perform
well.
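The sketch below illustrates the encoding just described: each word occupies eight inputs, one per syntactic
category, the sub-category is encoded linearly within the active input, and a window of the last k words is slid
over the sentence. The exact scaling of the sub-category values and the zero-padding of the first windows are
assumptions made for illustration; the paper states only that sub-categories were linearly encoded and ordered by
similarity.

```python
# One input per syntactic category; sub-categories are encoded linearly within that input.
CATEGORY_INDEX = {"n": 0, "v": 1, "a": 2, "p": 3, "c": 4, "d": 5, "adv": 6, "mrkr": 7}
SUBCLASSES     = {"n": 4, "v": 9, "a": 4, "p": 2, "c": 1, "d": 1, "adv": 1, "mrkr": 1}

def encode_word(tag):
    """Encode a tag such as 'n4' or 'v2' as an 8-dimensional vector.
    0 means 'not this category'; otherwise the value grows linearly with the sub-class."""
    letters = "".join(ch for ch in tag if ch.isalpha())
    digits  = "".join(ch for ch in tag if ch.isdigit())
    sub     = int(digits) if digits else 1
    vec = [0.0] * 8
    vec[CATEGORY_INDEX[letters]] = sub / SUBCLASSES[letters]   # assumed scaling into (0, 1]
    return vec

def input_windows(tagged_sentence, k=2):
    """Yield the sliding windows of k encoded words presented to the network in turn."""
    tags = tagged_sentence.split()
    for t in range(len(tags)):
        window = tags[max(0, t - k + 1): t + 1]
        pad = [[0.0] * 8] * (k - len(window))   # assumed zero-padding of the first windows
        yield [v for word in pad + [encode_word(w) for w in window] for v in word]

for w in input_windows("n4 v2 a2 c n4 v2 adv", k=2):
    print(w)
```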
5 Gradient Descent and Simulated Annealing Learning
Backpropagation-through-time [66] 5 has been used to train the globally recurrent networks 6 , and the gradient
descent algorithm described by the authors [16] was used for the FGS network. The standard gradient
descent algorithms were found to be impractical for this problem 7 . The techniques described below for
improving convergence were investigated. Due to the dependence on the initial parameters, a number of
simulations were performed with different initial weights and training set/test set combinations. However,
due to the computational complexity of the task 8 , it was not possible to perform as many simulations as
desired. The standard deviation of the NMSE values is included to help assess the significance of the results.
5 Backpropagation-through-time extends backpropagation to include temporal aspects and arbitrary connection topologies by
considering an equivalent feedforward network created by unfolding the recurrent network in time.
6 Real-time [67] recurrent learning (RTRL) was also tested but did not show any significant convergence for the present problem.
7 Without modifying the standard gradient descent algorithms it was only possible to train networks which operated on a large
temporal input window. These networks were not forced to model the grammar, they only memorized and interpolated between the
training data.
8 Each individual simulation in this section took an average of two hours to complete on a Sun
Figure 5. Depiction of how the neural network inputs come from an input window on the sentence. The window
moves from the beginning to the end of the sentence.
Table 4 shows some results for using and not using the techniques listed below. Except where noted,
the results in this section are for Elman networks using: two word inputs, 10 hidden nodes, the quadratic
cost function, the logistic sigmoid function, sigmoid output activations, one hidden layer, the learning rate
schedule shown below, an initial learning rate of 0.2, the weight initialization strategy discussed below, and 1
million stochastic updates (target values are only provided at the end of each sentence).
Standard                   NMSE    Std. Dev.     Variation                  NMSE    Std. Dev.
Update: batch              0.931   0.0036        Update: stochastic         0.366   0.035
Learning rate: constant    0.742   0.154         Learning rate: schedule    0.394   0.035
Activation: logistic       0.387   0.023         Activation: tanh           0.405   0.14
Sectioning: no             0.367   0.011         Sectioning: yes            0.573   0.051
Cost function: quadratic   0.470   0.078         Cost function: entropy     0.651   0.0046
Table 4. Comparisons of using and not using various convergence techniques. All other parameters are constant in
each case: Elman networks using: two word inputs (i.e. a sliding window of the current and previous word), 10 hidden
nodes, the quadratic cost function, the logistic activation function, sigmoid output activations, one hidden layer, a
learning rate schedule with an initial learning rate of 0.2, the weight initialization strategy discussed below, and 1
million stochastic updates. Each NMSE result represents the average of four simulations. The standard deviation
value given is the standard deviation of the four individual results.
1. Detection of Significant Error Increases. If the NMSE increases significantly during training then network
weights are restored from a previous epoch and are perturbed to prevent updating to the same point.
This technique was found to increase robustness of the algorithm when using learning rates large enough
to help avoid problems due to local minima and "flat spots" on the error surface, particularly in the case
of the Williams & Zipser network.
2. Target Outputs. Target outputs were 0.1 and 0.9 using the logistic activation function and -0.8 and 0.8
using the tanh activation function. This helps avoid saturating the sigmoid function. If targets were set to
the asymptotes of the sigmoid this would tend to: a) drive the weights to infinity, b) cause outlier data to
produce very large gradients due to the large weights, and c) produce binary outputs even when incorrect
- leading to decreased reliability of the confidence measure.
3. Stochastic Versus Batch Update. In stochastic update, parameters are updated after each pattern
presentation, whereas in true gradient descent (often called "batch" updating) gradients are accumulated over
the complete training set. Batch update attempts to follow the true gradient, whereas a stochastic path is
followed using stochastic update.
Stochastic update is often much quicker than batch update, especially with large, redundant datasets [39].
Additionally, the stochastic path may help the network to escape from local minima. However, the error
can jump around without converging unless the learning rate is reduced, most second order methods
do not work well with stochastic update, and stochastic update is harder to parallelize than batch [39].
Batch update provides guaranteed convergence (to local minima) and works better with second order
techniques. However it can be very slow, and may converge to very poor local minima.
In the results reported, the training times were equalized by reducing the number of updates for the batch
case (for an equal number of weight updates batch update would otherwise be much slower). Batch update
often converges quicker using a higher learning rate than the optimal rate used for stochastic update 9 ,
hence altering the learning rate for the batch case was investigated. However, significant convergence was
not obtained as shown in table 4.
4. Weight Initialization. Random weights are initialized with the goal of ensuring that the sigmoids do not
start out in saturation but are not very small (corresponding to a flat part of the error surface) [26]. In
addition, several (20) sets of random weights are tested and the set which provides the best performance on
the training data is chosen. In our experiments on the current problem, it was found that these techniques
do not make a significant difference.
5. Learning Rate Schedules. Relatively high learning rates are typically used in order to help avoid slow
convergence and local minima. However, a constant learning rate results in significant parameter and
performance fluctuation during the entire training cycle such that the performance of the network can
alter significantly from the beginning to the end of the final epoch. Moody and Darken have proposed
"search then converge" learning rate schedules of the form [10, 11]:
$\eta(t) = \eta_0 / (1 + t/\tau)$     (1)
where $\eta(t)$ is the learning rate at time t, $\eta_0$ is the initial learning rate, and $\tau$ is a constant.
We have found that the learning rate during the final epoch still results in considerable parameter
fluctuation, hence we have added an additional term to further reduce the learning rate over the final
epochs (our specific learning rate schedule can be found in a later section). We have found the use of
learning rate schedules to improve performance considerably as shown in table 4 (a sketch of such a
schedule is given after this list).
9 Stochastic update does not generally tolerate as high a learning rate as batch update due to the stochastic nature of the updates.
6. Activation Function. Symmetric sigmoid functions (e.g. tanh) often improve convergence over the
standard logistic function. For our particular problem we found that the difference was minor and that
the logistic function resulted in better performance as shown in table 4.
7. Cost Function. The relative entropy cost function [4, 29, 57, 26, 27] has received particular attention and
has a natural interpretation in terms of learning probabilities [36]. We investigated using both quadratic
and relative entropy cost functions:
The quadratic cost function is defined as $E = \frac{1}{2} \sum_k (d_k - y_k)^2$   (2)
The relative entropy cost function is defined as $E = \sum_k [\, d_k \log(d_k / y_k) + (1 - d_k) \log((1 - d_k)/(1 - y_k)) \,]$   (3)
where y and d correspond to the actual and desired output values, k ranges over the outputs (and also the
patterns for batch update). We found the quadratic cost function to provide better performance as shown
in table 4. A possible reason for this is that the use of the entropy cost function leads to an increased
variance of weight updates and therefore decreased robustness in parameter updating.
8. Sectioning of the Training Data. We investigated dividing the training data into subsets. Initially, only
one of these subsets was used for training. After 100% correct classification was obtained or a pre-specified
time limit expired, an additional subset was added to the "working" set. This continued until
the working set contained the entire training set. The data was ordered in terms of sentence length with
the shortest sentences first. This enabled the networks to focus on the simpler data first. Elman suggests
that the initial training constrains later training in a useful way [13]. However, for our problem, the use
of sectioning has consistently decreased performance as shown in table 4.
10 Results which are obtained over an epoch involving stochastic update can be misleading. We have been surprised to
find quite a significant difference in these on-line NMSE calculations compared to a static calculation even if the algorithm appears
to have converged.
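As referenced in item 5 above, the following sketch implements a search-then-converge schedule of the form of
equation (1) together with the quadratic and relative entropy cost functions of equations (2) and (3). The extra
factor used to reduce the learning rate over the final epochs and the exact form of the relative entropy cost are
assumptions made for this sketch; the paper states only that an additional term was added to the schedule.

```python
import math

def search_then_converge(t, eta0=0.2, tau=1000.0, epoch=None, n_epochs=None, c=0.65):
    """Equation (1): eta(t) = eta0 / (1 + t / tau), optionally scaled down over the
    final epochs. The extra factor below is an illustrative assumption, not the
    paper's exact additional term."""
    eta = eta0 / (1.0 + t / tau)
    if epoch is not None and n_epochs is not None:
        eta *= max(0.0, 1.0 - c * epoch / n_epochs)   # assumed form of the extra reduction
    return eta

def quadratic_cost(y, d):
    """Quadratic cost, equation (2): 0.5 * sum_k (d_k - y_k)^2."""
    return 0.5 * sum((dk - yk) ** 2 for yk, dk in zip(y, d))

def relative_entropy_cost(y, d, eps=1e-12):
    """Relative entropy cost, equation (3), in the usual form for values in (0, 1)."""
    return sum(dk * math.log((dk + eps) / (yk + eps)) +
               (1.0 - dk) * math.log((1.0 - dk + eps) / (1.0 - yk + eps))
               for yk, dk in zip(y, d))

print(search_then_converge(t=500, epoch=50, n_epochs=100))
print(quadratic_cost([0.7, 0.3], [0.9, 0.1]), relative_entropy_cost([0.7, 0.3], [0.9, 0.1]))
```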
We have also investigated the use of simulated annealing. Simulated annealing is a global optimization
method [32, 35]. When minimizing a function, any downhill step is accepted and the process repeats from
this new point. An uphill step may also be accepted. It is therefore possible to escape from local minima.
As the optimization process proceeds, the length of the steps declines and the algorithm converges on the
global optimum. Simulated annealing makes very few assumptions regarding the function to be optimized,
and is therefore quite robust with respect to non-quadratic error surfaces.
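A minimal sketch of the simulated annealing procedure just described, applied to a vector of network weights, is
given below. The Gaussian neighbourhood move, the geometric cooling schedule and the acceptance rule are generic
textbook choices and are not the settings of the adaptive simulated annealing package actually used in the
experiments.

```python
import math, random

def simulated_annealing(cost, w0, n_steps=10000, t0=1.0, cooling=0.999, step=0.1, seed=0):
    """Generic simulated annealing over a weight vector: accept any downhill move,
    and accept an uphill move with probability exp(-delta / T)."""
    rng = random.Random(seed)
    w, best = list(w0), list(w0)
    c, best_c, T = cost(w), cost(w), t0
    for _ in range(n_steps):
        cand = [wi + rng.gauss(0.0, step) for wi in w]   # random neighbourhood move
        delta = cost(cand) - c
        if delta <= 0 or rng.random() < math.exp(-delta / max(T, 1e-12)):
            w, c = cand, cost(cand)
            if c < best_c:
                best, best_c = list(w), c
        T *= cooling                                     # geometric cooling schedule
    return best, best_c

# Toy usage: minimise a simple quadratic "error surface" over three weights.
print(simulated_annealing(lambda w: sum(x * x for x in w), [2.0, -1.5, 0.5])[1])
```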
Previous work has shown the use of simulated annealing for finding the parameters of a recurrent network
model to improve performance [56]. For comparison with the gradient descent based algorithms the use
of simulated annealing has been investigated in order to train exactly the same Elman network as has been
successfully trained to 100% correct training set classification using backpropagation-through-time (details
are in section 6). No significant results were obtained from these trials 11 . The use of simulated annealing
has not been found to improve performance as in Simard et al. [56]. However, their problem was the parity
problem using networks with only four hidden units whereas the networks considered in this paper have
many more parameters.
This result provides an interesting comparison to the gradient descent backpropagation-through-time
(BPTT) method. BPTT makes the implicit assumption that the error surface is amenable to gradient descent
optimization, and this assumption can be a major problem in practice. However, although difficulty is
encountered with BPTT, the method is significantly more successful than simulated annealing (which
makes few assumptions) for this problem.
6 Experimental Results
Results for the four neural network architectures are given in this section. The results are based on multiple
training/test set partitions and multiple random seeds. In addition, a set of Japanese control data was used as
a test set (we do not consider training models with the Japanese data because we do not have a large enough
dataset). Japanese and English are at the opposite ends of the spectrum with regard to word order. That is,
Japanese sentence patterns are very different from English. In particular, Japanese sentences are typically
SOV (subject-object-verb) with the verb more or less fixed and the other arguments more or less available
to freely permute. English data is of course SVO and argument permutation is generally not available.
For example, the canonical Japanese word order is simply ungrammatical in English. Hence, it would be
extremely surprising if an English-trained model accepts Japanese, i.e. it is expected that a network trained
on English will not generalize to Japanese data. This is what we find - all models resulted in no significant
generalization on the Japanese data (50% error on average).
11 The adaptive simulated annealing code by Lester Ingber [31, 32] was used.
Five simulations were performed for each architecture. Each simulation took approximately four hours on a
Sun. Table 5 summarizes the results obtained with the various networks. In order to make
the number of weights in each architecture approximately equal we have used only single word inputs for
the W&Z model but two word inputs for the others. This reduction in dimensionality for the W&Z network
improved performance. The networks contained 20 hidden units. Full simulation details are given in section
6.
The goal was to train a network using only a small temporal input window. Initially, this could not be
done. With the addition of the techniques described earlier it was possible to train Elman networks with
sequences of the last two words as input to give 100% correct (99.6% averaged over 5 trials) classification
on the training data. Generalization on the test data resulted in 74.2% correct classification on average.
This is better than the performance obtained using any of the other networks, however it is still quite low.
The data is quite sparse and it is expected that increased generalization performance will be obtained as the
amount of data is increased, as well as increased difficulty in training. Additionally, the dataset has been
hand-designed by GB linguists to cover a range of grammatical structures and it is likely that the separation
into the training and test sets creates a test set which contains many grammatical structures that are not
covered in the training set. The Williams & Zipser network also performed reasonably well with 71.3%
correct classification of the test set. Note that the test set performance was not observed to drop significantly
after extended training, indicating that the use of a validation set to control possible overfitting would not
alter performance significantly.
TRAIN          Classification   Std. dev.
Elman          99.6%            0.84
FGS            67.1%            1.22
W&Z            91.7%            2.26

ENGLISH TEST   Classification   Std. dev.
Elman          74.2%            3.82
FGS            59.0%            1.52
W&Z            71.3%            0.75

Table 5. Results of the network architecture comparison. The classification values reported are an average of five
individual simulations, and the standard deviation value is the standard deviation of the five individual results.
Complete details on a sample Elman network are as follows (other networks differ only in topology except
for the W&Z for which better results were obtained using an input window of only one word): the network
contained three layers including the input layer. The hidden layer contained 20 nodes. Each hidden layer
node had a recurrent connection to all other hidden layer nodes. The network was trained for a total of 1
million stochastic updates. All inputs were within the range zero to one. All target outputs were either 0.1 or
0.9. Bias inputs were used. The best of 20 random weight sets was chosen based on training set performance.
Weights were initialized as shown in Haykin [26] where weights are initialized on a node by node basis as
uniformly distributed random numbers in the range $(-2.4/F_i, 2.4/F_i)$, where $F_i$ is the fan-in of neuron i.
The logistic output activation function was used. The quadratic cost function was used. The search then
converge learning rate schedule used was that of equation (1) with the additional term described in section 5;
its parameters were the initial learning rate, the total number of training epochs, the current
training epoch, and a constant c = 0.65. The
training set consisted of 373 non-contradictory examples as described earlier. The English test set consisted
of 100 non-contradictory samples and the Japanese test set consisted of 119 non-contradictory samples.
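A sketch of the fan-in based weight initialization and the target conventions described in this paragraph is shown
below. The way the fan-in of a hidden node is computed (inputs plus fed-back hidden activations) and the use of
numpy are assumptions for illustration.

```python
import numpy as np

def init_weights(fan_in, fan_out, rng=None):
    """Initialize weights as uniform random numbers in (-2.4/F_i, 2.4/F_i),
    where F_i is the fan-in of the receiving neuron (after Haykin [26])."""
    rng = rng or np.random.default_rng(0)
    limit = 2.4 / fan_in
    return rng.uniform(-limit, limit, size=(fan_out, fan_in))

n_inputs, n_hidden, n_outputs = 16, 20, 2      # two-word window (2 x 8), 20 hidden nodes, 2 outputs
# Assumed fan-in for a hidden node: the inputs plus the fed-back hidden activations.
W_hidden = init_weights(n_inputs + n_hidden, n_hidden)
W_output = init_weights(n_hidden, n_outputs)

# Target outputs are kept away from the asymptotes of the logistic function.
TARGET_GRAMMATICAL, TARGET_UNGRAMMATICAL = [0.9, 0.1], [0.1, 0.9]
print(W_hidden.shape, W_output.shape)
```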
We now take a closer look at the operation of the networks. The error during training for a sample of each
network architecture is shown in figure 6. The error at each point in the graphs is the NMSE over the
complete training set. Note the nature of the Williams & Zipser learning curve and the utility of detecting
and correcting for significant error increases 12 .
Figure 7 shows an approximation of the "complexity" of the error surface based on the first derivatives of
the error criterion with respect to each weight, $(1/N_w) \sum_i |\partial E / \partial w_i|$, where i sums over all weights in the network
and N_w is the total number of weights. This value has been plotted after each epoch during training. Note
the more complex nature of the plot for the Williams & Zipser network.
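A small sketch of this per-epoch "complexity" measure based on the mean absolute first derivative of the error with
respect to the weights is given below; treating the measure as exactly the mean absolute gradient is an assumption
of this sketch.

```python
def error_surface_complexity(gradients):
    """Mean absolute first derivative over all N_w weights: (1 / N_w) * sum_i |dE/dw_i|."""
    grads = list(gradients)
    return sum(abs(g) for g in grads) / len(grads)

print(error_surface_complexity([0.02, -0.4, 0.1, -0.05]))  # 0.1425
```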
Figures 8 to 11 show sample plots of the error surface for the various networks. The error surface has many
dimensions making visualization difficult. We plot sample views showing the variation of the error for two
dimensions, and note that these plots are indicative only - quantitative conclusions should be drawn from
the test error and not from these plots. Each plot shown in the figures is with respect to only two randomly
chosen dimensions. In each case, the center of the plot corresponds to the values of the parameters after
training. Taken together, the plots provide an approximate indication of the nature of the error surface
for the different network types. The FGS network error surface appears to be the smoothest, however the
results indicate that the solutions found do not perform very well, indicating that the minima found are poor
compared to the global optimum and/or that the network is not capable of implementing a mapping with
low error. The Williams & Zipser fully connected network has greater representational capability than the
Elman architecture (in the sense that it can perform a greater variety of computations with the same number
of hidden units). However, comparing the Elman and W&Z network error surface plots, it can be observed
that the W&Z network has a greater percentage of flat spots. These graphs are not conclusive (because they
only show two dimensions and are only plotted around one point in the weight space), however they back up
the hypothesis that the W&Z network performs worse because the error surface presents greater difficulty
to the training method.
12 The learning curve for the Williams & Zipser network can be made smoother by reducing the learning rate but this tends to
promote convergence to poorer local minima.
Figure 6. Average NMSE (log scale) over the training set during training. Top to bottom: Frasconi-Gori-Soda, Elman,
Narendra & Parthasarathy and Williams & Zipser.
Figure 7. Approximate "complexity" of the error surface during training. Top to bottom: Frasconi-Gori-Soda, Elman,
Narendra & Parthasarathy and Williams & Zipser.
7 Automata Extraction
The extraction of symbolic knowledge from trained neural networks allows the exchange of information
between connectionist and symbolic knowledge representations and has been of great interest for understanding
what the neural network is actually doing [52]. In addition symbolic knowledge can be inserted
into recurrent neural networks and even refined after training [15, 47, 45].
The ordered triple of a discrete Markov process ({state, input → next-state}) can be extracted from a RNN
and used to form an equivalent deterministic finite state automata (DFA). This can be done by clustering
the activation values of the recurrent state neurons [46]. The automata extracted with this process can only
recognize regular grammars 13 .
However, natural language [6] cannot be parsimoniously described by regular languages - certain phenomena
(e.g. center embedding) are more compactly described by context-free grammars, while others (e.g.
crossed-serial dependencies and agreement) are better described by context-sensitive grammars. Hence, the
networks may be implementing more parsimonious versions of the grammar which we are unable to extract
with this technique.
13 A regular grammar G is a 4-tuple {S, N, T, P} where S is the start symbol, N and T are non-terminal and terminal
symbols, respectively, and P represents productions of the form A → a or A → aB where A, B ∈ N and a ∈ T.
Figure 8. Error surface plots for the FGS network. Each plot is with respect to two randomly chosen dimensions. In
each case, the center of the plot corresponds to the values of the parameters after training.
Figure 9. Error surface plots for the N&P network. Each plot is with respect to two randomly chosen dimensions. In
each case, the center of the plot corresponds to the values of the parameters after training.
The algorithm we use for automata extraction (from [19]) works as follows: after the network is trained (or
even during training), we apply a procedure for extracting what the network has learned - i.e., the network's
current conception of what DFA it has learned. The DFA extraction process includes the following steps:
1) clustering of the recurrent network activation space, S, to form DFA states, 2) constructing a transition
diagram by connecting these states together with the alphabet labelled arcs, 3) putting these transitions
together to make the full digraph - forming loops, and 4) reducing the digraph to a minimal representation.
Figure 10. Error surface plots for the Elman network. Each plot is with respect to two randomly chosen dimensions.
In each case, the center of the plot corresponds to the values of the parameters after training.
Figure 11. Error surface plots for the W&Z network. Each plot is with respect to two randomly chosen dimensions.
In each case, the center of the plot corresponds to the values of the parameters after training.
The hypothesis is that during training, the network begins to partition (or quantize) its state space into
fairly well-separated, distinct regions or clusters, which represent corresponding states in some finite state
automaton (recently, it has been proved that arbitrary DFAs can be stably encoded into recurrent neural
networks [45]). One simple way of finding these clusters is to divide each neuron's range into q partitions of
equal width. Thus for N hidden neurons, there exist q^N possible partition states. The DFA is constructed
by generating a state transition diagram, i.e. associating an input symbol with the partition state it just left
and the partition state it activates. The initial partition state, or start state of the DFA, is determined from
the initial value of S(t=0). If the next input symbol maps to the same partition state value, we assume that
a loop is formed. Otherwise, a new state in the DFA is formed. The DFA thus constructed may contain a
maximum of q^N states; in practice it is usually much less, since not all partition states are reached by S(t).
Eventually this process must terminate since there are only a finite number of partitions available; and, in
practice, many of the partitions are never reached. The derived DFA can then be reduced to its minimal
DFA using standard minimization algorithms [28]. It should be noted that this DFA extraction method may
be applied to any discrete-time recurrent net, regardless of order or hidden layers. Recently, the extraction
process has been proven to converge and extract any DFA learned or encoded by the neural network [5].
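The sketch below follows the extraction procedure described above: hidden-unit activations are quantized into q
equal-width partitions, the resulting partition vectors become DFA states, and transitions are recorded as
(state, input symbol) → next state. The toy stand-in "network", the assumption that activations lie in [0, 1], and
the way the steps are glued together are illustrative choices; the final state-minimization step is left to a
standard algorithm.

```python
def quantize(hidden_state, q):
    """Step 1: map a vector of activations in [0, 1] to a partition-state tuple."""
    return tuple(min(int(h * q), q - 1) for h in hidden_state)

def extract_dfa(run_network, sequences, q=7, n_hidden=2):
    """Steps 2-3: build a transition diagram {(state, symbol): next_state} by running the
    trained network over example sequences and recording partition-state transitions."""
    start = quantize([0.0] * n_hidden, q)          # start state from the initial hidden state S(t=0)
    states, transitions = {start}, {}
    for seq in sequences:
        prev = start
        for symbol, hidden in run_network(seq):    # yields (input symbol, hidden activations)
            state = quantize(hidden, q)
            states.add(state)
            transitions[(prev, symbol)] = state
            prev = state
    return start, states, transitions              # step 4: reduce with a standard minimization algorithm

# Toy stand-in for a trained network: the "hidden state" is a running average and the last input.
def fake_network(seq):
    h = [0.0, 0.0]
    for i, x in enumerate(seq, 1):
        h = [(h[0] * (i - 1) + x) / i, x]
        yield x, h

print(extract_dfa(fake_network, [[0.1, 0.9, 0.4], [0.8, 0.2]], q=3))
```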
The extracted DFAs depend on the quantization level, q. We extracted DFAs using values of q starting
from 3 and used standard minimization techniques to compare the resulting automata [28]. We passed the
training and test data sets through the extracted DFAs. We found that the extracted automata correctly
classified 95% of the training data and 60% of the test data for q = 7. Smaller values of q produced DFAs
with lower performance and larger values of q did not produce significantly better performance. A sample
extracted automata can be seen in figure 12. It is difficult to interpret the extracted automata and a topic for
future research is analysis of the extracted automata with a view to aiding interpretation. Additionally, an
important open question is how well the extracted automata approximate the grammar implemented by the
recurrent network which may not be a regular grammar.
Automata extraction may also be useful for improving the performance of the system via an iterative combination
of rule extraction and rule insertion. Significant learning time improvements can be achieved by
training networks with prior knowledge [46]. This may lead to the ability to train larger networks which
encompass more of the target grammar.
8 Discussion
This paper has investigated the use of various recurrent neural network architectures (FGS, N&P, Elman
and W&Z) for classifying natural language sentences as grammatical or ungrammatical, thereby exhibiting
the same kind of discriminatory power provided by the Principles and Parameters linguistic framework, or
Government-and-Binding theory. From best to worst performance, the architectures were: Elman, W&Z,
N&P and FGS. It is not surprising that the Elman network outperforms the FGS and N&P networks. The
computational power of Elman networks has been shown to be at least Turing equivalent [55], whereas the
N&P networks have been shown to be Turing equivalent [54] but to within a linear slowdown. FGS networks
have recently been shown to be the most computationally limited [14]. Elman networks are just a special
case of W&Z networks - the fact that the Elman and W&Z networks are the top performers is not surprising.
Figure 12. An automata extracted from an Elman network trained to perform the natural language task. The start
state is state 1 at the bottom left and the accepting state is state 17 at the top right. All strings which do not reach the
accepting state are rejected.
However, theoretically why the Elman network outperformed the W&Z network is an open question. Our
experimental results suggest that this is a training issue and not a representational issue. Backpropagation-
through-time (BPTT) is an iterative algorithm that is not guaranteed to find the global minima of the cost
function error surface. The error surface is different for the Elman and W&Z networks, and our results
suggest that the error surface of the W&Z network is less suitable for the BPTT training algorithm used.
However, all architectures do learn some representation of the grammar.
Are the networks learning the grammar? The hierarchy of architectures with increasing computational power
(for a given number of hidden nodes) give an insight into whether the increased power is used to model the
more complex structures found in the grammar. The fact that the more powerful Elman and W&Z networks
provided increased performance suggests that they were able to find structure in the data which it may not be
possible to model with the FGS network. Additionally, investigation of the data suggests that 100% correct
classification on the training data with only two word inputs would not be possible unless the networks were
able to learn significant aspects of the grammar.
Another comparison of recurrent neural network architectures, that of Giles and Horne [30], compared
various networks on randomly generated 6 and 64-state finite memory machines. The locally recurrent and
Narendra & Parthasarathy networks proved as good or superior to more powerful networks like the Elman
network, indicating that either the task did not require the increased power, or the vanilla backpropagation-
through-time learning algorithm used was unable to exploit it.
This paper has shown that both Elman and W&Z recurrent neural networks are able to learn an appropriate
grammar for discriminating between the sharply grammatical/ungrammatical pairs used by GB-linguists.
However, generalization is limited by the amount and nature of the data available, and it is expected that
increased difficulty will be encountered in training the models as more data is used. It is clear that there
is considerable difficulty scaling the models considered here up to larger problems. We need to continue
to address the convergence of the training algorithms, and believe that further improvement is possible by
addressing the nature of parameter updating during gradient descent. However, a point must be reached
after which improvement with gradient descent based algorithms requires consideration of the nature of the
error surface. This is related to the input and output encodings (rarely are they chosen with the specific
aim of controlling the error surface), the ability of parameter updates to modify network behavior without
destroying previously learned information, and the method by which the network implements structures
such as hierarchical and recursive relations.
Acknowledgments
This work has been partially supported by the Australian Telecommunications and Electronics Research Board (SL).
--R
Sequential connectionist networks for answering simple questions about a microworld.
A comparison between criterion functions for linear classifiers
Supervised learning of probability distributions by neural networks.
The dynamics of discrete-time computation
Three models for the description of language.
Lectures on Government and Binding.
Knowledge of Language: Its Nature
Finite state automata and simple recurrent networks.
Note on learning rate schedules for stochastic optimization.
Towards faster stochastic gradient search.
Structured representations and connectionist models.
Distributed representations
Computational capabilities of local-feedback recurrent networks acting as finite-state machines
Unified integration of explicit rules and learning by example in recurrent networks.
Local feedback multilayered networks.
Syntactic Pattern Recognition and Applications.
Networks that learn phonology.
Learning and extracting finite state automata with second-order recurrent neural networks
Extracting and learning an unknown grammar with recurrent neural networks.
Higher order recurrent networks
The role of similarity in Hungarian vowel harmony: A connectionist account.
Connectionist perspective on prosodic structure.
Representing variable information with simple recurrent networks.
Introduction to Formal Language Theory.
Neural Networks
Introduction to the Theory of Neural Computation.
Introduction to Automata Theory
Learning algorithms and probability distributions in feed-forward and feed-back networks
Giles. An experimental comparison of recurrent neural networks.
Very fast simulated re-annealing
Adaptive simulated annealing (ASA).
Attractor dynamics and parallelism in a connectionist sequential machine.
Serial order: A parallel distributed processing approach.
Simulated annealing.
Information Theory and Statistics.
A Course in GB Syntax: Lectures on Binding and Empty Categories.
Giles. Natural language grammatical inference: A comparison of recurrent neural networks and machine learning methods.
Efficient learning and second order methods.
Learning the past tense of English verbs using recurrent neural networks.
Language learning: cues or rules?
Encoding input/output representations in connectionist cognitive systems.
A focused backpropagation algorithm for temporal pattern recognition.
and control of dynamical systems using neural networks.
Constructing deterministic finite-state automata in recurrent neural networks
Extraction of rules from discrete-time recurrent neural networks
Rule revision with recurrent neural networks.
Paths and Categories.
The induction of dynamical recognizers.
On learning the past tenses of English verbs.
Combining symbolic and neural learning.
Computation beyond the turing limit.
Computational capabilities of recurrent NARX neural networks.
On the computational power of neural nets.
Analysis of recurrent backpropagation.
Accelerated learning in layered neural networks.
Learning and applying contextual constraints in sentence comprehension.
Learning feature-based semantics with simple recurrent networks
Dynamic construction of finite-state automata from examples using hill-climbing
Rules and maps in connectionist symbol processing.
"many maps"
Locally recurrent globally feedforward networks: A critical review of architectures.
Induction of finite state languages using second-order recurrent networks
Induction of finite-state languages using second-order recurrent networks
An efficient gradient-based algorithm for on-line training of recurrent network trajec- tories
A learning algorithm for continually running fully recurrent neural networks.
Learning finite state machines with self-clustering recurrent networks
--TR
--CTR
Michal Čerňanský , Matej Makula , Ľubica Beňušková, Organization of the state space of a simple recurrent network before and after training on recursive linguistic structures, Neural Networks, v.20 n.2, p.236-244, March, 2007
Marshall R. Mayberry, Iii , Risto Miikkulainen, Broad-Coverage Parsing with Neural Networks, Neural Processing Letters, v.21 n.2, p.121-132, April 2005
Peter Tiňo , Ashely J. S. Mills, Learning Beyond Finite Memory in Recurrent Networks of Spiking Neurons, Neural Computation, v.18 n.3, p.591-613, March 2006
Edward Kei Shiu Ho , Lai Wan Chan, Analyzing Holistic Parsers: Implications for Robust Parsing and Systematicity, Neural Computation, v.13 n.5, p.1137-1170, May 2001
Peter C. R. Lane , James B. Henderson, Incremental Syntactic Parsing of Natural Language Corpora with Simple Synchrony Networks, IEEE Transactions on Knowledge and Data Engineering, v.13 n.2, p.219-231, March 2001
Juan C. Valle-Lisboa , Florencia Reali , Héctor Anastasía , Eduardo Mizraji, Elman topology with sigma-pi units: an application to the modeling of verbal hallucinations in schizophrenia, Neural Networks, v.18 n.7, p.863-877, September 2005
Henrik Jacobsson, Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review, Neural Computation, v.17 n.6, p.1223-1263, June 2005 | simulated annealing;grammatical inference;principles-and-parameters framework;recurrent neural networks;natural language processing;gradient descent;government-and-binding theory;automata extraction |
628059 | Justification for Inclusion Dependency Normal Form. | AbstractFunctional dependencies (FDs) and inclusion dependencies (INDs) are the most fundamental integrity constraints that arise in practice in relational databases. In this paper, we address the issue of normalization in the presence of FDs and INDs and, in particular, the semantic justification for Inclusion Dependency Normal Form (IDNF), a normal form which combines Boyce-Codd normal form with the restriction on the INDs that they be noncircular and key-based. We motivate and formalize three goals of database design in the presence of FDs and INDs: noninteraction between FDs and INDs, elimination of redundancy and update anomalies, and preservation of entity integrity. We show that, as for FDs, in the presence of INDs being free of redundancy is equivalent to being free of update anomalies. Then, for each of these properties, we derive equivalent syntactic conditions on the database design. Individually, each of these syntactic conditions is weaker than IDNF and the restriction that an FD not be embedded in the righthand side of an IND is common to three of the conditions. However, we also show that, for these three goals of database design to be satisfied simultaneously, IDNF is both a necessary and sufficient condition. | Introduction
Functional dependencies (FDs) [Arm74, Mai83, Ull88, AA93] generalise the notions of entity integrity
and keys [Cod79] and inclusion dependencies (INDs) [Mit83, CFP84] generalise the notions
of referential integrity and foreign keys [Cod79, Dat86]. In this sense FDs and INDs are the most
fundamental data dependencies that arise in practice.
Relational database design in the presence of FDs is an established area in database theory,
which has been researched for over twenty years [Cod74, BB79, Mai83, Ull88, AA93]. The semantic
justification of the normal forms in the presence of FDs is well-understood in terms of eliminating
the so called update anomalies and redundancy problems that can arise in a relation satisfying
a set of FDs [BG80, Fag81, Cha89, Vin94, Vin98]. The advice that is given as a result of this
investigation of the semantics of the normal forms is that in order to eliminate the above mentioned
problems we should design database schemas which are in Boyce-Codd Normal Form (BCNF)
[Cod74].
Despite the importance of INDs as integrity constraints little research has been carried out on
how they should be integrated into the normalisation process of a relational database. Such an
integration is fundamental to the success of a design, since the enforcement of referential integrity
is no simple matter [CTF88]. Normal forms which include FDs and INDs have been considered in
[CA84, MR86, LG92, MR92, BD93] but necessary and sufficient conditions in terms of removing
the update anomalies and redundancy problems were not given. It is our goal in this paper to
fill in this gap by providing sufficient and necessary semantics for Inclusion Dependency Normal Form (IDNF).
We consider some of the problems that occur in the presence of FDs and INDs through two
examples. The first example illustrates the situation when an attribute is redundant due to
interaction between FDs and INDs.
Example 1.1 Let HEAD be a relation schema, with attributes H and D, where H stands for head
of department and D stands for department, and let LECT be a relation schema, with attributes
L and D, where L stands for lecturer and as before D stands for department. Furthermore, let
F = {HEAD : H → D, LECT : L → D} be a set of FDs over a database schema R = {HEAD, LECT},
stating that a head of department manages a unique department and a lecturer works in a unique
department, and I = {HEAD[HD] ⊆ LECT[LD]} be a set of INDs over R stating that a head of
department also works as a lecturer in the same department. We note that I ∪ {LECT : L → D} ⊨
HEAD : H → D, where ⊨ denotes logical implication, by the pullback inference rule (see
Proposition 2.1), and thus the FD HEAD : H → D in F is redundant. Also, note that we have not
assumed that HEAD : D → H is in F and thus a department may have more than one head.
Two problems arise with respect to R and F [ I. Firstly, the interaction between F and I
may lead to the logical implication of data dependencies that were not envisaged by the database
designer and may not be easy to detect; in general the implication problem for FDs and INDs is
intractable (see the discussion in Section 2). In this example the pullback inference rule implies
that an FD in F is redundant.
Secondly, the IND HEAD[HD] ⊆ LECT[LD] combined with the FD LECT : L → D imply that
the attribute D in HEAD is redundant, since the department of a head can be inferred from the fact
that L is a key for LECT. (Formally this inference can be done with the aid of a relational algebra
expression which uses renaming, join and projection; see [Mai83, Ull88, AA93] for details on the
relational algebra.) Thus HEAD[HD] ' LECT[LD] can be replaced by HEAD[H] ' LECT[L] and
the attribute D in HEAD can be removed without any loss of information.
The second example illustrates the situation when the propagation of insertions due to INDs
may result in the violation of entity integrity.
Example 1.2 Let EMP be a relation schema, with attributes E and P, where E stands for employee
name and P stands for project title, and let PROJ be a relation schema, with attributes P
and L, where as before P stands for project title and L stands for project location. Furthermore,
let F = {EMP : E → P} be a set of FDs over a database schema R = {EMP, PROJ}, stating
that an employee works on a unique project, and I = {EMP[P] ⊆ PROJ[P]} be a set of INDs over
R stating that an employee's project is one of the listed projects. We note that a project may
be situated in several locations and correspondingly a location may be associated with several
projects and thus {P, L} is the primary key of PROJ.
The problem that arises with respect to R and F [ I is that the right-hand side, P, of the IND
EMP[P] ' PROJ[P] is a proper subset of the primary key of PROJ. Let r 1 and r 2 be relations
over EMP and PROJ, respectively. Suppose that an employee is assigned to a new project which
has not yet been allocated a location and is thus not yet recorded in r 2 . Now, due to the IND in
I, the insertion of the employee tuple into r 1 , having this new project, should be propagated to r 2
by inserting into r 2 a tuple recording the new project. But since the location of the project is still
unknown, then due to entity integrity, it is not possible to propagate this insertion to r 2 .
We summarise the problems that we would like to avoid when designing relational databases in
the presence of FDs and INDs. Firstly, we should avoid redundant attributes, secondly we should
avoid the violation of entity integrity when propagating insertions, and lastly we should avoid any
interaction between FDs and INDs due to the intractability of the joint implication problem for
FDs and INDs. The main contributions of this paper are the formalisation of these design problems
and the result that if the database schema is in IDNF then all of these problems are eliminated.
We also demonstrate the robustness of IDNF by showing that, as for BCNF, removing redundancy
from the database schema in the presence of FDs and INDs is also equivalent to eliminating update
anomalies from the database schema.
The layout of the rest of the paper is as follows. In Section 2 we formally define FDs, INDs
and their satisfaction, and introduce the chase procedure as a means of testing and enforcing the
satisfaction of a set of FDs and INDs. In Section 3 we formalise the notion of no interaction between
a set of FDs and INDs and give a necessary and sufficient condition for FDs and noncircular INDs
not to interact. In Section 4 we characterise redundancy in the presence of FDs and INDs. In
Section 5 we characterise insertion and modification anomalies in the presence of FDs and INDs
and show an equivalence between being free of either insertion or modification anomalies and being
free of redundancy. In Section 6 we characterise a generalisation of entity integrity in the presence
of FDs and INDs. In Section 7 we define IDNF and present our main result that establishes
the semantics of IDNF in terms of either the update anomalies or redundancy problems, and the
satisfaction of generalised entity integrity. Finally, in Section 8 we give our concluding remarks
and indicate our current research direction.
Definition 1.1 (Notation) We denote the cardinality of a set S by jSj. The size of a set S is
defined to be the cardinality of a standard encoding of S.
If S is a subset of T we write S ⊆ T and if S is a proper subset of T we write S ⊂ T. We often
denote the singleton {A} simply by A and write A ∈ S to mean {A} ⊆ S. In addition, we often
denote the union of two sets S, T, i.e. S ∪ T, simply by ST.
2 Functional and inclusion dependencies
We formalise the notions of FDs and INDs and their satisfaction, and define some useful subclasses
of FDs and INDs. We also present the chase procedure for testing and enforcing the satisfaction
of a set of FDs and INDs. The chase procedure is instrumental in proving our main results.
Definition 2.1 (Database schema and database) Let U be a finite set of attributes. A relation
schema R is a finite sequence of distinct attributes from U. A database schema is a finite set
R = {R_1, R_2, ..., R_n}, such that each R_i ∈ R is a relation schema and R_1 ∪ R_2 ∪ ... ∪ R_n = U.
We assume a countably infinite domain of values, D; without loss of generality we assume that
D is linearly ordered. An R-tuple (or simply a tuple whenever R is understood from context) is a
member of the Cartesian product D × D × ... × D (|R| times).
A relation r over R is a finite (possibly empty) set of R-tuples. A database d over R is a family
of n relations {r_1, r_2, ..., r_n} such that each r_i ∈ d is over R_i ∈ R. Given a tuple t over R and
assuming that r ∈ d is the relation in d over R, we denote the insertion of t into r by d ∪ {t}, and
the deletion of t from r by d − {t}.
From now on we let R be a database schema and d be a database over R. Furthermore, we let
r 2 d be a relation over the relation schema R 2 R.
Definition 2.2 (Projection) The projection of an R-tuple t onto a set of attributes Y ⊆ R,
denoted by t[Y] (also called the Y-value of t), is the restriction of t to Y. The projection of a
relation r onto Y, denoted as π_Y(r), is defined by π_Y(r) = {t[Y] | t ∈ r}.
Definition 2.3 (Functional Dependency) A functional dependency (or simply an FD) over a
database schema R is a statement of the form R : X → Y (we say that X → Y is an FD over the
relation schema R), where R ∈ R and X, Y ⊆ R are sets of attributes. An FD of the form X →
Y is said to be trivial if Y ⊆ X; it is said to be standard if X ≠ ∅.
An FD R : X → Y over R is satisfied in d, denoted by d ⊨ R : X → Y, whenever for all tuples
t_1, t_2 ∈ r, if t_1[X] = t_2[X] then t_1[Y] = t_2[Y].
Definition 2.4 (Inclusion Dependency) An inclusion dependency (or simply an IND) over a
database schema R is a statement of the form R_i[X] ⊆ R_j[Y], where R_i, R_j ∈ R, and X ⊆ R_i, Y ⊆ R_j
are sequences of distinct attributes such that |X| = |Y|. An IND is said to be trivial if it is
of the form R[X] ⊆ R[X]. An IND R[X] ⊆ S[Y] is said to be unary if |X| = |Y| = 1. An IND is said to
be typed if it is of the form R[X] ⊆ S[X].
An IND R_i[X] ⊆ R_j[Y] over R is satisfied in d, denoted by d ⊨ R_i[X] ⊆ R_j[Y], whenever
π_X(r_i) ⊆ π_Y(r_j), where r_i and r_j are the relations over R_i and R_j, respectively.
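As a concrete illustration of Definitions 2.3 and 2.4, the following Python sketch (ours, not part of the formal development; the relations, attributes and values are hypothetical) tests FD and IND satisfaction on relations stored as lists of dictionaries.

# A relation is modelled as a list of dicts mapping attribute names to values.
def project(tuple_, attrs):
    # The Y-value t[Y] of a tuple t (Definition 2.2), as an ordered tuple of values.
    return tuple(tuple_[a] for a in attrs)

def satisfies_fd(r, lhs, rhs):
    # d |= X -> Y iff any two tuples agreeing on X also agree on Y (Definition 2.3).
    seen = {}
    for t in r:
        key, val = project(t, lhs), project(t, rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

def satisfies_ind(r_i, x_attrs, r_j, y_attrs):
    # d |= R_i[X] is contained in R_j[Y] iff every X-value of r_i occurs as a Y-value
    # of r_j (Definition 2.4).
    y_values = {project(t, y_attrs) for t in r_j}
    return all(project(t, x_attrs) in y_values for t in r_i)

# Example 1.1 revisited with made-up data:
head = [{"H": "smith", "D": "cs"}]
lect = [{"L": "smith", "D": "cs"}, {"L": "jones", "D": "maths"}]
assert satisfies_fd(lect, ["L"], ["D"])
assert satisfies_ind(head, ["H", "D"], lect, ["L", "D"])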
From now on we let F be a set of FDs over R and F_i denote
the set of FDs in F over R_i ∈ R. Furthermore, we let I be a set of INDs over R and let Σ = F ∪ I.
Definition 2.5 (Logical implication) Σ is satisfied in d, denoted by d ⊨ Σ, if every FD and IND in Σ
is satisfied in d. Σ logically implies an FD or an IND σ, written Σ ⊨ σ, if whenever d is a database over R
then the following condition is true:
if d ⊨ Σ holds then d ⊨ σ also holds.
Σ logically implies a set Γ of FDs and INDs over R, written Σ ⊨ Γ, if every FD and IND in Γ is
logically implied by Σ. We let Σ+ denote the set of all FDs and INDs that are logically implied by Σ.
Definition 2.6 (Keys, BCNF and key-based INDs) A set of attributes X ⊆ R_i is a superkey
for R_i with respect to F_i if F_i ⊨ X → R_i; X is a key for R_i with respect to F_i if it is
a superkey for R_i with respect to F_i and for no proper subset Y ⊂ X is Y a superkey for R_i with
respect to F_i. We let KEYS(F) be the set of all FDs of the form X → R_i, where X is a key for R_i
with respect to F_i, for i ∈ {1, 2, ..., n}.
A database schema R is in Boyce-Codd Normal Form (or simply BCNF) with respect to F if
for all R_i ∈ R, for all nontrivial FDs R_i : X → Y implied by F_i, X is a superkey for R_i with respect to F_i.
An IND R_i[X] ⊆ R_j[Y] is superkey-based, respectively key-based, if Y is a superkey, respectively
a key, for R_j with respect to F_j.
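The superkey, key and BCNF conditions of Definition 2.6 can be decided with the standard attribute-closure computation; the sketch below is our own illustration of that standard algorithm, with FDs represented as (left-hand side, right-hand side) pairs of attribute sets.

def closure(attrs, fds):
    # Closure of a set of attributes under a set of FDs (each FD is a pair (X, Y) of frozensets).
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_superkey(attrs, schema, fds):
    # X is a superkey for R_i iff F_i |= X -> R_i, i.e. the closure of X covers the whole schema.
    return set(schema) <= closure(attrs, fds)

def is_bcnf(schema, fds):
    # R_i is in BCNF iff for every nontrivial FD X -> Y in F_i, X is a superkey;
    # checking the given FDs is a sufficient test.
    for lhs, rhs in fds:
        if not rhs <= lhs and not is_superkey(lhs, schema, fds):
            return False
    return True

# A toy check over a hypothetical single relation schema:
R = ["A", "B"]
F_R = [(frozenset({"A"}), frozenset({"B"}))]
print(is_bcnf(R, F_R))   # True: A -> B and A is a key for R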
Definition 2.7 (Circular and noncircular sets of INDs) A set I of INDs over R is circular
if either
1. I contains a nontrivial IND R[X] ⊆ R[Y], or
2. there exist m distinct relation schemas, R_1, R_2, ..., R_m, with m ≥ 2, such that I
contains the INDs: R_1[X_1] ⊆ R_2[Y_2], R_2[X_2] ⊆ R_3[Y_3], ..., R_m[X_m] ⊆ R_1[Y_1].
A set of INDs I is noncircular if it is not circular.
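Circularity in the sense of Definition 2.7 can be decided by a cycle test on the directed graph that has an arc from R to S for every nontrivial IND R[X] ⊆ S[Y] (the same graph G_I used later in the proof of Theorem 4.3); the following sketch is our own illustration.

def is_circular(schemas, inds):
    # inds: list of (R, X, S, Y) meaning R[X] is contained in S[Y].  I is circular if it has a
    # nontrivial self-IND or the IND graph contains a cycle (Definition 2.7).
    edges = {R: set() for R in schemas}
    for R, X, S, Y in inds:
        if R == S and X != Y:
            return True                    # case 1: nontrivial IND from a schema to itself
        if R != S:
            edges[R].add(S)
    WHITE, GREY, BLACK = 0, 1, 2
    colour = {R: WHITE for R in schemas}
    def dfs(u):                            # depth-first search for a cycle
        colour[u] = GREY
        for v in edges[u]:
            if colour[v] == GREY or (colour[v] == WHITE and dfs(v)):
                return True
        colour[u] = BLACK
        return False
    return any(colour[R] == WHITE and dfs(R) for R in schemas)

# The circular set of Example 3.2: R[A] contained in S[A] and S[B] contained in R[B].
print(is_circular(["R", "S"], [("R", ["A"], "S", ["A"]), ("S", ["B"], "R", ["B"])]))  # True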
The class of proper circular INDs [Imi91] defined below includes the class of noncircular INDs
as a special case.
Definition 2.8 (Proper circular sets of INDs) A set I of INDs over R is proper circular if it
is either noncircular or whenever there exist m distinct relation schemas, R_1, R_2, ..., R_m, such that
I contains the INDs: R_1[X_1] ⊆ R_2[X_2], R_2[X_2] ⊆ R_3[X_3], ..., R_m[X_m] ⊆ R_1[X_1].
It is well known that Armstrong's axiom system [Arm74, Mai83, Ull88, AA93] can be used
to compute F + and that Casanova's et al. axiom system [CFP84] can be used to compute I + .
However, when we consider FDs and INDs together computing Σ+ was shown to be undecidable
[Mit83, CV85]. On the other hand, when I is noncircular then Mitchell's axiom system [Mit83]
can be used to compute \Sigma + [CK86]. Moreover, in the special case when I is a set of unary INDs
then Cosmadakis's et al. axiom system [CKV90] can be used to compute \Sigma + .
The implication problem is the problem of deciding whether Σ ⊨ σ, where σ is an FD or
IND and Σ is a set of FDs and INDs. It is well-known that the implication problem for FDs on
their own is decidable in linear time [BB79]. On the other hand the implication problem for
INDs is, in general, PSPACE-complete [CFP84]. The implication problem for noncircular INDs is
NP-complete [Man84, CK86]. Typed INDs have a polynomial time implication problem [CV83].
Unary INDs have a linear time implication problem [CKV90]. When we consider FDs and INDs
together the implication problem is undecidable, as mentioned above. The implication problem for
FDs and noncircular INDs is EXPTIME-complete and if the noncircular INDs are typed then the
implication problem is NP-hard [CK86]. FDs and unary INDs have a polynomial time implication
problem [CKV90].
The next proposition describes the pullback inference rule [Mit83, CFP84], which allows us to
infer an FD from an FD and an IND.
Proposition 2.1 If Σ ⊨ R_i[XY] ⊆ R_j[WZ] and Σ ⊨ R_j : W → Z, then Σ ⊨ R_i : X → Y. 2
Definition 2.9 (Reduced set of FDs and INDs) The projection of a set of FDs F i over R i
onto a set of attributes Y ⊆ R_i, denoted by F_i[Y], is given by F_i[Y] = {W → Z | F_i ⊨ W → Z
and WZ ⊆ Y}.
A set of attributes Y ' R i is said to be reduced with respect to R i and a set of FDs F i over
simply reduced with respect to F i if R i is understood from context) if F i [Y] contains only
trivial FDs. A set of FDs and INDs Σ = F ∪ I is said to be reduced if for all R_i[X] ⊆ R_j[Y] ∈ I, Y is
reduced with respect to F j .
It can easily be shown that it can be decided in polynomial time in the size of \Sigma whether \Sigma is
reduced or not.
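One such polynomial-time test for reducedness follows Definition 2.9 directly: for every IND R_i[X] ⊆ R_j[Y], no attribute of Y may be functionally determined by the remaining attributes of Y under F_j. The sketch below is our own illustration (the closure helper is repeated so that the fragment is self-contained).

def closure(attrs, fds):
    # Attribute-set closure under FDs given as (lhs, rhs) pairs of frozensets.
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_reduced_attrs(y_attrs, fds):
    # Y is reduced w.r.t. F_j iff no attribute A of Y is determined by Y - {A} under F_j,
    # i.e. the projection F_j[Y] contains only trivial FDs (Definition 2.9).
    y = set(y_attrs)
    return all(a not in closure(y - {a}, fds) for a in y)

def is_reduced(inds, fds_per_schema):
    # Sigma is reduced iff the right-hand side of every IND is reduced w.r.t. its schema's FDs.
    return all(is_reduced_attrs(y, fds_per_schema[s]) for (_, _, s, y) in inds)

# Example 1.1: HEAD[HD] contained in LECT[LD] is not reduced, because L -> D holds over LECT.
fds = {"LECT": [(frozenset({"L"}), frozenset({"D"}))], "HEAD": []}
print(is_reduced([("HEAD", ["H", "D"], "LECT", ["L", "D"])], fds))   # False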
The chase procedure provides us with an algorithm which forces a database to satisfy a set of
FDs and INDs.
Definition 2.10 (The chase procedure for INDs) The chase of d with respect to \Sigma, denoted
by CHASE(d; \Sigma), is the result of applying the following chase rules, that is the FD and the IND
rules, to the current state of d as long as possible. (The current state of d prior to the first
application of the chase rule is its state upon input to the chase procedure.)
FD rule: If R_i : X → A ∈ F and there exist tuples t_1, t_2 ∈ r_i such that t_1[X] = t_2[X] and
t_1[A] ≠ t_2[A], then change all the occurrences in d of the larger of the values of t_1[A] and t_2[A] to the
smaller of the values of t_1[A] and t_2[A].
IND rule: If R_i[X] ⊆ R_j[Y] ∈ I and ∃t ∈ r_i such that t[X] ∉ π_Y(r_j), then add a tuple u over R_j
to r_j, where u[Y] = t[X] and each attribute of u not in Y is assigned a new value greater than any
other current value occurring in the tuples of relations in the current state of d.
We observe that if we allow I to be circular then the chase procedure does not always terminate
[JK84]. (When the chase of d with respect to Σ does not terminate, then CHASE(d, Σ) is said to
violate a set of FDs G over R, i.e. CHASE(d, Σ) ⊭ G, if after some finite number of applications
of the IND rule to the current state of d, resulting in d′, we have that d′ ⊭ G.) In the special case
when I is in the class of proper circular INDs then it was shown that the chase procedure always
terminates [Imi91].
The following theorem is a consequence of results in [MR92, Chapter 10].
Theorem 2.2 Let Σ = F ∪ I be a set of FDs F and proper circular INDs I over a database schema
R. Then the following two statements are true:
(i) CHASE(d, Σ) ⊨ Σ;
(ii) CHASE(d, Σ) terminates after a finite number of applications of the IND rule to the current
state of d. 2
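A minimal operational sketch of the chase of Definition 2.10 is given below (our own illustration, not the authors' code; it assumes the INDs are proper circular so that, by Theorem 2.2, the loop terminates, and it represents domain values by integers so that "larger" and "smaller" in the FD rule are well defined).

import itertools

def chase(db, fds, inds, schemas):
    # db: dict schema name -> list of tuples (dicts attribute -> integer value).
    # fds: dict schema name -> list of (lhs, rhs) pairs of attribute lists.
    # inds: list of (R, X, S, Y) meaning R[X] is contained in S[Y].
    # schemas: dict schema name -> list of its attributes.
    current = [v for r in db.values() for t in r for v in t.values()]
    fresh = itertools.count(start=max(current, default=0) + 1)   # source of new values
    changed = True
    while changed:                 # terminates for proper circular INDs (Theorem 2.2)
        changed = False
        # FD rule: replace every occurrence of the larger of two clashing A-values by the smaller.
        for rel, rel_fds in fds.items():
            for lhs, rhs in rel_fds:
                for t1, t2 in itertools.combinations(db[rel], 2):
                    if all(t1[a] == t2[a] for a in lhs):
                        for a in rhs:
                            if t1[a] != t2[a]:
                                big, small = max(t1[a], t2[a]), min(t1[a], t2[a])
                                for r in db.values():
                                    for t in r:
                                        for b in t:
                                            if t[b] == big:
                                                t[b] = small
                                changed = True
        # IND rule: copy a missing X-value of R into S, padding the other attributes of S
        # with brand-new values.
        for R, X, S, Y in inds:
            existing = {tuple(t[a] for a in Y) for t in db[S]}
            for t in list(db[R]):
                xval = tuple(t[a] for a in X)
                if xval not in existing:
                    u = {a: next(fresh) for a in schemas[S]}
                    for xa, ya in zip(X, Y):
                        u[ya] = t[xa]
                    db[S].append(u)
                    existing.add(xval)
                    changed = True
    return db

# Example 1.1: the single HEAD tuple is propagated into LECT by the IND rule.
schemas = {"HEAD": ["H", "D"], "LECT": ["L", "D"]}
db = {"HEAD": [{"H": 0, "D": 0}], "LECT": []}
chase(db, {"HEAD": [], "LECT": [(["L"], ["D"])]},
      [("HEAD", ["H", "D"], "LECT", ["L", "D"])], schemas)
print(db["LECT"])   # [{'L': 0, 'D': 0}]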
3 Interaction between FDs and INDs
As demonstrated by Proposition 2.1 FDs and INDs may interact in the sense that there may be
FDs and INDs implied by a set of FDs and INDs which are not implied by the FDs or INDs taken
separately. From the point of view of database design, interaction is undesirable since a database
design may be normalised with respect to the set of FDs but not with respect to the combined set
of FDs and INDs. This is illustrated in the following example.
Example 3.1 Consider the database schema R = {R, S}, with R = {A, B, C} and S = {A, B}, and a set Σ of
FDs F and INDs I over R given by F = {S : A → B} and I = {R[AB] ⊆ S[AB]}. It can easily
be verified that R is in BCNF with respect to F. However, if we augment F with the FD R : A →
B, which is logically implied by Σ on using Proposition 2.1, then R is not in BCNF with respect
to the augmented set of FDs F, since A is not a superkey for R with respect to the set of FDs
{R : A → B}.
The other major difficulty that occurs in database design when the set of FDs and INDs
interact is a result of the fact that, as noted earlier, the implication problem for an arbitrary set of
FDs and INDs is undecidable [Mit83, CV85]. Because of this, in general it cannot be determined
whether a database design is in BCNF with respect to an arbitrary set of FDs and INDs, since
the set of all logically implied FDs cannot be effectively computed. As a consequence, a desirable
goal of database design is that the set of FDs and INDs do not interact. We now formalise the
notion of non interaction and characterise, for proper circular INDs, a special case when this non
interaction occurs.
Definition 3.1 (No interaction occurring between FDs and INDs) A set of FDs F over
R is said not to interact with a set of INDs I over R, if
1) for all FDs α over R, for all subsets G ⊆ F, G ∪ I ⊨ α if and only if G ⊨ α, and
2) for all INDs β over R, for all subsets J ⊆ I, F ∪ J ⊨ β if and only if J ⊨ β.
The following theorem is proven in [LL95] (see [MR92, Chapter 10]).
Theorem 3.1 If R is in BCNF with respect to a set of FDs F over R, I is a proper circular set
of INDs over R and Σ = F ∪ I is reduced, then F and I do not interact. 2
As the next example shows we cannot, in general, extend Theorem 3.1 to the case when the set
of INDs I is not proper circular. In particular, by Proposition 2.1 \Sigma being reduced is a necessary
condition for no interaction to occur between F and I, but it is not a sufficient condition for non
interaction.
Example 3.2 Consider a database schema and a set \Sigma of FDs
F and INDs I over R given by, Ag and I = fR[A] ' S[A], S[B] ' R[B]g.
It can easily be verified that \Sigma is reduced and that I is circular. On using the axiom system of
it follows that \Sigma thus F and I interact.
As another example let It can easily be verified
that \Sigma is reduced and I is circular. Again, F and I interact, since on using the axiom system of
R[A]g.
4 Attribute redundancy
In this section we investigate the conditions on database design which ensure the elimination of
redundancy, a goal which has been long cited as one of the principal motivations for the use
of normalisation in database design [Mai83, Ull88, MR92, AA93]. However it has proven to be
somewhat difficult to formalise the intuitive notion of redundancy, and it was only relatively
recently that this notion was formalised and its relationship to the classical normal forms was
established [Vin94, Vin98]. This definition of redundancy, and the associated normal form that
guarantees redundancy elimination, is as follows.
Definition 4.1 (Value redundancy) Let d be a database over R that satisfies F and t 2 r be
a tuple, where r 2 d is a relation over a relation schema R 2 R. The occurrence of a value t[A],
where A 2 R is an attribute, is redundant in d with respect to F if for every replacement of t[A]
by a distinct value v 2 D such that v 6= t[A], resulting in the database d 0 , we have that d 0 6j= F.
A database schema R is said to be in Value Redundancy Free Normal Form (or simply VRFNF)
with respect to a set of FDs F over R if there does not exist a database d over R and an occurrence
of a value t[A] that is redundant in d with respect to F.
We now illustrate the definition by a simple example.
Example 4.1 Consider the single relation schema R = {A, B, C} and a set F of FDs
given by F = {A → B}. Then R is not in VRFNF, since if we consider the relation r over
R, shown in Table 1, then the B-value, 2, present in both tuples, is redundant, since r ⊨ F and
replacing the value 2 in either tuple by another value results in F being violated.
Table 1: The relation r
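The redundancy in Example 4.1 can be replayed concretely; the tuples in the sketch below are our own choice (the published Table 1 is not reproduced here), the only requirement being that both tuples share the same A-value and hence the same B-value.

def satisfies_fd(r, lhs, rhs):
    # d |= X -> Y iff tuples agreeing on X also agree on Y.
    seen = {}
    for t in r:
        key, val = tuple(t[a] for a in lhs), tuple(t[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

r = [{"A": 1, "B": 2, "C": 3}, {"A": 1, "B": 2, "C": 4}]   # hypothetical contents of Table 1
assert satisfies_fd(r, ["A"], ["B"])          # r |= A -> B
r[0]["B"] = 9                                  # replace one occurrence of the B-value 2 ...
assert not satisfies_fd(r, ["A"], ["B"])       # ... and the FD A -> B is now violated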
The following result, which was established in [Vin94, Vin98], shows that given a set of FDs
F, VRFNF is equivalent to BCNF. For the sake of completeness, we provide a sketch of the proof.
Theorem 4.1 A database schema R is in BCNF with respect to F if and only if R is in VRFNF
with respect to F.
Proof. The if part follows by showing the contrapositive that if R is not in BCNF then it is
not in VRFNF. This result follows because if R is not in BCNF then there exists a non-trivial
implied FD X ! A where X is not a superkey and using a well known construction (Theorem 7.1
in [Ull88]) there exists a two tuple relation r in which the tuples are identical on XA. The relation
r is not in VRFNF since changing either of the A-values results in X ! A being violated. The
only if part follows from the observation that if an occurrence t[A] is redundant in a relation r,
then there must exist a nontrivial FD X → A and a tuple t′ ∈ r such that t′ ≠ t and t′[X] = t[X].
This implies that X cannot be a superkey and hence that R is not in BCNF and so establishes
the only if part. 2
In [Vin98] it was shown that in the presence of multivalued dependencies 4NF (fourth normal
form [Fag77]) is also equivalent to VRFNF. Somewhat surprisingly, the syntactic equivalent for
VRFNF, in the most general case where join dependencies are present, is a new normal form that
is weaker than PJ/NF (project-join normal form [Fag79]) and 5NF (fifth normal form [Mai83])
[Vin98].
The concept of value redundancy is of little use though, in evaluating database designs in
the presence of INDs because of the following result. It demonstrates that no design where the
constraints involve nontrivial INDs can be in VRFNF with respect to the set of INDs.
Lemma 4.2 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then R is not in VRFNF with respect to \Sigma if I is nonempty and contains at least one nontrivial
IND.
Proof. Let R_i[X] ⊆ R_j[Y] be a nontrivial IND in I. We construct a database d such that the
relation r_i in d over R_i has in it a single tuple t_i containing zeros and every other relation r_k in d
is empty. Now, let d′ = CHASE(d, Σ); thus by Theorem 2.2 d′ ⊨ Σ. Due to the noncircularity
of I, we have in d′ that r′_i = r_i, where r′_i is the current state of r_i in d′. Let r′_j be the current state
of the relation r_j over R_j in d′. Then the Y-values of the tuple in r′_j
must contain zeros, since d′ ⊨ R_i[X] ⊆ R_j[Y].
Thus all of the zeros in the single tuple in r′_j
are redundant, since changing any of them
results in R_i[X] ⊆ R_j[Y] being violated. 2
Due to the above lemma, we require only that the design be in VRFNF with respect to the
FDs but not with respect to the INDs. However, as noted by others [Sci86, LG92, MR92], an even
stronger form of redundancy can occur in a database in the presence of INDs. We refer to this as
attribute redundancy, which was illustrated in Example 1.1 given in the introduction.
Definition 4.2 (Attribute redundancy) An attribute A in a relation schema R 2 R is redundant
with respect to \Sigma if whenever d is a database over R which satisfies \Sigma and r 2 d is a nonempty
relation over R, then for every tuple t 2 r, if t[A] is replaced by a distinct value v 2 D such that
resulting in the database d 0 , then d 0 6j= \Sigma.
A database schema R is said to be in Attribute Redundancy Free Normal Form (or simply
ARFNF) with respect to a set of FDs and INDs Σ over R if there does not exist an attribute A
in a relation schema R 2 R which is redundant with respect to \Sigma.
The next example shows that ARFNF is too weak when \Sigma contains only FDs, highlighting the
difference between VRFNF and ARFNF.
Example 4.2 Consider the relation r of Example 4.1, shown in Table 1, and let r′ be the result
of adding the tuple t = <5, 2, 4> to r. Then, it can easily be verified that, although R is not in
VRFNF with respect to F, R is in ARFNF with respect to F, since if we replace any value in t
by a distinct value then the resulting database still satisfies F.
Combining Definitions 4.1 and 4.2 we can define redundancy free normal form.
Definition 4.3 (Redundancy free normal form) A database schema R is said to be in Redundancy
Free Normal Form (or simply RFNF) with respect to a set of FDs and INDs \Sigma over R
if it is in VRFNF with respect to F and in ARFNF with respect to \Sigma.
The next theorem shows that when the set of INDs is noncircular then RFNF is equivalent to
the set of FDs and INDs being reduced and to the database schema being in BCNF.
Theorem 4.3 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then R is in RFNF with respect to \Sigma if and only if \Sigma is reduced and R is in BCNF with respect
to F.
Proof. If. By Theorem 4.1 R is in VRFNF with respect to F. So it remains to show that R is in
ARFNF.
In the proof we utilise a directed graph representation, G I = (N, E), of the set of INDs I,
which is constructed as follows (see [Sci86]). Each relation schema R in R has a separate node in N,
labelled R; we do not distinguish between nodes and their labels. There is an arc (R, S) in E if
and only if there is a nontrivial IND R[X] ⊆ S[Y] ∈ I. It can easily be verified that there is a path
in G_I from R to S if and only if for some IND R[X] ⊆ S[Y] we have I ⊨ R[X] ⊆ S[Y]. Moreover,
since I is noncircular, we have that G_I is acyclic.
Let A 2 R i be an attribute, where R i 2 R is a relation schema. We construct a database d
having a nonempty relation r i 2 d, which exhibits the fact that A is nonredundant with respect
to \Sigma.
We first initialise the database d to be a database d 0 as follows. Let r
have a single tuple
t such that for all B relations r 0
R k are initialised to be empty in d 0 . Therefore, by Theorem 2.2, we have that d
i in d 1 be the current state of r i . Then, by Theorem 3.1 we have r 1
and the current state r 1
k in d 1 of a relation r 0
k is empty, if there does not exist a path in G I from
R i to R k .
i has a single tuple t 0 such that for all B 2 R i , including A,
Therefore, by Theorem 2.2, we have that d
i in d 3 is the current state of r i , since d 2
be
the final state of the initialisation of d. Then, d j= F, since d 3 We claim that it is also that
case that d
Let us call a nontrivial IND R i [X] ' R j [Y] 2 I a source IND, if A 2 X. By the projection and
permutation inference rule for INDs [CFP84] we assume without loss of generality that a source
IND has the form R i [VA] ' R j [WB].
Due to the noncircularity of I any current state r k in d of a relation r 3
k is empty, if there does
not exist a path in G I from R i to R k ; therefore for such r k we have r
Now, if there is an arc from R i to R j in G I , then there is some IND R i [X] ' R j [Y] in I. There are
two cases to consider.
First, if A 62 X then contains only zeros and thus d Secondly,
if A 2 X then R i [X] ' R j [Y] is a source IND R i [VA] ' R j [WB]. Let r j in d be the current state
of r 3
Therefore d It follows that for any IND R i [X] ' R j [Y] such that there is a path
from R i to R j and such that I we have d since d 3
I as required. The if part is now concluded since d 3 and d differ only by the
replacement of t 0
Only if. By Theorem 4.1 if R is not in BCNF with respect to F then it is not in VRFNF. So
assuming that \Sigma is not reduced it remains to show that R is not in ARFNF. By this assumption
there exists an IND R i [X] ' R j [Y] 2 I such that is a nontrivial FD. By the
pullback inference rule there is a nontrivial FD by the
projection and permutation inference rule for INDs [CFP84]. Now let t 2 r i be a tuple, where
is a nonempty relation over R i , and assume that d j= \Sigma. It follows that A is redundant with
respect to \Sigma, since whenever we replace t[A] by a distinct value resulting in a database d 0 , it can
be seen that d 0 contrary to assumption. 2
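Theorem 4.3 yields a purely syntactic RFNF test when the INDs are noncircular; the following sketch is our own illustration of that test (BCNF checked via attribute closure, reducedness as in Definition 2.9).

def closure(attrs, fds):
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def in_rfnf(schemas, fds, inds):
    # schemas: dict name -> attribute list; fds: dict name -> [(lhs, rhs)] with frozensets;
    # inds: [(R, X, S, Y)], assumed noncircular.  By Theorem 4.3, R is in RFNF iff every
    # relation schema is in BCNF and the right-hand side of every IND is reduced.
    for name, attrs in schemas.items():
        for lhs, rhs in fds[name]:
            if not rhs <= lhs and not set(attrs) <= closure(lhs, fds[name]):
                return False                       # BCNF violated
    for _, _, s, y in inds:
        if any(a in closure(set(y) - {a}, fds[s]) for a in y):
            return False                           # Sigma not reduced
    return True

# Example 4.3 (HEAD/LECT): BCNF holds but Sigma is not reduced, so R is not in RFNF.
schemas = {"HEAD": ["H", "D"], "LECT": ["L", "D"]}
fds = {"HEAD": [(frozenset({"H"}), frozenset({"D"}))],
       "LECT": [(frozenset({"L"}), frozenset({"D"}))]}
inds = [("HEAD", ["H", "D"], "LECT", ["L", "D"])]
print(in_rfnf(schemas, fds, inds))   # False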
We next construct two examples which demonstrate that if the conditions of Theorem 4.3 are
violated then R is not in ARFNF.
Example 4.3 Let R be the database schema from Example 1.1 and consider the database d
over R, shown in Tables 2 and 3, respectively. It can be verified that the set of FDs and INDs for
this example is not reduced but that R is in BCNF with respect to the set of FDs. The attribute
D of the relation schema HEAD can be seen to be redundant, since changing e 1 in Table 2 causes
I to be violated. A similar situation occurs for every other database defined over R, and so R is
not in ARFNF.
Table 2: The relation in d over HEAD
Table 3: The relation in d over LECT
Example 4.4 Let R = {STUDENT, ENROL} be a database schema, with STUDENT = {Stud ID,
Name} and ENROL = {Stud ID, Course, Address}. Furthermore, let F = {STUDENT : Stud ID → Name, ENROL : Stud ID →
Address} be a set of FDs over R and I = {ENROL[Stud ID] ⊆ STUDENT[Stud ID]} be a set of
INDs over R. It can be verified that the set of FDs and INDs for this example is reduced but that
R is not in BCNF with respect to the set of FDs. Then R is not in RFNF because it is not in
VRFNF, since both occurrences of a 1 in ENROL are redundant in the database d shown in Tables
4 and 5, respectively.
Table 4: The relation in d over STUDENT (attributes Stud ID, Name)
Table 5: The relation in d over ENROL (attributes Stud ID, Course, Address)
As the next example shows we cannot extend Theorem 4.3 to the case when the set of INDs I
is circular.
Example 4.5 Consider a database schema and a set
\Sigma of FDs F and INDs I over R given by, Ag and I = fR[A] ' S[A],
R[A]g. It can be verified that \Sigma is reduced, R is in BCNF with respect to F and that I is
proper circular, unary, typed and also key-based.
Let d be a database over R such that d d be a nonempty relation over R and let
r be a tuple. Assume without loss of generality that d 0 is the database resulting from replacing
by a distinct value 1 resulting in a tuple t 0 , with t 0 1. In order to conclude the
example we show that d 0 6j= \Sigma. Assume to the contrary that d 0 \Sigma. Thus there must be a tuple
which is distinct from t and such that 1. If this is not the case then d 0 6j= R[A] ' S[A],
since -A
It follows that thus d A contrary to
assumption. Therefore the attribute A 2 R must be redundant with respect to \Sigma. The reader can
easily verify that the attribute A 2 S is also redundant with respect to \Sigma even if F was empty. It
appears that in this example the relation schema S can be removed from R without any loss of
semantics.
5 Insertion and modification anomalies
In this section, we investigate the conditions under which database design ensures the elimination
of key-based update anomalies, (as distinct from other types of update anomalies as investigated
in [BG80, Vin94]). This concept was originally introduced in [Fag81] to deal with the insertion and
deletion of tuples and was later extended in [Vin92, Vin94] to include the modifications of tuples. A
key-based update anomaly is defined to occur when an update to a relation, which can either be an
insertion or a deletion or a modification, results in the new relation satisfying key uniqueness but
violating some other constraint on the relation. The reason for this being considered undesirable
is that the enforcement of key uniqueness can be implemented via relational database software
in a much more efficient manner than the enforcement of more general constraints such as FDs
[Mai83, Ull88, AA93]. So, if the satisfaction of all the constraints on a relation is a result of key
uniqueness then the integrity of the relation after an update can be easily enforced, whereas the
existence of a key-based update anomaly implies the converse. Herein we formalise these concepts
based on this approach, with the only difference being that because of the presence of INDs we
allow an update to propagate to other relations by the chase procedure. We show that being free
of insertion anomalies is equivalent to being free of modification anomalies. In addition, we show
that when the INDs are noncircular then being free of either insertion or modification anomalies
is equivalent to the set of FDs and INDs being reduced and the database schema being in BCNF
with respect to the set of FDs. We do not consider deletion anomalies, since in the presence of
FDs removing a tuple from a relation that satisfies a set of FDs, does not cause any violation of
an FD in the set.
Definition 5.1 (Compatible tuple) A tuple t over R is compatible with d with respect to a set
of FDs and INDs Σ = F ∪ I over R (or simply compatible with d whenever Σ is understood from
context) if d ∪ {t} ⊨ KEYS(F).
Definition 5.2 (Free of insertion anomalies) A database d over R has an insertion violation
with respect to a set of FDs and INDs Σ = F ∪ I (or simply d has an insertion violation
whenever Σ is understood from context) if
1. d ⊨ Σ, and
2. there exists a tuple t over R which is compatible with d but CHASE(d ∪ {t}, I) ⊭ Σ.
A database schema R is free of insertion anomalies with respect to \Sigma (or simply R is free of
insertion anomalies if \Sigma is understood from context) if there does not exist a database d over R
which has an insertion violation.
We note that in Definition 5.2 we have utilised the chase procedure to enforce the propagation
of insertions of tuples due to the INDs in I. As an example of an insertion violation consider the
database schema R in Example 1.1 and let d be the database where r 1 , the relation over HEAD, is
empty and r_2, the relation over LECT, contains the single tuple <0, 0>. Then d has an insertion
violation when the tuple <0, 1> is inserted into r_1, since applying the chase procedure results in <0, 1>
being added to the relation r_2 and thus violating the FD L → D.
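This insertion violation can be replayed directly; in the self-contained sketch below (our own illustration) the single IND is applied by hand rather than through the full chase procedure.

def satisfies_fd(r, lhs, rhs):
    seen = {}
    for t in r:
        key, val = tuple(t[a] for a in lhs), tuple(t[a] for a in rhs)
        if key in seen and seen[key] != val:
            return False
        seen[key] = val
    return True

head, lect = [], [{"L": 0, "D": 0}]           # d: r_1 empty, r_2 = {<0,0>}
t = {"H": 0, "D": 1}                           # compatible with d (H is a key and r_1 is empty)
head.append(t)
# IND rule for HEAD[HD] contained in LECT[LD]: propagate the new HEAD tuple into LECT.
lect.append({"L": t["H"], "D": t["D"]})
print(satisfies_fd(lect, ["L"], ["D"]))        # False: the FD L -> D is now violated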
The next theorem shows that assuming that the set of INDs is noncircular, being in BCNF
and the set of FDs and INDs being reduced is equivalent to being free of insertion anomalies.
Theorem 5.1 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then R is free of insertion anomalies if and only if \Sigma is reduced and R is in BCNF with respect
to F.
Proof. If. Let d be a database over R such that d
a tuple which is compatible with d. It remains to show that CHASE(d [ ftg, I) We first
claim that CHASE(d [ ftg, This holds due to Theorem 3.1 implying
that the FD rule need never be invoked during the computation of CHASE(d[ ftg; \Sigma). Moreover,
due to the fact that R is in BCNF with respect to F and t is compatible with d. So,
by Theorem 2.2 we have CHASE(d [ ftg; \Sigma)
Only if. There are two cases to consider.
Case 1. If R is not in BCNF with respect to F, then some R i 2 R is not in BCNF with
respect to F_i. Thus there is a nontrivial FD X → Y implied by F_i such that X is not a superkey for R_i
with respect to F i . Assume that X is reduced with respect to R i and F i , otherwise replace X by a
reduced subset W of X such that W! X
i . It follows that X is a proper subset of a superkey
of R i with respect to F i . Let r i over R i contain a single tuple containing zeros and let all other
relations in d be empty. We can assume without loss of generality that d j= \Sigma, otherwise we let d
be CHASE(d; \Sigma). Due to I being noncircular the state of r i remains unchanged in CHASE(d; \Sigma).
let t be a tuple whose X-values are zeros and such that all its other values are ones. Then
t is compatible with d but CHASE(d ∪ {t}, I) ⊭ Σ, since the FD X → Y will be violated in the
current state of r_i. Therefore, R is not free of insertion anomalies.
Case 2. If Σ is not reduced but R is in BCNF with respect to F then we have an IND R_i[X] ⊆ R_j[Y] ∈ I such that
Y is a proper superset of a key, say W, for R_j with respect to F_j. We let r_j
over R j contain a single tuple, say t containing zeros and let all other relations in d, including
r i over R i , be empty. We can assume without loss of generality that d j= \Sigma, otherwise we let d
be CHASE(d; \Sigma). Due to I being noncircular the state of r i remains unchanged in CHASE(d; \Sigma).
a tuple which agrees with t j on its W-value but disagrees with t j on the rest
of its values. Then, t is compatible with d but CHASE(d ∪ {t}, I) ⊭ Σ, since the FD W → R_j
will be violated in the resulting current state of r_j. 2
To illustrate this theorem we note firstly that the example given before Theorem 5.1, demonstrates
the case where a database schema has an insertion anomaly when the set of dependencies
is not reduced. Alternatively, the following example demonstrates the case of a database schema
not being in BCNF and having an insertion anomaly.
Example 5.1 Let R, F and I be as in Example 4.4. We start with the database d shown in
Tables 6 and 7, respectively. If we then insert the tuple, which is compatible with
ENROL, into the ENROL relation, applying the chase procedure results in the database d′ shown
in Tables 8 and 9, respectively, where n_2 is a new value. It can be seen that d′ violates Σ and so
R is not free of insertion anomalies.
In the next example we show that the only if part of Theorem 5.1 is, in general, false, even
when I is a proper circular set of INDs.
Table 6: The relation in d over STUDENT (attributes Stud ID, Name)
Table 7: The relation in d over ENROL (attributes Stud ID, Course, Address)
Table 8: The relation in d′ over STUDENT (attributes Stud ID, Name)
Table 9: The relation in d′ over ENROL (attributes Stud ID, Course, Address)
Example 5.2 Consider a database schema and a set \Sigma of FDs
F and INDs I over R given by,
' R[AB]g. It can easily be verified that I is proper circular, R is in BCNF but that \Sigma is not
reduced. In addition, R is free of insertion anomalies, since for any database d = fr; sg such that
d are the relations in d over R and S, respectively, we have that due to
I. If we drop S[AB] ' R[AB] from I, then, as in the proof of the only if part of Theorem 5.1, R
has an insertion violation.
The next example illustrates that we cannot, in general, extend Theorem 5.1 to the case when
the set of INDs I is circular even when \Sigma is reduced, due to possible interaction between the FDs
and INDs.
Example 5.3 Consider a database schema and a set \Sigma of FDs F and
INDs I over R given by, R[B]g. It can be verified that \Sigma is
reduced but I is circular. As was shown in Example 3.2 although \Sigma is reduced \Sigma
thus F and I interact.
Let d be a database over R such that the relation r over R contains the single tuple !
and let t be the tuple ! has an insertion violation, since d
with d but CHASE(d [ ftg, I) A. (In fact, in this case the chase procedure does not
terminate, but since t is inserted into r and the chase procedure does not modify any of the tuples
in its input database, then it does not satisfy \Sigma; see the comment after Definition 2.10.)
We now formally define the second type of key-based update anomaly, a modification anomaly,
following the approach in [Vin92, Vin94] with the only difference again being that the chase
procedure is used to propagate the effects of the change into other relations.
Definition 5.3 (Free of modification anomalies) A database d over R has a modification
violation with respect to a set of FDs and INDs Σ = F ∪ I (or simply d has a modification
violation whenever Σ is understood from context) if
1. d ⊨ Σ, and
2. there exists a tuple u ∈ r, where r ∈ d is the relation over R, and a tuple t over R which is
compatible with d − {u} but CHASE((d − {u}) ∪ {t}, I) ⊭ Σ.
A database schema R is free of modification anomalies with respect to \Sigma (or simply R is free
of modification anomalies if \Sigma is understood from context) if there does not exist a database d
over R which has a modification violation.
Theorem 5.2 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then R is free of modification anomalies if and only if R is free of insertion anomalies.
Proof. If. Let d be a database over R such that d be a tuple that is compatible with
d and u 2 r be a tuple, where r 2 d is the relation over R. It follows that t is compatible with
fug. We need to show that if CHASE(d [ ftg, I)
\Sigma. By Theorem 2.2 of the chase procedure CHASE((d remains
to show that CHASE((d \Gamma fug) [ ftg, I)
g. Then by Definition 2.10 of the chase procedure we
have that for all It follows that CHASE((d since by
Definition 5.2 CHASE(d [ ftg, I)
Only if. If R is free of modification anomalies then by a similar argument to that made in the
only if part of the proof of Theorem 5.1 it follows that \Sigma is reduced and R is in BCNF. In the
first case we add an additional tuple u over R i , which contains ones, to the original state of r i ,
and in the second case we add an additional tuple u over R i , which contains zeros, to the original
state of r i . The result now follows by the if part of Theorem 5.1. 2
Combining Theorems 4.3, 5.1 and 5.2 we obtain the next result.
Corollary 5.3 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then the following statements are equivalent:
(i) R is free of insertion anomalies.
(ii) R is free of modification anomalies.
(iii) R is in RFNF. 2
6 Generalised entity integrity
In this section we justify superkey-based INDs on the basis that they do not cause the propagation
of the insertion of tuples that represent undefined entities, thus causing the violation of entity
integrity. This problem was illustrated in Example 1.2 given in the introduction.
In the next definition we view the chase procedure as a mechanism which enforces the propagation
of insertions of tuples due to the INDs in I.
Definition 6.1 (Generalised entity integrity) Let t be a tuple that is added to a relation r i
in the current state of a database d, during the computation of CHASE(d; \Sigma). Then, t is
entity-based if there exists at least one key X for R i with respect to F i such that for all A 2 X,
t[A] is not a new value that is assigned to t as a result of invoking the IND rule.
A database schema R satisfies generalised entity integrity with respect to a set Σ = F ∪ I of
FDs and INDs over R if for all databases d over R, all the tuples that are added to relations in
the current state of d during the computation of CHASE(d; \Sigma) are entity-based.
The next theorem shows that satisfaction of generalised entity integrity is equivalent to the
set of INDs being superkey-based.
Theorem 6.1 A database schema R satisfies generalised entity integrity with respect to a set of
FDs and INDs Σ = F ∪ I if and only if I is superkey-based.
Proof. If I is superkey-based then the result immediately follows by the definition of the IND rule
(see Definition 2.10). On the other hand, if I is not superkey-based then there is some IND R i [X]
⊆ R_j[Y] ∈ I such that Y is not a superkey for R_j with respect to F_j. Let d be a database over R
such that all its relations apart for r i over R i are empty. The relation r i has a single tuple. By
the definition of the FD rule (see Definition 2.10), we have that for every key, say K, of R j with
respect to F j there is at least one attribute, say A 2 K, such that the tuple t j added to r j over R j
is assigned a new A-value by the IND rule, otherwise, contrary to assumption, we can deduce that
Y is a superkey for R j with respect to F j . It follows that R does not satisfy generalised entity
integrity, concluding the proof. 2
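Theorem 6.1 reduces generalised entity integrity to a syntactic condition on I; the sketch below is our own illustration of the corresponding check, using the closure-based superkey test.

def closure(attrs, fds):
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def superkey_based(inds, schemas, fds):
    # Every IND R[X] contained in S[Y] must have a right-hand side Y that is a superkey for S.
    return all(set(schemas[s]) <= closure(y, fds[s]) for (_, _, s, y) in inds)

# Example 1.2: EMP[P] contained in PROJ[P], but {P} is not a superkey of PROJ (its key is {P, L}),
# so generalised entity integrity fails.
schemas = {"EMP": ["E", "P"], "PROJ": ["P", "L"]}
fds = {"EMP": [(frozenset({"E"}), frozenset({"P"}))], "PROJ": []}
print(superkey_based([("EMP", ["P"], "PROJ", ["P"])], schemas, fds))   # False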
7 Inclusion dependency normal form
A database schema is in IDNF with respect to a set of FDs and INDs if it is in BCNF with respect
to the set of FDs and the set of INDs is noncircular and key-based. We show that a database
schema is in IDNF if and only if it satisfies generalised entity integrity and is either free of insertion
anomalies or free of modification anomalies or in redundancy free normal form.
We next formally define IDNF (cf. [MR86, MR92]).
Definition 7.1 (Inclusion dependency normal form) A database schema R is in Inclusion
Dependency Normal Form (IDNF) with respect to a set Σ of FDs F and INDs I over R (or
simply in IDNF if \Sigma is understood from context) if
1. R is in BCNF with respect to F, and
2. I is a noncircular and key-based set of INDs.
We note that if the set of INDs I is empty then R being in IDNF is equivalent to R being in
BCNF. We further note that we have not restricted the FDs in F to be standard.
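For illustration, the three conditions of Definition 7.1 can be combined into a single syntactic IDNF check; the sketch below is ours (keys are obtained here by minimising superkeys, and the cycle test is the one described for Definition 2.7).

from itertools import combinations

def closure(attrs, fds):
    result, changed = set(attrs), True
    while changed:
        changed = False
        for lhs, rhs in fds:
            if lhs <= result and not rhs <= result:
                result |= rhs
                changed = True
    return result

def is_superkey(attrs, schema, fds):
    return set(schema) <= closure(attrs, fds)

def is_key(attrs, schema, fds):
    attrs = set(attrs)
    if not is_superkey(attrs, schema, fds):
        return False
    # Minimality: no proper subset of attrs may be a superkey.
    return not any(is_superkey(set(sub), schema, fds)
                   for i in range(len(attrs)) for sub in combinations(attrs, i))

def noncircular(schemas, inds):
    # No nontrivial self-IND and no cycle in the IND graph (Definition 2.7).
    if any(r == s and x != y for (r, x, s, y) in inds):
        return False
    edges = {R: {S for (r, _, S, _) in inds if r == R and S != R} for R in schemas}
    colour = {R: 0 for R in schemas}
    def dfs(u):
        colour[u] = 1
        if any(colour[v] == 1 or (colour[v] == 0 and dfs(v)) for v in edges[u]):
            return True
        colour[u] = 2
        return False
    return not any(colour[R] == 0 and dfs(R) for R in schemas)

def in_idnf(schemas, fds, inds):
    bcnf = all(rhs <= lhs or is_superkey(lhs, schemas[n], fds[n])
               for n in schemas for lhs, rhs in fds[n])
    key_based = all(is_key(y, schemas[s], fds[s]) for (_, _, s, y) in inds)
    return bcnf and key_based and noncircular(schemas, inds)

# Example 1.1 after the repair suggested there (drop D from HEAD): now in IDNF.
schemas = {"HEAD": ["H"], "LECT": ["L", "D"]}
fds = {"HEAD": [], "LECT": [(frozenset({"L"}), frozenset({"D"}))]}
print(in_idnf(schemas, fds, [("HEAD", ["H"], "LECT", ["L"])]))   # True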
The next result follows from Corollary 5.3, Theorem 6.1 and Definition 7.1.
Theorem 7.1 Let Σ = F ∪ I be a set of FDs and noncircular INDs over a database schema R.
Then the following statements are equivalent:
(i) R is in IDNF
(ii) R is free of insertion anomalies and satisfies generalised entity integrity.
(iii) R is free of modification anomalies and satisfies generalised entity integrity.
(iv) R is in RFNF and satisfies generalised entity integrity. 2
8 Concluding Remarks
We have identified three problems that may arise when designing databases in the presence of
FDs and INDs, apart from the update anomalies and redundancy problems that may arise in each
relation due to the FDs considered on their own. The first problem is that of attribute redundancy,
the second problem is the potential violation of entity integrity when propagating insertions, and
the third problem concerns avoiding the complex interaction which may occur between FDs and
INDs and the intractability of determining such interaction. The first problem was formalised
through RFNF and it was shown in Corollary 5.3 that a database schema is in RFNF with respect
to a set of FDs and INDs if and only if it is free of insertion anomalies or equivalently free
of modification anomalies. This result can be viewed as an extension of a similar result when
considering FDs on their own. The second problem was formalised through generalised entity
integrity and it was shown in Theorem 6.1 that a database schema satisfies generalised entity
integrity with respect to a set of FDs and INDs if and only if the set of INDs is superkey-based.
The third problem was formalised through the non interaction of the implication problem for FDs
and INDs and it was shown in Theorem 3.1 that a set of FDs and INDs do not interact when
the set of INDs is proper circular, the set of FDs and INDs are reduced and the database schema
is in BCNF. Combining all these results together we obtained, in Theorem 7.1, three equivalent
semantic characterisations of IDNF. Theorem 7.1 justifies IDNF as a robust normal form that
eliminates both redundancy and update anomalies from the database schema.
If the goal of normalisation is to reduce redundancy then it seems that apart from R being
in BCNF, in general, we must restrict the set of INDs to be noncircular (see Example 4.5).
Nonetheless circular sets of INDs arise in practice, for example when we want to express pairwise
consistency. (Two relation schemas R and S are consistent if the set of INDs I includes the two
INDs: R[X] ⊆ S[X] and S[X] ⊆ R[X], where X is the set of common attributes of R and S. A database schema R is pairwise consistent
if every pair of its relation schemas are consistent [BFMY83]; we note that pairwise consistency
can be expressed by a set of proper circular INDs.) In this case we need alternative semantics
to express the goal of normalisation. A minimal requirement is that the FDs and INDs have no
interaction. By Theorem 3.1 as long as the set of FDs and INDs I is reduced and R
is in BCNF with respect to F then, when the set of INDs I expresses pairwise consistency, F
does not interact with I, since I is proper circular. Consider a BCNF database schema R with
relation schemas EMP = {ENAME, DNAME} and DEPT = {DNAME, MGR},
with F = {EMP : ENAME → DNAME, DEPT : DNAME → MGR} and I = {EMP[DNAME] ⊆ DEPT[DNAME], DEPT[DNAME] ⊆
EMP[DNAME]}. The set of INDs I
expresses the fact that all employees work in departments that exist and all departments have at
least one employee. The first IND is key-based but the second is not. Despite this fact it can be
verified that F and I do not interact. Moreover, if managers are also employees then we could add
the IND DEPT[MGR] ' EMP[ENAME] to I, and it can be verified by exhibiting the appropriate
counterexamples that it is still true that F and I do not interact although, now I is not even
proper circular. The database schema R seems to be a reasonable design but it is not in IDNF.
Further research needs to be carried out to determine the semantics of normal forms for such FDs
and INDs. We conclude the paper by proposing such a normal form. A database schema R is
in Interaction Free Inclusion Dependency Normal Form with respect to a set Σ of FDs F and
INDs I over R if
1. R is in BCNF with respect to F,
2. All the INDs in I are either key-based or express pairwise consistency, and
3. F and I do not interact.
--R
Relational Database Theory.
Dependency structures of data base relationships.
Computational problems related to the design of normal form relational schemas.
Objects in relational database schemes with functional
On the desirability of acyclic database schemes.
What does Boyce-Codd normal form do? In Proceedings of the International Conference on Very Large Data Bases.
Inclusion dependencies and their interaction with functional dependencies.
A design theory for solving the anomalies problem.
A graph theoretic approach.
Recent investigations in relational data base systems.
Extending the database relational model to capture more meaning.
Enforcing
Towards a sound view integration methodology.
The implication problem for functional and inclusion dependencies.
Referential integrity.
Multivalued dependencies and a new normal form for relational databases.
Normal forms and relational database operators.
A normal form for relational databases that is based on domains and keys.
Abstraction in query processing.
Testing containment of conjunctive queries under functional and
Logical database design with
How to prevent interaction of functional and inclusion dependencies.
The Theory of Relational Databases.
On the complexity of the inference problem for subclasses of
The implication problem for functional and inclusion dependencies.
Comparing the universal instance and relational data models.
Principles of Database and Knowledge-Base Systems
Modification anomalies and Boyce-Codd normal form
The Semantic Justification for Normal Forms in Relational Database Design.
Redundancy elimination and a new normal form for relational databases.
--TR
--CTR
Fabien De Marchi , Jean-Marc Petit, Semantic sampling of existing databases through informative Armstrong databases, Information Systems, v.32 n.3, p.446-457, May, 2007
Stéphane Lopes , Jean-Marc Petit , Farouk Toumani, Discovering interesting inclusion dependencies: application to logical database tuning, Information Systems, v.27 n.1, p.1-19, March 2002
Junhu Wang, Binary equality implication constraints, normal forms and data redundancy, Information Processing Letters, v.101 n.1, p.20-25, January 2007
Laura C. Rivero , Jorge H. Doorn , Viviana E. Ferraggine, Elicitation and conversion of hidden objects and restrictions in a database schema, Proceedings of the 2002 ACM symposium on Applied computing, March 11-14, 2002, Madrid, Spain
Millist W. Vincent , Jixue Liu , Chengfei Liu, Strong functional dependencies and their application to normal forms in XML, ACM Transactions on Database Systems (TODS), v.29 n.3, p.445-462, September 2004
Marcelo Arenas , Leonid Libkin, An information-theoretic approach to normal forms for relational and XML data, Journal of the ACM (JACM), v.52 n.2, p.246-283, March 2005
Marcelo Arenas , Leonid Libkin, An information-theoretic approach to normal forms for relational and XML data, Proceedings of the twenty-second ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, p.15-26, June 09-11, 2003, San Diego, California
Marcelo Arenas, Normalization theory for XML, ACM SIGMOD Record, v.35 n.4, December 2006
Solmaz Kolahi , Leonid Libkin, On redundancy vs dependency preservation in normalization: an information-theoretic study of 3NF, Proceedings of the twenty-fifth ACM SIGMOD-SIGACT-SIGART symposium on Principles of database systems, June 26-28, 2006, Chicago, IL, USA
Mark Levene , George Loizou, Why is the snowflake schema a good data warehouse design?, Information Systems, v.28 n.3, p.225-240, May | relational database design;functional dependency;normal forms;inclusion dependency |
628066 | Scalable Algorithms for Association Mining. | AbstractAssociation rule discovery has emerged as an important problem in knowledge discovery and data mining. The association mining task consists of identifying the frequent itemsets and then, forming conditional implication rules among them. In this paper, we present efficient algorithms for the discovery of frequent itemsets which forms the compute intensive phase of the task. The algorithms utilize the structural properties of frequent itemsets to facilitate fast discovery. The items are organized into a subset lattice search space, which is decomposed into small independent chunks or sublattices, which can be solved in memory. Efficient lattice traversal techniques are presented which quickly identify all the long frequent itemsets and their subsets if required. We also present the effect of using different database layout schemes combined with the proposed decomposition and traversal techniques. We experimentally compare the new algorithms against the previous approaches, obtaining improvements of more than an order of magnitude for our test databases. | Introduction
The association mining task is to discover a set of attributes shared among a large number of objects in
a given database. For example, consider the sales database of a bookstore, where the objects represent
customers and the attributes represent books. The discovered patterns are the set of books most frequently
bought together by the customers. An example could be that, \40% of the people who buy Jane Austen's
Pride and Prejudice also buy Sense and Sensibility." The store can use this knowledge for promotions, shelf
placement, etc. There are many potential application areas for association rule technology, which include
catalog design, store layout, customer segmentation, telecommunication alarm diagnosis, and so on.
The task of discovering all frequent associations in very large databases is quite challenging. The search
space is exponential in the number of database attributes, and with millions of database objects the problem
of I/O minimization becomes paramount. However, most current approaches are iterative in nature, requiring
multiple database scans, which is clearly very expensive. Some of the methods, especially those using some
form of sampling, can be sensitive to the data-skew, which can adversely affect performance. Furthermore,
most approaches use very complicated internal data structures which have poor locality and add additional
space and computation overheads. Our goal is to overcome all of these limitations.
In this paper we present new algorithms for discovering the set of frequent attributes (also called itemsets).
The key features of our approach are as follows:
1. We use a vertical tid-list database format, where we associate with each itemset a list of transactions in
which it occurs. We show that all frequent itemsets can be enumerated via simple tid-list intersections (see the sketch after this list).
2. We use a lattice-theoretic approach to decompose the original search space (lattice) into smaller pieces
(sub-lattices), which can be processed independently in main-memory. We propose two techniques for
achieving the decomposition: prefix-based and maximal-clique-based partition.
3. We decouple the problem decomposition from the pattern search. We propose three new search strategies
for enumerating the frequent itemsets within each sub-lattice: bottom-up, top-down and hybrid
search.
4. Our approach roughly requires only a few database scans, minimizing the I/O costs.
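As a concrete illustration of feature 1 above (this sketch is ours, and the transaction data in it is hypothetical, merely resembling the running example of Figure 1), support counting over the vertical layout reduces to intersections of tid-lists.

# A horizontal transaction database (hypothetical data).
transactions = {1: {"A", "C", "T", "W"}, 2: {"C", "D", "W"}, 3: {"A", "C", "T", "W"},
                4: {"A", "C", "D", "W"}, 5: {"A", "C", "D", "T", "W"}, 6: {"C", "D", "T"}}

# Vertical layout: each item is mapped to its tid-list (the set of tids containing it).
tidlists = {}
for tid, items in transactions.items():
    for item in items:
        tidlists.setdefault(item, set()).add(tid)

def support(itemset):
    # The support of an itemset is the size of the intersection of its members' tid-lists.
    return len(set.intersection(*(tidlists[i] for i in itemset)))

print(support({"A", "C", "W"}))   # 4 for this sample data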
We present six new algorithms combining the features listed above, depending on the database format,
the decomposition technique, and the search procedure used. These include Eclat (Equivalence CLAss
Transformation), MaxEclat, Clique, MaxClique, TopDown, and AprClique. Our new algorithms not only
minimize I/O costs by making only a small number of database scans, but also minimize computation costs
by using efficient search schemes. The algorithms are particularly effective when the discovered frequent
itemsets are long. Our tid-list based approach is also insensitive to data-skew. In fact the MaxEclat and
MaxClique algorithms exploit the skew in tid-lists (i.e., the support of the itemsets) to reorder the search, so
that the long frequent itemsets are listed first. Furthermore, the use of simple intersection operations makes
the new algorithms an attractive option for direct implementation in database systems, using SQL. With the
help of an extensive set of experiments, we show that the best new algorithm improves over current methods
by over an order of magnitude. At the same time, the proposed techniques retain linear scalability in the
number of transactions in the database.
The rest of this paper is organized as follows: In Section 2 we describe the association discovery problem.
We look at related work in Section 3. In Section 4 we develop our lattice-based approach for problem
decomposition and pattern search. Section 5 describes our new algorithms. Some previous methods, used
for experimental comparison, are described in more detail in Section 6. An experimental study is presented
in Section 7, and we conclude in Section 8. Some mining complexity results for frequent itemsets and their
link to graph-theory are highlighted in Appendix A.
2 Problem Statement
The association mining task, first introduced in [1], can be stated as follows: Let I be a set of items, and D
a database of transactions, where each transaction has a unique identifier (tid) and contains a set of items.
A set of items is also called an itemset. An itemset with k items is called a k-itemset. The support of an
itemset X, denoted σ(X), is the number of transactions in which it occurs as a subset. A k length subset of
an itemset is called a k-subset. An itemset is maximal if it is not a subset of any other itemset. An itemset
is frequent if its support is more than a user-specified minimum support (min sup) value. The set of frequent
k-itemsets is denoted F_k.
An association rule is an expression A ⇒ B, where A and B are itemsets. The support of the rule is
given as σ(A ∪ B), and the confidence as σ(A ∪ B)/σ(A) (i.e., the conditional probability that a transaction
contains B, given that it contains A). A rule is confident if its confidence is more than a user-specified
minimum confidence (min conf).
The data mining task is to generate all association rules in the database, which have a support greater
than min sup, i.e., the rules are frequent. The rules must also have confidence greater than min conf, i.e.,
the rules are confident. This task can be broken into two steps [2]:
1. Find all frequent itemsets. This step is computationally and I/O intensive. Given m items, there can
be potentially 2^m frequent itemsets. Efficient methods are needed to traverse this exponential itemset
search space to enumerate all the frequent itemsets. Thus frequent itemset discovery is the main focus
of this paper.
2. Generate confident rules. This step is relatively straightforward; rules of the form X \ Y ⇒ Y, with Y ⊂ X,
are generated for all frequent itemsets X, provided the rules have at least minimum confidence (see the sketch after this list).
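A minimal sketch of step 2 is given below (our own illustration; it assumes that the support of every frequent itemset, and hence of every subset of a frequent itemset, is available in a dictionary).

from itertools import combinations

def gen_rules(supports, min_conf):
    # supports: dict mapping frozenset itemsets to their support counts; by downward closure
    # every subset of a frequent itemset is also present.
    rules = []
    for itemset, sup in supports.items():
        for k in range(1, len(itemset)):
            for lhs in map(frozenset, combinations(itemset, k)):
                conf = sup / supports[lhs]         # confidence of lhs => itemset - lhs
                if conf >= min_conf:
                    rules.append((set(lhs), set(itemset - lhs), conf))
    return rules

# Hypothetical supports for a few frequent itemsets.
supports = {frozenset("A"): 4, frozenset("W"): 5, frozenset("AW"): 4}
print(gen_rules(supports, 1.0))   # [({'A'}, {'W'}, 1.0)]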
Consider an example bookstore sales database shown in Figure 1. There are five different items (names
of authors the bookstore carries), i.e., I = {A, C, D, T, W}, and the database consists of six customers who
bought books by these authors. Figure 1 shows all the frequent itemsets that are contained in at least three
customer transactions, i.e., min sup = 50%. It also shows the set of all association rules with min conf =
100%. The itemsets ACTW and CDW are the maximal frequent itemsets. Since all other frequent
Conan Doyle
Sir Arthur D
Agatha Christie C
Jane Austen A
MarkTwain T
Wodehouse
P. G. W
A C D T W
A C D W
A C T W
C D W
A C T W246
Transaction Items
ITEMS
A
A
A
A (3/3)
AT
AT
AC
W, CW
A, D, T, AC, AW
CD, CT, ACW
100%
50% (3)
Itemsets
Support
CTW,
Maximal Frequent Itemsets:
AT, DW, TW, ACT, ATW
CDW, ACTW
CDW, ACTW
ASSOCIATION RULES
ACT
A (3/3)
AT CW (3/3)
Figure
1: a) Bookstore Database, b) Frequent Itemsets and Condent Rules
itemsets are subsets of one of these two maximal itemsets, we can reduce the frequent itemset search problem
to the task of enumerating only the maximal frequent itemsets. On the other hand, for generating all the
condent rules, we need the support of all frequent itemsets. This can be easily accomplished once the
maximal elements have been identied, by making an additional database pass, and gathering the support
of all uncounted subsets.
3 Related Work
Several algorithms for mining associations have been proposed in the literature [1, 2, 6, 15, 19, 20, 21, 23,
26, 27]. The Apriori algorithm [2] is the best known previous algorithm, and it uses an efficient candidate
generation procedure, such that only the frequent itemsets at a level are used to construct candidates at the
next level. However, it requires multiple database scans, as many as the longest frequent itemset. The DHP
algorithm [23] tries to reduce the number of candidates by collecting approximate counts in the previous
level. Like Apriori it requires as many database passes as the longest itemset. The Partition algorithm [26]
minimizes I/O by scanning the database only twice. It partitions the database into small chunks which can
be handled in memory. In the first pass it generates the set of all potentially or locally frequent itemsets,
and in the second pass it counts their global support. Partition may enumerate too many false positives in
the first pass, i.e., itemsets locally frequent in some partition but not globally frequent. If this local frequent set doesn't fit in memory, then additional database scans will be required. The DLG [28] algorithm uses a
bit-vector per item, noting the tids where the item occurred. It generates frequent itemsets via logical AND
operations on the bit-vectors. However, DLG assumes that the bit vectors fit in memory, and thus scalability
could be a problem for databases with millions of transactions. The DIC algorithm [6] dynamically counts
candidates of varying length as the database scan progresses, and thus is able to reduce the number of
scans over Apriori. Another way to minimize the I/O overhead is to work with only a small sample of the
database. An analysis of the effectiveness of sampling for association mining was presented in [29], and [27] presents an exact algorithm that finds all rules using sampling. The AS-CPA algorithm and its sampling
versions [20] build on top of Partition and produce a much smaller set of potentially frequent candidates. It
requires at most two database scans. Approaches using only general-purpose DBMS systems and relational
algebra operations have also been studied [14, 15]. Detailed architectural alternatives in the tight-integration
of association mining with DBMS were presented in [25]. They also pointed out the benefits of using the
vertical database layout.
All the above algorithms generate all possible frequent itemsets. Methods for finding the maximal elements
include All-MFS [12], which is a randomized algorithm to discover maximal frequent itemsets. The
Pincer-Search algorithm [19] not only constructs the candidates in a bottom-up manner like Apriori, but
also starts a top-down search at the same time. This can help in reducing the number of database scans.
MaxMiner [5] is another algorithm for finding the maximal elements. It uses efficient pruning techniques
to quickly narrow the search space. Our new algorithms range from those that generate all the frequent
itemsets, to hybrid schemes that generate some maximal along with the remaining itemsets. It is worth
noting that since the enumeration task is computationally challenging, a number of parallel algorithms have also been proposed [3, 7, 13, 31].
4 Itemset Enumeration: Lattice-Based Approach
Before embarking on the algorithm description, we will briefly review some terminology from lattice theory (see [8] for a good introduction).
Definition 1 Let P be a set. A partial order ≤ on P is a binary relation, such that for all X, Y, Z ∈ P, the relation is: 1) reflexive: X ≤ X; 2) anti-symmetric: X ≤ Y and Y ≤ X imply X = Y; and 3) transitive: X ≤ Y and Y ≤ Z imply X ≤ Z. The set P with the relation ≤ is called an ordered set.
Definition 2 Let P be an ordered set, and let X, Y, Z ∈ P. We say X is covered by Y, denoted X ⋖ Y, if X < Y and there is no element Z of P with X < Z < Y.
Definition 3 Let P be an ordered set, and let S ⊆ P. An element X ∈ P is an upper bound (lower bound) of S if s ≤ X (s ≥ X) for all s ∈ S. The least upper bound, also called join, of S is denoted as ⋁S, and the greatest lower bound, also called meet, of S is denoted as ⋀S. The greatest element of P, denoted ⊤, is called the top element, and the least element of P, denoted ⊥, is called the bottom element.
Definition 4 Let L be an ordered set. L is called a join (meet) semilattice if X ∨ Y (X ∧ Y) exists for all X, Y ∈ L. L is called a lattice if it is both a join and meet semilattice, i.e., if X ∨ Y and X ∧ Y exist for all pairs of elements X, Y ∈ L. L is a complete lattice if ⋁S and ⋀S exist for all subsets S ⊆ L. A set M ⊆ L is a sublattice of L if X, Y ∈ M implies X ∨ Y ∈ M and X ∧ Y ∈ M.
Figure 2: The Complete Powerset Lattice P(I) (grey circles mark the frequent itemsets; black circles mark the maximal frequent itemsets ACTW and CDW)
For a set S, the ordered set P(S), the power set of S, is a complete lattice in which join and meet are given by union and intersection, respectively: ⋁_{i∈I} A_i = ⋃_{i∈I} A_i and ⋀_{i∈I} A_i = ⋂_{i∈I} A_i. The top element of P(S) is ⊤ = S, and the bottom element of P(S) is ⊥ = {}. For any L ⊆ P(S), L is called a lattice of sets if it is closed under finite unions and intersections, i.e., (L, ⊆) is a lattice with the partial order specified by the subset relation ⊆, X ∨ Y = X ∪ Y, and X ∧ Y = X ∩ Y.
Figure 2 shows the powerset lattice P(I) of the set of items in our example database I = {A, C, D, T, W}. Also shown are the frequent (grey circles) and maximal frequent itemsets (black circles). It can be observed that the set of all frequent itemsets forms a meet semilattice since it is closed under the meet operation, i.e., for any frequent itemsets X and Y, X ∩ Y is also frequent. On the other hand, it doesn't form a join semilattice, since X and Y frequent doesn't imply that X ∪ Y is frequent. It can be mentioned that the infrequent itemsets form a join semilattice.
Lemma 1 All subsets of a frequent itemset are frequent.
The above lemma is a consequence of the closure under meet operation for the set of frequent itemsets.
As a corollary, we get that all supersets of an infrequent itemset are infrequent. This observation forms the
basis of a very powerful pruning strategy in a bottom-up search procedure for frequent itemsets, which has
been leveraged in many association mining algorithms [2, 23, 26]. Namely, only the itemsets found to be
frequent at the previous level need to be extended as candidates for the current level. However, the lattice
formulation makes it apparent that we need not restrict ourselves to a purely bottom-up search.
Lemma 2 The maximal frequent itemsets uniquely determine all frequent itemsets.
This observation tells us that our goal should be to devise a search procedure that quickly identifies the maximal frequent itemsets. In the following sections we will see how to do this efficiently.
4.1 Support Counting
Definition 5 A lattice L is said to be distributive if for all X, Y, Z ∈ L, X ∧ (Y ∨ Z) = (X ∧ Y) ∨ (X ∧ Z).
Definition 6 Let L be a lattice with bottom element ⊥. Then X ∈ L is called an atom if ⊥ ⋖ X, i.e., X covers ⊥. The set of atoms of L is denoted by A(L).
Definition 7 A lattice L is called a Boolean lattice if
1) It is distributive.
2) It has ⊤ and ⊥ elements.
3) Each member X of the lattice has a complement.
We begin by noting that the powerset lattice P(I) on the set of database items I is a Boolean lattice, with the complement of X ∈ L given as I\X. The set of atoms of the powerset lattice corresponds to the set of items, i.e., A(P(I)) = I. We associate with each atom (database item) X its tid-list, denoted L(X), which is a list of all transaction identifiers containing the atom. Figure 3 shows the tid-lists for the atoms in our example database. For example consider atom A. Looking at the database in Figure 1, we see that A occurs in transactions 1, 3, 4, and 5. This forms the tid-list for atom A.
Figure 3: Tid-Lists for the Atoms
Lemma 3 ([8]) For a finite boolean lattice L, with X ∈ L, X = ⋁{Y ∈ A(L) | Y ≤ X}.
In other words every element of a boolean lattice is given as a join of a subset of the set of atoms. Since the powerset lattice P(I) is a boolean lattice, with the join operation corresponding to set union, we get
Lemma 4 For any X ∈ P(I), let J = {Y ∈ A(P(I)) | Y ≤ X}. Then X = ⋃_{Y∈J} Y, and σ(X) = |⋂_{Y∈J} L(Y)|.
The above lemma states that any itemset can be obtained as a join of some atoms of the lattice, and the support of the itemset can be obtained by intersecting the tid-lists of the atoms. We can generalize this lemma to a set of itemsets:
Lemma 5 For any X ∈ P(I), let X = ⋃_{Y∈J} Y for some collection of itemsets J ⊆ P(I). Then σ(X) = |⋂_{Y∈J} L(Y)|.
This lemma says that if an itemset is given as a union of a set of itemsets in J, then its support is given as the intersection of tid-lists of elements in J. In particular we can determine the support of any k-itemset by simply intersecting the tid-lists of any two of its (k−1) length subsets. A simple check on the cardinality of the resulting tid-list tells us whether the new itemset is frequent or not. Figure 4 shows this process pictorially. It shows the initial database with the tid-list for each item (i.e., the atoms). The intermediate tid-list for CD is obtained by intersecting the lists of C and D, i.e., L(CD) = L(C) ∩ L(D). Similarly, L(CDW) = L(CD) ∩ L(CW), and so on. Thus, only the lexicographically first two subsets at the previous level are required to compute the support of an itemset at any level.
Figure 4: Computing Support of Itemsets via Tid-List Intersections
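A small Python illustration of Lemmas 4 and 5 on the running example follows. Only A's tid-list (transactions 1, 3, 4, 5) is stated explicitly in the text, so the remaining tid-lists below are assumed values chosen to be consistent with the frequent itemsets reported for this example:

    # vertical layout: item -> set of transaction identifiers (tid-list)
    vertical = {'A': {1, 3, 4, 5}, 'C': {1, 2, 3, 4, 5, 6}, 'D': {2, 4, 5, 6},
                'T': {1, 3, 5, 6}, 'W': {1, 2, 3, 4, 5}}

    def support(itemset, vertical):
        # sigma(X) = cardinality of the intersection of the atoms' tid-lists
        return len(set.intersection(*(vertical[i] for i in itemset)))

    assert support('CD', vertical) == 4    # L(C) & L(D) = {2, 4, 5, 6}
    assert support('ACTW', vertical) == 3  # the maximal itemset ACTW at min_sup = 3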
Lemma 6 Let X and Y be two itemsets, with X ⊆ Y. Then L(X) ⊇ L(Y).
Proof: Follows from the definition of support.
This lemma states that if X is a subset of Y , then the cardinality of the tid-list of Y (i.e., its support)
must be less than or equal to the cardinality of the tid-list of X . A practical and important consequence
of the above lemma is that the cardinalities of intermediate tid-lists shrink as we move up the lattice. This
results in very fast intersection and support counting.
4.2 Lattice Decomposition: Prefix-Based Classes
If we had enough main-memory we could enumerate all the frequent itemsets by traversing the powerset
lattice, and performing intersections to obtain itemset supports. In practice, however, we have only a limited
amount of main-memory, and all the intermediate tid-lists will not fit in memory. This brings up a natural question: can we decompose the original lattice into smaller pieces such that each portion can be solved independently in main-memory? We address this question below.
Definition 8 Let P be a set. An equivalence relation ≡ on P is a binary relation such that for all X, Y, Z ∈ P, the relation is: 1) reflexive: X ≡ X; 2) symmetric: X ≡ Y implies Y ≡ X; and 3) transitive: X ≡ Y and Y ≡ Z imply X ≡ Z. The equivalence relation partitions the set P into disjoint subsets called equivalence classes. The equivalence class of an element X ∈ P is given as [X] = {Y ∈ P | X ≡ Y}.
Define a function p(X, k) = X[1:k], the k length prefix of X. Define an equivalence relation θ_k on the lattice P(I) as follows: ∀X, Y ∈ P(I), X ≡_{θ_k} Y ⇔ p(X, k) = p(Y, k). That is, two itemsets are in the same class if they share a common k length prefix. We therefore call θ_k a prefix-based equivalence relation.
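As a concrete illustration of the prefix-based decomposition (the function name and representation below are illustrative assumptions, not the paper's code), the θ_1 classes can be obtained from the frequent 2-itemsets with a single pass:

    from collections import defaultdict

    def prefix_classes(frequent_2_itemsets):
        # group F2 by the first item; each class holds the atoms of one sub-lattice
        classes = defaultdict(list)
        for itemset in sorted(frequent_2_itemsets):    # itemsets as sorted tuples
            classes[itemset[0]].append(itemset)
        return classes

    # prefix_classes([('A','C'), ('A','T'), ('A','W'), ('C','D'), ('C','W'), ('D','W')])
    # -> {'A': [('A','C'), ('A','T'), ('A','W')], 'C': [('C','D'), ('C','W')], 'D': [('D','W')]}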
Figure 5: Equivalence Classes of a) P(I) Induced by θ_1, b) [A]_{θ_1} Induced by θ_2, and c) Final Lattice of Independent Classes
Figure 5a shows the lattice induced by the equivalence relation θ_1 on P(I), where we collapse all itemsets with a common 1 length prefix into an equivalence class. The resulting set or lattice of equivalence classes is {[A], [C], [D], [T], [W]}.
Lemma 7 Each equivalence class [X]_{θ_k} induced by the equivalence relation θ_k is a sub-lattice of P(I).
Proof: Let U and V be any two elements in the class [X], i.e., U, V share the common prefix X. U ∪ V ⊇ X implies that U ∨ V ∈ [X], and U ∩ V ⊇ X implies that U ∧ V ∈ [X]. Therefore [X]_{θ_k} is a sublattice of P(I).
Each [X]_{θ_1} is itself a boolean lattice with its own set of atoms. For example, the atoms of [A]_{θ_1} are {AC, AD, AT, AW}, and the top and bottom elements are ⊤ = ACDTW and ⊥ = A. By the application of Lemmas 4 and 5, we can generate all the supports of the itemsets in each class (sub-lattice) by intersecting the tid-lists of atoms or any two subsets at the previous level. If there is enough main-memory to hold temporary tid-lists for each class, then we can solve each [X]_{θ_1} independently. Another interesting feature
of the equivalence classes is that the links between classes denote dependencies. That is to say, if we want
to prune an itemset if there exists at least one infrequent subset (see Lemma 1), then we have to process the
classes in a specic order. In particular we have to solve the classes from bottom to top, which corresponds
to a reverse lexicographic order, i.e., we process [W], then [T], followed by [D], then [C], and finally [A].
This guarantees that all subset information is available for pruning.
In practice we have found that the one level decomposition induced by θ_1 is sufficient. However, in some cases, a class may still be too large to be solved in main-memory. In this scenario, we apply recursive class decomposition. Let's assume that [A] is too large to fit in main-memory. Since [A] is itself a boolean lattice, it can be decomposed using θ_2. Figure 5b shows the equivalence class lattice induced by applying θ_2 on [A], where we collapse all itemsets with a common 2 length prefix into an equivalence class. The resulting set of classes are {[AC], [AD], [AT], [AW]}. Like before, each class can be solved independently, and we can solve them in reverse lexicographic order to enable subset pruning. The final set of independent classes obtained by applying θ_1 on P(I) and θ_2 on [A] is shown in Figure 5c. As before, the links show the
pruning dependencies that exist among the classes. Depending on the amount of main-memory available
we can recursively partition large classes into smaller ones, until each class is small enough to be solved
independently in main-memory.
4.3 Search for Frequent Itemsets
In this section we discuss efficient search strategies for enumerating the frequent itemsets within each class.
The actual pseudo-code and implementation details will be discussed in Section 5.
4.3.1 Bottom-Up Search
The bottom-up search is based on a recursive decomposition of each class into smaller classes induced by
the equivalence relation k . Figure 6 shows the decomposition of [A] 1 into smaller classes, and the resulting
lattice of equivalence classes. Also shown are the atoms within each class, from which all other elements of a
class can be determined. The equivalence class lattice can be traversed in either depth-rst or breadth-rst
manner. In this paper we will only show results for a breadth-rst traversal, i.e., we rst process the classes
followed by the classes f[ACT ]; [ACW ]; [ATW ]g, and nally [ACTW ]. For computing
the support of any itemset, we simply intersect the tid-lists of two of its subsets at the previous level. Since
the search is breadth-rst, this technique enumerates all frequent itemsets.
Figure 6: Recursive Decomposition of [A] into Smaller Classes (the resulting lattice of equivalence classes, with the atoms in each class)
4.3.2 Top-Down Search
The top-down approach starts with the top element of the lattice. Its support is determined by intersecting
the tid-lists of the atoms. This requires a k-way intersection if the top element is a k-itemset. The advantage
of this approach is that if the maximal element is fairly large then one can quickly identify it, and one can
avoid finding the support of all its subsets. The search starts with the top element. If it is frequent we are done. Otherwise, we check each subset at the next level. This process is repeated until we have identified all minimal infrequent itemsets. Figure 7 depicts the top-down search. This scheme enumerates only the
maximal frequent itemsets within each sub-lattice. However, the maximal elements of a sub-lattice may
not be globally maximal. It can thus generate some non-maximal itemsets. The search starts with the top
element ACDTW . Since it is infrequent we have to check each of its four length 4 subsets. Out of these only
ACTW is frequent, so we mark all its subsets as frequent as well. We then examine the unmarked length
3 subsets of the three infrequent subsets. The search stops when AD, the minimal infrequent itemset, has been identified.
Figure 7: Top-Down Search (gray circles represent infrequent itemsets, black circle the maximal frequent, and white circle the minimal infrequent set AD)
4.3.3 Hybrid Search
The hybrid scheme is based on the intuition that the greater the support of a frequent itemset, the more likely it is to be a part of a longer frequent itemset. There are two main steps in this approach. We begin with the set of atoms of the class sorted in descending order based on their support. The first, hybrid phase starts by intersecting the atoms one at a time, beginning with the atom with the highest support, generating longer and longer frequent itemsets. The process stops when an extension becomes infrequent. We then enter the second, bottom-up phase. The remaining atoms are combined with the atoms in the first set in the breadth-first fashion described above to generate all other frequent itemsets. Figure 8 illustrates this approach (just for this case, to better show the bottom-up phase, we have assumed that AD and ADW are also frequent). The search starts by reordering the 2-itemsets according to support, the most frequent first. We combine AC and AW to obtain the frequent itemset ACW. We extend it with the next pair AT, to get ACTW. Extension by AD fails. This concludes the hybrid phase, having found the maximal set ACTW. In the bottom-up phase, AD is combined with all previous pairs to ensure a complete search, producing the equivalence class [AD], which can be solved using a bottom-up search. This hybrid search strategy requires only 2-way intersections. It enumerates the "long" maximal frequent itemsets discovered in the hybrid phase, and also the non-maximal ones found in the bottom-up phase. Another modification of this scheme is to recursively substitute the second bottom-up search with a hybrid search, so that mainly the maximal frequent elements are enumerated.
Figure 8: Hybrid Search (the 2-itemset atoms are sorted on support; the hybrid phase is followed by the bottom-up phase)
4.4 Generating Smaller Classes: Maximal Clique Approach
In this section we show how to produce smaller sub-lattices or equivalence classes compared to the pure prefix-based approach, by using additional information. Smaller sub-lattices have fewer atoms and can save unnecessary intersections. For example, if there are k atoms, then we have to perform k(k−1)/2 intersections for the next level in the bottom-up approach. Fewer atoms thus lead to fewer intersections in the bottom-up search. Fewer atoms also reduce the number of intersections in the hybrid scheme, and lead to smaller maximum element size in the top-down search.
Definition 9 Let P be a set. A pseudo-equivalence relation ≡ on P is a binary relation such that for all X, Y ∈ P, the relation is: 1) reflexive: X ≡ X; and 2) symmetric: X ≡ Y implies Y ≡ X. The pseudo-equivalence relation partitions the set P into possibly overlapping subsets called pseudo-equivalence classes.
Definition 10 A graph consists of a set of elements called vertices, and a set of lines connecting pairs of vertices, called the edges. A graph is complete if there is an edge between all pairs of vertices. A complete subgraph of a graph is called a clique.
Figure 9: Maximal Cliques of the Association Graph; Prefix-Based and Maximal-Clique-Based Classes (frequent 2-itemsets: {12, 13, 14, 15, 16, 17, 18, 23, 25, 27, 28, 34, 35, 36, 45, 46, 56, 58, 68, 78})
Let F_k denote the set of frequent k-itemsets. Define a k-association graph, given as G_k = (V, E), with vertex set V = {X | X ∈ F_1} and edge set E = {(X, Y) | X, Y ∈ V and ∃ Z ∈ F_{k+1} such that X, Y ⊂ Z}. Let M_k denote the set of maximal cliques in G_k. Figure 9 shows the association graph G_1 for the example shown. Its maximal clique set is M_1 = {1235, 1258, 1278, 13456, 1568}.
Define a pseudo-equivalence relation φ_k on the lattice P(I) as follows: ∀X, Y ∈ P(I), X ≡_{φ_k} Y ⇔ ∃ C ∈ M_k such that X, Y ⊆ C and p(X, k) = p(Y, k). That is, two itemsets are related, i.e., they are in the same pseudo-class, if they are subsets of the same maximal clique and they share a common prefix of length k. We therefore call φ_k a maximal-clique-based pseudo-equivalence relation.
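The association graph and its maximal cliques are easy to prototype; the sketch below is an assumption of ours, using networkx rather than the modified Bierstone algorithm of Section 4.4.1, and builds G_1 from the frequent 2-itemsets before grouping the resulting cliques by prefix item:

    import networkx as nx
    from collections import defaultdict

    def clique_classes(frequent_2_itemsets):
        # edges of the 1-association graph are exactly the frequent 2-itemsets
        G = nx.Graph()
        G.add_edges_from(frequent_2_itemsets)
        classes = defaultdict(list)
        for clique in nx.find_cliques(G):          # maximal cliques (Bron-Kerbosch)
            clique = sorted(clique)
            for i, x in enumerate(clique[:-1]):
                # atoms of the pseudo-class of prefix x restricted to this clique;
                # classes from different cliques may overlap (pseudo-equivalence)
                classes[x].append([(x, y) for y in clique[i + 1:]])
        return classes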
Lemma 8 Each pseudo-class [X]_{φ_k} induced by the pseudo-equivalence relation φ_k is a sub-lattice of P(I).
Proof: Let U and V be any two elements in the class [X], i.e., U, V share the common prefix X and there exists a maximal clique C ∈ M_k such that U, V ⊆ C. Clearly, U ∪ V ⊆ C and U ∩ V ⊆ C. Furthermore, U ∪ V ⊇ X implies that U ∨ V ∈ [X], and U ∩ V ⊇ X implies that U ∧ V ∈ [X].
Thus, each pseudo-class [X]_{φ_1} is a boolean lattice, and the supports of all elements of the lattice can be generated by applying Lemmas 4 and 5 on the atoms, and using any of the three search strategies described above.
Lemma 9 Let Δ_k denote the set of pseudo-classes of the maximal-clique-based relation φ_k. Each pseudo-class [X]_{φ_k} ∈ Δ_k is a subset of some class [X]_{θ_k} induced by the prefix-based relation θ_k. Conversely, each [X]_{θ_k} is the union of the set of pseudo-classes of φ_k that share the prefix X.
Proof: Let N(X) denote the neighbors of X in the graph G_k. Then [X]_{θ_k} consists of elements with the prefix X, extended by all possible subsets of the neighbors of X in the graph G_k, i.e., by subsets of {X, N(X)}. Since any clique Y is a subset of {Y, N(Y)}, we have that [Y]_{φ_k} ⊆ [X]_{θ_k} whenever X is a prefix of Y. On the other hand it is easy to show that [X]_{θ_k} is the union of the pseudo-classes of φ_k sharing the prefix X.
This lemma states that each pseudo-class of φ_k is a refinement of (i.e., is smaller than) some class of θ_k. By using the relation φ_k instead of θ_k, we can therefore generate smaller sub-lattices. These sub-lattices require less memory, and can be processed independently using any of the three search strategies described above.
Figure 9 contrasts the classes (sub-lattices) generated by θ_1 and φ_1. It is apparent that φ_1 generates smaller classes. For example, the prefix class [1]_{θ_1} contains all the elements, while the maximal-clique-based classes for [1] are {1235, 1258, 1278, 13456, 1568}. Each of these classes is much smaller than the prefix-based class. The smaller classes of φ_k come at a cost, since the enumeration of maximal cliques can be computationally expensive. For general graphs the maximal clique decision problem is NP-Complete [10]. However, the k-association graph is usually sparse and the maximal cliques can be enumerated efficiently. As the edge density of the association graph increases the clique based approaches may suffer. φ_k should thus be used only when G_k is not too dense. Some of the factors affecting the edge density include decreasing support and increasing transaction size. The effect of these parameters is studied in the experimental section.
4.4.1 Maximal Clique Generation
We modified Bierstone's algorithm [22] for generating maximal cliques in the k-association graph. For a class [x], and y ∈ [x], y is said to cover the subset of [x] given by cov(y) = [y] ∩ [x]. For each class C, we first identify its covering set, given as {y ∈ C | cov(y) ≠ ∅, and cov(y) ⊄ cov(z), for any z ∈ C, z < y}. For example, consider the class [1], shown in Figure 9. For our example, cov(y) = [y] for each y, since each [y] ⊆ [1]. The covering set of [1] is given by the set {2, 3, 5}. The item 4 is not in the covering set since cov(4) = {5, 6} is a subset of cov(3) = {4, 5, 6}. Figure 10 shows the complete clique generation algorithm. Only the elements in the covering set need to be considered while generating maximal cliques for the current class (step 3). We recursively generate the maximal cliques for elements in the covering set for each class. Each maximal clique from the covering set is prefixed with the class identifier to obtain the maximal cliques for the current class (step 7). Before inserting the new clique, all duplicates or subsets are eliminated. If the new clique is a subset of any clique already in the maximal list, then it is not inserted. The conditions for the above test are shown in line 8.
1: for i = N down to 1 do
2:   [i].CliqList = ∅;
3:   for all x in the covering set of [i] do
4:     for all cliq ∈ [x].CliqList do
5:       M = cliq ∩ [i];
6:       if M ≠ ∅ then
7:         insert ({i} ∪ M) in [i].CliqList such that
8:           {i} ∪ M is not a subset of (or equal to) any clique already in [i].CliqList;
Figure 10: The Maximal Clique Generation Algorithm
Weak Maximal Cliques For some database parameters, the edge density of the k-association graph may be too high, resulting in large cliques with significant overlap among them. In these cases, not only does the clique generation take more time, but redundant frequent itemsets may also be discovered within each sublattice. To solve this problem we introduce the notion of weak maximality of cliques. Given any two cliques X and Y, we say that they are δ-related if |X ∩ Y| / |X ∪ Y| ≥ δ, i.e., the ratio of the common elements to the distinct elements of the cliques is greater than or equal to the threshold δ. A weak maximal clique, Z = X ∪ Y, is generated by collapsing the two cliques into one, provided they are δ-related. During clique generation only weak maximal cliques are generated for some user specified value of δ. Note that for δ = 1 we obtain regular maximal cliques, while for δ = 0 we obtain a single clique. Preliminary experiments indicate that using an appropriate value of δ, most of the overhead of redundant cliques can be avoided. We found δ = 0.5 to work well in practice.
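A minimal sketch of the collapsing step follows; the greedy one-pass merge order is our assumption, since the text does not spell out how δ-related cliques are paired:

    def weak_maximal_cliques(cliques, delta=0.5):
        # collapse delta-related cliques: |X & Y| / |X | Y| >= delta
        merged = []
        for c in sorted((set(c) for c in cliques), key=len, reverse=True):
            for m in merged:
                if len(c & m) / len(c | m) >= delta:
                    m |= c            # weak maximal clique Z = X union Y
                    break
            else:
                merged.append(c)
        return merged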
5 Algorithm Design and Implementation
In this section we describe several new algorithms for efficient enumeration of frequent itemsets. The first step involves the computation of the frequent items and 2-itemsets. The next step generates the sub-lattices (classes) by applying either the prefix-based equivalence relation θ_1, or the maximal-clique-based pseudo-equivalence relation φ_1, on the set of frequent 2-itemsets F_2. The sub-lattices are then processed one at a time in reverse lexicographic order in main-memory using either bottom-up, top-down or hybrid search. We will now describe these steps in some more detail.
5.1 Computing Frequent 1-Itemsets and 2-Itemsets
Most of the current association algorithms [2, 6, 20, 23, 26, 27] assume a horizontal database layout, such
as the one shown in Figure 1, consisting of a list of transactions, where each transaction has an identier
followed by a list of items in that transaction. In contrast our algorithms use the vertical database format,
such as the one shown in Figure 3, where we maintain a disk-based tid-list for each item. This enables us to
check support via simple tid-list intersections.
Computing F 1 Given the vertical tid-list database, all frequent items can be found in a single database
scan. For each item, we simply read its tid-list from disk into memory. We then scan the tid-list, incrementing
the item's support for each entry.
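In the vertical format this step reduces to measuring tid-list lengths; the sketch below assumes a hypothetical I/O layer that yields each item's tid-list as an iterable:

    def compute_f1(item_tidlists, min_sup):
        # single scan: an item is frequent when its tid-list has >= min_sup entries
        F1 = {}
        for item, tids in item_tidlists.items():
            tids = set(tids)               # read the tid-list from disk into memory
            if len(tids) >= min_sup:
                F1[item] = tids
        return F1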
Computing F_2 Let N be the number of frequent items, and A the average tid-list size in bytes. A naive implementation for computing the frequent 2-itemsets requires N(N−1)/2 tid-list intersections for all pairs of items. The amount of data read is A·N·(N−1)/2, which corresponds to around N/2 data scans. This is clearly inefficient. Instead of the naive method one could use two alternate solutions:
1. Use a preprocessing step to gather the counts of all 2-itemsets above a user specified lower bound.
Since this information is invariant, it has to be computed once, and the cost can be amortized over the
number of times the data is mined.
2. Perform a vertical to horizontal transformation on-the-fly. This can be done quite easily. For each item i, we scan its tid-list into memory. We insert item i in an array indexed by tid for each t ∈ L(i). For example, consider the tid-list for item A, shown in Figure 3. We read the first tid 1, and then insert A in the array indexed by transaction 1. We repeat this process for all other items and their tid-lists. Figure 11 shows how the inversion process works after the addition of each item and the complete horizontal database recovered from the vertical item tid-lists. This process entails only a trivial amount of overhead. In fact, Partition performs the opposite inversion from horizontal to vertical tid-list format on-the-fly, with very little cost. We also implemented appropriate memory management by recovering only a block of the database at a time, so that the recovered transactions fit in memory. Finally, we optimize the computation of F_2 by directly updating the counts of candidate pairs in an upper triangular 2D array.
The experiments reported in Section 7 use the horizontal recovery method for computing F_2. As we shall demonstrate, this inversion can be done quite efficiently.
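A compact Python rendering of this recovery-and-count step is sketched below; a dictionary of pair counts stands in for the upper triangular 2D array, and all names are our own:

    from collections import defaultdict
    from itertools import combinations

    def compute_f2(f1_tidlists, min_sup):
        horizontal = defaultdict(list)            # tid -> frequent items (on-the-fly inversion)
        for item in sorted(f1_tidlists):
            for tid in f1_tidlists[item]:
                horizontal[tid].append(item)
        counts = defaultdict(int)                 # sparse stand-in for the triangular array
        for items in horizontal.values():
            for pair in combinations(items, 2):   # items are already sorted per tid
                counts[pair] += 1
        return {pair: c for pair, c in counts.items() if c >= min_sup}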
Figure 11: Vertical-to-Horizontal Database Recovery (the array of recovered transactions after adding each of the items A, C, D, T and W in turn)
5.2 Search Implementation
Bottom-Up Search Figure 12 shows the pseudo-code for the bottom-up search. The input to the procedure
is a set of atoms of a sub-lattice S. Frequent itemsets are generated by intersecting the tid-lists of
all distinct pairs of atoms and checking the cardinality of the resulting tid-list. A recursive procedure call is
made with those itemsets found to be frequent at the current level. This process is repeated until all frequent
itemsets have been enumerated. In terms of memory management it is easy to see that we need memory to
store intermediate tid-lists for at most two consecutive levels. Once all the frequent itemsets for the next
level have been generated, the itemsets at the current level can be deleted.
Bottom-Up(S):
  for all atoms A_i ∈ S do
    T_i = ∅;
    for all atoms A_j ∈ S, with j > i do
      R = A_i ∪ A_j;  L(R) = L(A_i) ∩ L(A_j);
      if σ(R) ≥ min_sup then T_i = T_i ∪ {R};  F_|R| = F_|R| ∪ {R};
  for all non-empty T_i do Bottom-Up(T_i);
Figure 12: Pseudo-code for Bottom-Up Search
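The pseudo-code translates almost directly into Python over in-memory tidsets; the version below is our illustrative rendering (itemsets as sorted tuples, tid-lists as Python sets):

    def bottom_up(atoms, min_sup, frequent):
        # atoms: list of (itemset, tidset) pairs forming one sub-lattice;
        # every frequent itemset found is recorded in the `frequent` dict
        for i, (X, tX) in enumerate(atoms):
            new_atoms = []
            for Y, tY in atoms[i + 1:]:
                R = tuple(sorted(set(X) | set(Y)))
                tR = tX & tY                       # tid-list intersection
                if len(tR) >= min_sup:
                    frequent[R] = len(tR)
                    new_atoms.append((R, tR))
            if new_atoms:
                bottom_up(new_atoms, min_sup, frequent)   # recurse on the new class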
Since each sub-lattice is processed in reverse lexicographic order all subset information is available for
itemset pruning. For fast subset checking the frequent itemsets can be stored in a hash table. However, in
our experiments on synthetic data we found pruning to be of little or no benefit. This is mainly because of Lemma 6, which says that the tid-list intersection is especially efficient for large itemsets. Nevertheless, there
may be databases where pruning is crucial for performance, and we can support pruning for those datasets.
Top-Down Search The code for top-down search is given in Figure 13. The search begins with the
maximum element R of the sub-lattice S. A check is made to see if the element is already known to be
frequent. If not we perform a k-way intersection to determine its support. If it is frequent then we are done.
Otherwise, we recursively check the support of each of its (k−1)-subsets. We also maintain a hash table
HT of itemsets known to be infrequent from previous recursive calls to avoid processing sub-lattices that
have already been examined. In terms of memory management the top-down approach requires that only
the tid-lists of the atoms of a class be in memory.
Top-Down(R):
  if R ∉ F_|R| then
    compute L(R) and σ(R) by intersecting the tid-lists of the atoms of R;
    if σ(R) ≥ min_sup then F_|R| = F_|R| ∪ {R};
    else
      for all Y ⊂ R, with |Y| = |R| − 1 do
        if Y ∉ HT then
          Top-Down(Y);
          if Y is infrequent then HT = HT ∪ {Y};
Figure 13: Pseudo-code for Top-Down Search
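An illustrative Python version follows (our own simplification: atoms are single items, itemsets are frozensets, the set `found` plays the role of F, and `infrequent` plays the role of HT):

    def top_down(itemset, atom_tids, min_sup, found, infrequent):
        if itemset in infrequent or any(itemset <= f for f in found):
            return                                  # already known (in)frequent
        tids = set.intersection(*(atom_tids[i] for i in itemset))   # k-way intersection
        if len(tids) >= min_sup:
            found.add(itemset)                      # maximal along this search path
        else:
            infrequent.add(itemset)
            for item in itemset:
                sub = itemset - {item}
                if len(sub) >= 2:
                    top_down(sub, atom_tids, min_sup, found, infrequent)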
Hybrid Search Figure 14 shows the pseudo-code for the hybrid search. The input consists of the atom
set S sorted in descending order of support. The maximal phase begins by intersecting atoms one at a time
until no frequent extension is possible. All the atoms involved in this phase are stored in the set S_1. The remaining atoms S_2 = S − S_1 enter the bottom-up phase. For each atom in S_2, we intersect it with each atom
in S 1 . The frequent itemsets form the atoms of a new sub-lattice and are solved using the bottom-up search.
This process is then repeated for the other atoms of S 2 . The maximal phase requires main-memory only for
the atoms, while the bottom-up phase requires memory for at most two consecutive levels.
Hybrid(S sorted on support):
  R = A_1;  S_1 = {A_1};
  for all A_i ∈ S, i > 1 do              /* Maximal Phase */
    R' = R ∪ A_i;  L(R') = L(R) ∩ L(A_i);
    if σ(R') ≥ min_sup then R = R';  S_1 = S_1 ∪ {A_i};
    else break;
  S_2 = S − S_1;
  for all A_i ∈ S_2 do                   /* Bottom-Up Phase */
    N_i = {R = A_i ∪ A_j, with A_j ∈ S_1 : σ(R) ≥ min_sup};
    Bottom-Up(N_i);  S_1 = S_1 ∪ {A_i};
Figure 14: Pseudo-code for Hybrid Search
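A runnable sketch of the same two phases, reusing the bottom_up function given after Figure 12; the data layout and the growth of S_1 after each bottom-up step follow our reading of the description above:

    def hybrid(atoms, min_sup, frequent):
        atoms = sorted(atoms, key=lambda a: len(a[1]), reverse=True)  # sort on support
        cur, cur_t = atoms[0]
        S1, pos = [atoms[0]], len(atoms)
        for i, (X, tX) in enumerate(atoms[1:], start=1):  # maximal phase
            R, tR = tuple(sorted(set(cur) | set(X))), cur_t & tX
            if len(tR) < min_sup:
                pos = i                                   # atoms[pos:] form S2
                break
            cur, cur_t = R, tR
            frequent[R] = len(tR)
            S1.append((X, tX))
        for X, tX in atoms[pos:]:                         # bottom-up phase
            seeds = []
            for Y, tY in S1:
                R, tR = tuple(sorted(set(X) | set(Y))), tX & tY
                if len(tR) >= min_sup:
                    frequent[R] = len(tR)
                    seeds.append((R, tR))
            if seeds:
                bottom_up(seeds, min_sup, frequent)
            S1.append((X, tX))   # later atoms combine with all previous pairs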
5.3 Number of Database Scans
Before processing each sub-lattice from the initial decomposition all the relevant item tid-lists are scanned
into memory. The tid-lists for the atoms (frequent 2-itemsets) of each initial sub-lattice are constructed by
intersecting the item tid-lists. All the other frequent itemsets are enumerated by intersecting the tid-lists
of the atoms using the different search procedures. If all the initial classes have disjoint sets of items, then each item's tid-list is scanned from disk only once during the entire frequent itemset enumeration process over all sub-lattices. In the general case there will be some degree of overlap of items among the different
sub-lattices. However only the database portion corresponding to the frequent items will need to be scanned,
which can be a lot smaller than the entire database. Furthermore, sub-lattices sharing many common items
can be processed in a batch mode to minimize disk access. Thus we claim that our algorithms will usually
require a small number of database scans after computing F 2 .
5.4 New Algorithms
The different algorithms that we propose are listed below. These algorithms differ in the search strategy used for enumeration and in the relation used for generating independent sub-lattices.
1. Eclat: It uses the prefix-based equivalence relation θ_1 along with bottom-up search. It enumerates all frequent itemsets.
2. MaxEclat: It uses the prefix-based equivalence relation θ_1 along with hybrid search. It enumerates the "long" maximal frequent itemsets, and some non-maximal ones.
3. Clique: It uses the maximal-clique-based pseudo-equivalence relation φ_1 along with bottom-up search. It enumerates all frequent itemsets.
4. MaxClique: It uses the maximal-clique-based pseudo-equivalence relation φ_1 along with hybrid search. It enumerates the "long" maximal frequent itemsets, and some non-maximal ones.
5. TopDown: It uses the maximal-clique-based pseudo-equivalence relation φ_1 along with top-down search. It enumerates only the maximal frequent itemsets. Note that for top-down search, using the larger sub-lattices generated by θ_1 is not likely to be efficient.
6. AprClique: It uses the maximal-clique-based pseudo-equivalence relation φ_1. However, unlike the algorithms described above, it uses the horizontal data layout. It has two main steps:
i) All possible subsets of the maximum element in each sub-lattice are generated and inserted in hash trees [2], avoiding duplicates. There is one hash tree for each length, i.e., a k-subset is inserted in the tree C_k. An internal node of the hash tree at depth d contains a hash table whose cells point to nodes
for all sub-lattices S_i induced by φ_1 do
  R = the maximum element of S_i;
  for all k > 2 and k ≤ |R| do
    Insert each k-subset of R in C_k;
for all transactions t ∈ D do
  for all k-subsets s of t, with k > 2 and k ≤ |t| do
    if s ∈ C_k then s.count++;
Set of all frequent itemsets = ⋃_k {s ∈ C_k | s.count ≥ min_sup};
Figure 15: Pseudo-code for AprClique Algorithm
at depth d + 1. All the itemsets are stored in the leaves. The insertion procedure starts at the root,
and hashing on successive items, inserts a candidate in a leaf.
ii) The support counting step is similar to the Apriori approach. For each transaction in the database
D, we form all possible k-subsets. We then search that subset in C k and update the count if it is
found.
The database is thus scanned only once, and all frequent itemsets are generated. The pseudo-code is
shown in Figure 15.
6 The Apriori and Partition Algorithms
We now discuss Apriori and Partition in some more detail, since we will experimentally compare our new
algorithms against them.
Apriori Algorithm Apriori [2] is an iterative algorithm that counts itemsets of a specific length in a given
database pass. The process starts by scanning all transactions in the database and computing the frequent
items. Next, a set of potentially frequent candidate 2-itemsets is formed from the frequent items. Another
database scan is made to obtain their supports. The frequent 2-itemsets are retained for the next pass, and
the process is repeated until all frequent itemsets have been enumerated. The complete algorithm is shown in Figure 16. We refer the reader to [2] for additional details.
There are three main steps in the algorithm:
1. Generate candidates of length k from the frequent (k−1) length itemsets, by a self join on F_{k−1}. For example, if F_2 = {AB, AC, AD, AE, BC, BD, BE}, then C_3 = {ABC, ABD, ABE, ACD, ACE, ADE, BCD, BCE, BDE}.
2. Prune any candidate with at least one infrequent subset. As an example, ACD will be pruned since CD is not frequent. After pruning we get a new set C_3 = {ABC, ABD, ABE} (a code sketch of the join and prune steps follows this list).
3. Scan all transactions to obtain candidate supports. The candidates are stored in a hash tree to facilitate
fast support counting (note: the second iteration is optimized by using an array to count candidate
pairs of items, instead of storing them in a hash tree).
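The self-join and pruning steps can be sketched as follows (a hedged illustration with our own names; itemsets are lexicographically sorted tuples):

    def apriori_gen(Fk_minus_1):
        prev = set(Fk_minus_1)
        items = sorted(prev)
        candidates = []
        for i, X in enumerate(items):
            for Y in items[i + 1:]:
                if X[:-1] != Y[:-1]:
                    break                 # itemsets with a common (k-2)-prefix are contiguous
                cand = X + (Y[-1],)       # self-join of X and Y
                # prune candidates having an infrequent (k-1)-subset
                if all(cand[:j] + cand[j + 1:] in prev for j in range(len(cand))):
                    candidates.append(cand)
        return candidates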
F_1 = {frequent 1-itemsets};
for (k = 2; F_{k−1} ≠ ∅; k++)
  C_k = candidate k-itemsets generated from F_{k−1} (self-join and prune);
  for all transactions t ∈ D
    for all k-subsets s of t
      if s ∈ C_k then s.count++;
  F_k = {c ∈ C_k | c.count ≥ min_sup};
Set of all frequent itemsets = ⋃_k F_k;
Figure 16: The Apriori Algorithm
Partition Algorithm Partition [26] logically divides the horizontal database into a number of non-overlapping
partitions. Each partition is read, and vertical tid-lists are formed for each item, i.e., list of all
tids where the item appears. Then all locally frequent itemsets are generated via tid-list intersections. All
locally frequent itemsets are merged and a second pass is made through all the partitions. The database is
again converted to the vertical layout and the global counts of all the chosen itemsets are obtained. The size
of a partition is chosen so that it can be accommodated in main-memory. Partition thus makes only two
database scans. The key observation used is that a globally frequent itemset must be locally frequent in at
least one partition. Thus all frequent itemsets are guaranteed to be found.
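A schematic two-pass rendering of Partition in Python (the chunking, the local miner and all names below are placeholders; any in-memory method such as the bottom_up sketch above could serve as the local miner):

    def partition_mine(partitions, min_sup_ratio, mine_local):
        candidates = set()
        for chunk in partitions:                              # pass 1: local mining
            local_min = max(1, int(min_sup_ratio * len(chunk)))
            candidates |= set(mine_local(chunk, local_min))   # locally frequent itemsets
        counts = dict.fromkeys(candidates, 0)
        for chunk in partitions:                              # pass 2: global counting
            vertical = {}                                     # rebuild tid-lists per chunk
            for tid, items in chunk:
                for item in items:
                    vertical.setdefault(item, set()).add(tid)
            for c in candidates:
                if all(i in vertical for i in c):
                    counts[c] += len(set.intersection(*(vertical[i] for i in c)))
        total = sum(len(chunk) for chunk in partitions)
        return {c: n for c, n in counts.items() if n >= min_sup_ratio * total}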
7 Experimental Results
Our experiments used a 200MHz Sun Ultra-2 workstation with 384MB main memory. We used different synthetic databases that have been used as benchmark databases for many association rules algorithms
[1, 2, 6, 15, 19, 20, 23, 26, 30]. We wrote our own dataset generator, using the procedure described in [2].
Our generator produces longer frequent itemsets for the same parameters (code is available by sending email
to the author).
These datasets mimic the transactions in a retailing environment, where people tend to buy sets of items
together, the so called potential maximal frequent set. The size of the maximal elements is clustered around
a mean, with a few long itemsets. A transaction may contain one or more of such frequent sets. The
transaction size is also clustered around a mean, but a few of them may contain many items.
Let D denote the number of transactions, T the average transaction size, I the size of a maximal
potentially frequent itemset, L the number of maximal potentially frequent itemsets, and N the number of
items. The data is generated using the following procedure. We first generate L maximal itemsets of average size I, by choosing from the N items. We next generate D transactions of average size T by choosing from the L maximal itemsets. We refer the reader to [4] for more detail on the database generation. In our experiments the values of N and L are held fixed, and the experiments are conducted on databases with different values of D, T, and I. The database parameters are shown in Table 1.
Table 1: Database Parameter Settings (columns: Database, T, I, D, Size)
Figure 17 shows the number of frequent itemsets of different sizes for the databases used in our experiments. The length of the longest frequent itemset and the total number of frequent itemsets for each database are shown in Table 2. For example, T30.I16.D400K has a total of 13480771 frequent itemsets of various lengths. The longest frequent itemset is of size 22 at 0.25% support!
Database | Longest Freq. Itemset | Number Freq. Itemsets
T30.I16.D400K (0.5% minsup) | 22 | 13480771
Table 2: Maximum Size and Number of Frequent Itemsets (0.25% Support)
Figure 17: Number of Frequent Itemsets of Different Sizes (number of frequent itemsets vs. frequent itemset size; min support 0.25%)
Comparative Performance In Figure 18 and Figure 19 we compare our new algorithms against Apriori
and Partition (with 3 and 10 database partitions) for decreasing values of minimum support on the different
databases. As the support decreases, the size and the number of frequent itemsets increases. Apriori thus
has to make multiple passes over the database (22 passes for T30:I16:D400K), and performs poorly.
Partition performs worse than Apriori for high support, since the database is scanned only a few times
at these points. The overheads associated with inverting the database on-the-fly dominate in Partition.
However, as the support is lowered, Partition wins out over Apriori, since it only scans the database twice.
These results are in agreement with previous experiments comparing these two algorithms [26]. One problem
with Partition is that as the number of partitions increases, the number of locally frequent itemsets, which
are not globally frequent, increases (this can be reduced somewhat by randomizing the partition selection).
Partition can thus spend a lot of time in performing these redundant intersections. For example, compare
the time for Partition3 and Partition10 on all the datasets. Partition10 typically takes a factor of 1.5 to 2
times more time than Partition3. For T30.I16 (at 1% support) it takes 13 times more! Figure 20, which shows the number of tid-list intersections for different algorithms on different datasets, makes it clear that Partition10 is performing four to five times more intersections than Partition3.
AprClique scans the database only once, and out-performs Apriori and Partition for higher support values
on the T10 and T20 datasets. AprClique is very sensitive to the quality of maximal cliques (sub-lattices)
that are generated. For small support, or with increasing transaction size T for fixed I, the edge density of
the k-association graph increases, consequently increasing the size of the maximal cliques. AprClique doesn't
Figure 18: Total Execution Time (time in seconds vs. minimum support on the different databases)
Figure 19: Total Execution Time (time in seconds vs. minimum support)
perform well under these conditions. TopDown usually performs better than AprClique, but shares the same
characteristics as AprClique, i.e., it is better than both Apriori and Partition for higher support values,
except for the T30 and T40 datasets. At lower support the maximum clique size, in the worst case, can
become as large as the number of frequent items, forcing TopDown to perform too many k-way intersections
to determine the minimal infrequent sets.
Eclat performs significantly better than all these algorithms in all cases. It usually out-performs Apriori
by more than an order of magnitude, Partition3 by a factor of two, and Partition10 by a factor of four.
Eclat makes only a few database scans, requires no hash trees, and uses only simple intersection operations
to generate frequent itemsets. Further, Eclat is able to handle lower support values in dense datasets (e.g.
T20:I12 and T40:I8), where both Apriori and Partition run out of virtual memory at 0.25% support.
We now look at the comparison between the remaining four methods, which are the main contributions
of this work, i.e., between Eclat, MaxEclat, Clique and MaxClique. Clique uses the maximal-clique-based
decomposition, which generates smaller classes with fewer candidates. However, it performs only
slightly better than Eclat. Clique is usually 5-10% better than Eclat, since it cuts down on the number of
tidlist intersections, as shown in Figure 20. Clique performs anywhere from 2% to 46% fewer intersections
than Eclat. The difference between these methods is not substantial, since the savings in the number of
intersections doesn't translate into a similar reduction in execution time.
Figure 20: Number of Tid-list Intersections (0.25% Support)
The graphs for MaxEclat and MaxClique indicate that the reduction in search space by performing the hybrid search provides significant gains. Both the maximal-clique-based strategies outperform their prefix-based counterparts. MaxClique is always better than MaxEclat due to the smaller classes. The biggest
difference between these methods is observed for T20.I12, where MaxClique is twice as fast as MaxEclat.
An interesting result is that for T40:I8 we could not run the clique-based methods on 0.25% support, while
the prefix-based methods, Eclat and MaxEclat, were able to handle this very low support value. The reason
why clique-based approaches fail is that whenever the edge density of the association graph increases, the number and size of the cliques becomes large and there is a significant overlap among different cliques. In
such cases the clique based schemes start to suffer.
The best scheme for all the databases we considered is MaxClique since it benefits from the smaller sublattices
and the hybrid search scheme. Figure 20 gives the number of intersections performed by MaxClique
compared against other methods. As one can see, MaxClique cuts down the candidate search space drastically,
by anywhere from a factor of 3 (for T20:I4) to 35 (for T40:I8) over Eclat. It performs the fewest intersections
of any method. In terms of raw performance MaxClique outperforms Apriori by a factor of 20-30, Partition10
by a factor of 5, and Eclat by as much as a factor of 10 on T20:I12. Furthermore, it is the only method
that was able to handle support values of 0.5% on T30:I16 (see Figure 19) where the longest frequent
itemset was of size 22. All bottom-up search methods would have to enumerate at least 2^22 subsets, while
MaxClique only performed 197601 intersections, even though there were 13480771 total frequent itemsets
(see Table 2). MaxEclat quickly identifies the 22 sized long itemset and also other long itemsets, and thus avoids enumerating all subsets. At 0.75% support MaxClique takes 69 seconds while Apriori takes 22963
seconds, a factor of 332, while Partition10 ran out of virtual memory.
To summarize, there are several reasons why the last four algorithms outperform previous approaches:
1. They use only simple join operations on tid-lists. As the length of a frequent itemset increases, the size of its tid-list decreases, resulting in very fast joins.
2. No complicated hash-tree structure is used, and no overhead of generating and searching for subsets of each transaction is incurred. These structures typically have very poor locality [24]. On the other hand the new algorithms have excellent locality, since a join requires only a linear scan of two lists.
3. As the minimum support is lowered, more and larger frequent itemsets are found. Apriori makes a complete dataset scan for each iteration. Eclat and the other three methods, on the other hand, restrict themselves to usually only a few scans, cutting down the I/O costs.
4. The hybrid search approaches are successful by quickly identifying long itemsets early, and are able to avoid enumerating all subsets. For long itemsets of size 19 or 22, only the hybrid search methods are able to run, while other methods run out of virtual memory.
Figure 21: Scale-up Experiments: a) Number of Transactions (relative time vs. number of transactions, T10.I4, min support 0.25%), b) Transaction Size (relative time vs. transaction size, absolute min support 250)
Scalability The goal of the experiments below is to measure how the new algorithms perform as we increase
the number of transactions and average transaction size.
Figure 21a shows how the different algorithms scale up as the number of transactions increases from 100,000 to 5 million. The times are normalized against the execution time for MaxClique on 100,000 transactions. A minimum support value of 0.25% was used. The number of partitions for Partition was varied from 1 to 50. While all the algorithms scale linearly, our new algorithms continue to out-perform Apriori and Partition.
Figure 21b shows how the different algorithms scale with increasing transaction size. The times are normalized against the execution time for MaxClique on the smallest transaction size. Instead of a percentage, we used an absolute support of 250. The physical size of the database was kept roughly the same by keeping a constant T·D value. The goal of this setup is to measure the effect of increasing transaction size while keeping other parameters constant. We can see that there is a gradual increase in execution time for all algorithms with increasing transaction size. However the new algorithms again outperform Apriori and Partition. As the transaction size increases, the number of cliques increases, and the clique based algorithms start performing worse than the prefix-based algorithms.
Figure 22: Eclat Memory Usage
Memory Usage Figure 22 shows the total main-memory used for the tid-lists in Eclat as the computation
of frequent itemsets progresses on T20.I6.D100K. The mean memory usage is less than 0.018MB, roughly
2% of the total database size. The figure only shows the cases where the memory usage was more than twice
the mean. The peaks in the graph are usually due to the initial construction of all the (2-itemset) atom
tid-lists within each sub-lattice. This figure confirms that the sub-lattices produced by θ_1 and φ_1 are small enough, so that all intermediate tid-lists for a class can be kept in main-memory.
8 Conclusions
In this paper we presented new algorithms for efficient enumeration of frequent itemsets. We presented a
lattice-theoretic approach to partition the frequent itemset search space into small, independent sub-spaces
using either prefix-based or maximal-clique-based methods. Each sub-problem can be solved in main-memory
using bottom-up, top-down, or a hybrid search procedure, and the entire process usually takes only a few
database scans.
Experimental evaluation showed that the maximal-clique-based decomposition is more precise and leads
to smaller classes. When this is combined with the hybrid search, we obtain the best algorithm MaxClique,
which outperforms current approaches by more than an order of magnitude. We further showed that the
new algorithms scale linearly in the number of transactions.
References
Mining association rules between sets of items in large databases.
Fast discovery of association rules.
Parallel mining of association rules.
Fast algorithms for mining association rules.
Dynamic itemset counting and implication rules for market basket data.
A fast distributed algorithm for mining association rules.
Introduction to Lattices and Order.
Arboricity and bipartite subgraph listing algorithms.
Computers and Intractability: A Guide to the Theory of NP- Completeness
Data mining
Discovering all the most speci
Scalable parallel data mining for association rules.
A perspective on databases and data mining.
Generation of maximum independent sets of a bipartite graph and maximum cliques of a circular-arc graph
Interpretation on graphs and complexity characteristics of a search for speci
Some zarankiewicz numbers.
A new algorithm for discovering the maximum frequent set.
Mining association rules: Anti-skew algorithms
Fast sequential and parallel algorithms for association rule mining: A comparison.
Corrections to bierstone's algorithm for generating cliques.
Memory placement techniques for parallel association mining.
Integrating association rule mining with databases: alternatives and implications.
Sampling large databases for association rules.
Evaluation of sampling for data mining of association rules.
New algorithms for fast discovery of association rules.
Parallel algorithms for fast discovery of association rules.
Bayesian networks (BN) have emerged as some of the most successful tools for medical
diagnostics and many have been deployed in real medical environments or implemented
in off-the-shelf diagnostic software [Spi87, Sho93, WSB97]. Constructing
large BNs for such applications is in many ways similar to the knowledge-engineering
process employed in the creation of expert systems (ES). However, building a BN is
usually more difficult than building an ES, because a BN contains a lot of additional
quantitative (numeric) information which is essential to its proper operation. The
statistics for filling in this numeric information are not always readily available and
often the designer of the system has to make use of indirect statistics instead. For
example, it is not always clear how combinations of diseases determine the outcome
of a particular diagnostic test - what is usually available are only the positive and
negative predictive likelihoods of that test for each individual disease, regardless of
what other diseases might be present.
This question, estimating conditional probability tables for the joint effects
of combinations of diseases, requires the use of partial statistics. The problem is of
necessity ill-posed - more numbers have to be estimated than are available. In order
to find a reasonable solution, some constraints have to be introduced, which usually
depend on the problem domain. We propose a solution for the situation outlined
above and discuss its advantages and limitations.
Another related problem in the area of constructing BNs for medical diagnostics
is the incorporation of partial knowledge about the way the network should behave. In
particular, overconfident diagnoses can usually be dealt with by introducing additional
nodes, which represent hidden pathological states. The behavior of these nodes can
sometimes be explained in terms of Boolean functions such as AND and OR, or
compositions thereof. Expressing this knowledge in terms of CPTs for the nodes,
however, is not straightforward. We propose one solution based on the use of generic
logical nodes. We also propose an alternative technique that does not require the
inclusion of new intermediate nodes.
Section 2 reviews briefly Bayesian networks and the process of their design.
Section 3 discusses the problem of constructing networks from partial statistics and
proposes solutions for the cases of finding the impact of several diseases on a single
lab test. Section 4 proposes two methods for dealing with overconfident diagnosis,
and section 5 summarizes the proposed solutions and concludes the paper.
2 Constructing Bayesian Networks
A Bayesian network is an efficient factorization of the joint probability distribution (JPD) over a set of variables X_1, X_2, ..., X_n; each variable X_i has a domain of possible values. The JPD specifies a probability for each possible combination of values for all variables. If the JPD is known, inference can be performed by computing posterior probabilities. For example, if three variables p, q, and r exist in a domain of interest, we can compute the query

P(p | q) = Σ_r P(p, q, r) / Σ_{p', r} P(p', q, r)     (1)
The summands in the numerator and denominator are all probabilities of atomic
events and can be read off from the representation of the JPD. One possible way to
represent the JPD is by means of a multidimensional joint probability table (JPT),
the key into which is the combination of values for the variables. Reasoning with joint
probability tables, however, is exponential in both space and time. The size of the
JPT is exponential in the number of variables. The number of atomic probabilities
in the numerator and denominator of equation (1) is exponential in the number of
variables not mentioned in the query, and just performing the summation over them
takes exponential time. If probabilistic reasoning is to be used in a practical system,
an efficient form of representation and reasoning is necessary. Bayesian networks and
the associated reasoning algorithms serve exactly this purpose.
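As an illustration of why reasoning directly with a joint probability table does not scale, the following sketch answers the query of equation (1) by brute-force summation over a JPT stored as a dictionary. This is our own illustration (the variable names and the uniform table are arbitrary); both the table size and the summations grow exponentially with the number of variables.

```python
from itertools import product

def query(jpt, variables, target, evidence):
    """Compute P(target | evidence) by summing atomic probabilities of a JPT.

    jpt: dict mapping full value-assignment tuples to probabilities.
    variables: ordered variable names matching the tuple positions.
    target, evidence: dicts of {variable: value}.
    """
    def consistent(assignment, constraints):
        return all(assignment[variables.index(v)] == val
                   for v, val in constraints.items())

    numerator = denominator = 0.0
    for assignment, p in jpt.items():          # exponentially many terms
        if consistent(assignment, evidence):
            denominator += p
            if consistent(assignment, target):
                numerator += p
    return numerator / denominator

# Three binary variables p, q, r: 2^3 = 8 atomic probabilities in the table.
names = ["p", "q", "r"]
jpt = {a: 1.0 / 8 for a in product([False, True], repeat=3)}  # uniform, for illustration
print(query(jpt, names, target={"p": True}, evidence={"q": True}))  # 0.5
```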
The key to efficient representation of JPTs is in reducing the number of probabilities
that are stored. This can be accomplished by introducing simplifying assumptions
about the underlying JPD. One such assumption is that of conditional
independence. Two variables A and B are said to be conditionally independent with
respect to a third variable C if P(A | C, B) = P(A | C), i.e., once we know the value of C, further knowledge of the value of B does not change our belief in A.
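Stated operationally, the assumption can be checked against a full JPD by comparing P(A | C, B) with P(A | C) for every value combination; a sketch for binary variables, reusing the dictionary JPT representation from the snippet above, is given below (our own illustration, assuming all conditioning events have nonzero probability).

```python
def conditionally_independent(jpt, variables, a, b, c, tol=1e-9):
    """Check P(A | C, B) = P(A | C) for all values of B and C (binary variables)."""
    def prob(constraints):
        return sum(p for assignment, p in jpt.items()
                   if all(assignment[variables.index(v)] == val
                          for v, val in constraints.items()))

    for cv in (False, True):
        p_a_given_c = prob({a: True, c: cv}) / prob({c: cv})
        for bv in (False, True):
            p_a_given_cb = prob({a: True, c: cv, b: bv}) / prob({c: cv, b: bv})
            if abs(p_a_given_cb - p_a_given_c) > tol:
                return False
    return True
```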
One convenient way of representing assumptions of conditional independence is
by means of directed acyclic graphs (DAG), augmented by local conditional probability
tables (LCPTs). Such graphs are called Bayesian or probabilistic networks. Three
small networks are shown in Fig.1, each of them representing a different independence
assumption.
After identifying the variables of interest in the problem domain, the next step
in building a Bayesian network is to identify which of those variables are conditionally
independent. It is possible to create the DAG of a Bayesian net from lists of
statements of independence; in practice, however, a much simpler procedure is used.
The DAG of a Bayesian net is closely related to the DAG of a set of propositional rules like those employed in rule-based expert systems.

Figure 1: Three graphical representations of conditional independence. In (a), X_1 is independent of X_3 given X_2. In (b), X_2 and X_3 are marginally independent, but conditionally dependent given X_1. In (c), it is just the opposite - X_2 and X_3 are marginally dependent, but conditionally independent given X_1.
The next stage in constructing a Bayesian net is to specify the numbers in the
LCPTs of the graph. This task is much harder than constructing the DAG. Ideally,
the designer of the net has access to statistical estimates of the probabilities in the
LCPT. Some medical studies, for example [DF79], contain all the numbers required for
the construction of the LCPT. Most often, however, such estimates are not available.
The following sections discuss several ways to deal with this problem.
3 Constructing complete CPTs from partial statistics
A major issue in building successful Bayesian net applications is how to determine
the entries in the local CPT tables for the nodes in the net. Ideally, those numbers
can be estimated statistically or obtained via expert judgment. Most often, however,
the numbers that are known are not in the form that is required for the LCPTs.
For example, two diseases D_1 and D_2 might cause the same finding F to appear. Usually, we can obtain the sensitivities P(F | D_1) and P(F | D_2) for the two diseases; for the LCPT, however, we need the joint conditional probabilities P(F | D_1, D_2), P(F | D_1, ¬D_2), etc.
Determining those joint conditional probabilities from the simple conditionals is
an ill-posed problem. If n diseases are influencing a finding, the LCPT of the finding
node has 2 n entries. At the same time, there are only 2n sensitivities and specificities
for the n diseases. Inferring 2 n numbers from 2n givens is an ill-posed problem. Most
commercial belief reasoning packages would require all of these 2 n point probabilities.
The only way to obtain them is to introduce additional constraints - bias towards
certain values for the entries, more assumptions of independence, or ad hoc techniques.
These constraints depend on the particular type of interactions between the random
variables at hand.
As noted above, the problem of LCPT compilation arises when a single finding
is caused by two or more diseases. The LCPT of the finding node has to contain probabilities
that the finding will be present, for all possible combinations of the parent
nodes (diseases). For n parents of a finding node, 2^n numbers have to be specified. What we usually have instead is only sensitivities and specificities of the finding. The sensitivity of the finding, P(F | D), is a measure of the prevalence of the finding F among those people that have the disease D. The specificity, P(¬F | ¬D), shows how the absence of the finding indicates the absence of the disease. For two findings with
equal sensitivity, their respective specificities will determine how unambiguously the
presence of the finding indicates the presence of the disease. The finding with higher
specificity is a better indicator for the disease than the one with lower specificity.
Sensitivity and specificity data can be obtained from prevalence statistics.
There are several reasons why sensitivity and specificity numbers are the easiest
to obtain. They are modular and represent objective relationship between a finding
and a disease regardless of which other diseases are modeled. The entries in the
LCPT, on the other hand, depend on all diseases that the designer of the network
has decided to model. In addition, physicians relate easily to this type of information
and can provide fairly good estimates. Pearl [Pea88] (pp. 15 and 33) has argued that
those numbers are the ones most likely to be available from experiential knowledge
and probably are represented explicitly in cognitive structures - for a particular
disease, physicians seem to keep in memory a frame describing its characteristics,
including the sensitivity and specificity estimates for various symptoms and findings.
For n parent nodes representing diseases, there are only 2n sensitivities and
specificities available. Determining all 2^n entries in the LCPT is an ill-posed problem.
The mapping from diseases to findings, however, has a specific structure, which can
be exploited to turn the problem from under-determined to well-posed. It can be
assumed that diseases act independently to produce a finding F , and the net effect of
all diseases can be combined to produce the cumulative probability that the finding
will be present. The finding can be caused by any disease, similarly to a logical OR gate. The relationship, though, is not deterministic - each of the diseases D_i acting alone can cause the finding to appear with some probability p_i.
This nondeterministic OR gate is known under the name noisy-OR and has
been studied extensively [Sri93, HB94, RD98]. The probabilities p_i are also called link probabilities of the noisy-OR gate. Then, any entry in the LCPT can be obtained as

P(F | H) = 1 - Π_{i ∈ H+} (1 - p_i)     (2)

where H is a particular truth assignment to the parents of F, and H+ is the subset of those nodes in H that are set to true. If H+ is empty, the product term
is assumed to be one and the resulting entry in the LCPT is zero. This result is
rarely true of any real problem domain - even when no diseases are known to be
present, there might be other, unmodeled diseases that can cause the finding. In
addition, there is always measurement error, and a certain number of false positives in the detection of the finding is inevitable.
In order to represent the effect of unmodeled diseases and measurement errors, a leak term p_L is usually employed in a noisy-OR model. Conceptually, all unmodeled causes for the finding are represented by a disease node L that is always present. The leak probability p_L can be used directly in equation (2) just like any other link probability; L is always in H+.
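Given the link probabilities and the leak, equation (2) determines every entry of the LCPT; the short sketch below enumerates the table (our own illustration, with arbitrary example numbers).

```python
from itertools import product

def noisy_or_lcpt(link_probs, leak):
    """P(F = true | H) for every truth assignment H of the parent diseases,
    following the leaky noisy-OR rule: the leak acts as an always-present cause."""
    lcpt = {}
    for assignment in product((False, True), repeat=len(link_probs)):
        prob_f_absent = 1.0 - leak                # contribution of the leak
        for p_i, present in zip(link_probs, assignment):
            if present:
                prob_f_absent *= 1.0 - p_i        # each present disease fails to cause F
        lcpt[assignment] = 1.0 - prob_f_absent
    return lcpt

# Three diseases with assumed link probabilities and a small leak.
table = noisy_or_lcpt([0.8, 0.6, 0.3], leak=0.05)
print(table[(True, False, True)])                 # P(F | D1, not D2, D3)
```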
In order to specify the LCPT, only n + 1 numbers are needed (n link probabilities and one leak probability). Determining those numbers from the 2n sensitivities and specificities is no longer an ill-posed problem; instead, it has changed to an over-determined one.
We propose a procedure for estimating link and leak probabilities. The first
step is to estimate the link probabilities for each disease. A link probability is a characteristic
of a particular disease and does not depend on sensitivities and specificities
of other diseases. We can determine it from the sensitivity and specificity for this
disease only. The second step is to estimate the leak probability for the whole model,
based on all link probabilities determined in the first step. The leak probability is a
characteristic of the whole gate - if we decide to add more diseases later, the leak
probability should be updated.
In order to estimate the link probability p_i, we can employ again the leaky noisy-OR model, with two possible causes for F: the disease D_i, on one hand, and the combined action of all other factors that can lead to the appearance of F, such as other modeled or unmodeled diseases, measurement errors, etc. Conceptually, we can think of all of them as one leak factor L_all that is always present, and a corresponding leak probability p_all. Then, following equation (2) for two causes:

P(F | D_i) = 1 - (1 - p_i)(1 - p_all)
P(F | ¬D_i) = p_all

The second equation expresses the fact that P(F | ¬D_i) represents exactly the combined effect of all other diseases when D_i is not present. From those two equations:

p_i = 1 - (1 - P(F | D_i)) / (1 - P(F | ¬D_i))

For all cases of practical interest, this equation is well behaved. If a disease D_i indeed causes F, then P(F | D_i) > P(F | ¬D_i), and the resulting p_i lies between 0 and 1.

Once all link probabilities p_i are known, we can estimate the leak probability for the whole node. Let's consider P(¬F | ¬D_i), the specificity for a disease i. It expresses the probability that the finding will not be present when D_i is known not to be present. We can consider a world in which D_i is permanently absent and decompose the specificity of D_i across all possible hypotheses H over the remaining diseases:

P(¬F | ¬D_i) = Σ_H P(¬F | H) P(H)

Each of the LCPT entries P(¬F | H) can be expressed in terms of the already found link probabilities p_j and the unknown leak term p_L^i according to the noisy-OR model of equation (2):

P(¬F | H) = (1 - p_L^i) Π_{j ∈ H+} (1 - p_j)

Solving for p_L^i, we obtain

p_L^i = 1 - P(¬F | ¬D_i) / Σ_H P(H) Π_{j ∈ H+} (1 - p_j)

We can produce an estimate p_L^i for each i, and combine them in an appropriate manner if they are not consistent, either by simple or weighted averaging. The weights can be, for example, the negated priors P(¬D_i) of the diseases:

p_L = Σ_i P(¬D_i) p_L^i / Σ_i P(¬D_i)

The justification for this choice of weights is that the leak term p_L^i is needed when disease D_i is not present. At any rate, the separate estimates p_L^i should be consistent - if they are not, probably the sensitivity and specificity data are suspect, or a noisy-OR model is not appropriate for this particular set of diseases.
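The two-step procedure can be rendered as the following Python sketch (our own illustration; the function name and data layout are not from the paper). It assumes the diseases are marginally independent, which is needed to evaluate P(H) as a product of the disease priors.

```python
from itertools import product

def estimate_noisy_or(sens, spec, prior):
    """Estimate link probabilities and a combined leak for a leaky noisy-OR gate.

    sens[i]  = P(F | D_i), spec[i] = P(not F | not D_i), prior[i] = P(D_i).
    """
    n = len(sens)
    # Step 1: link probability of each disease from its own sensitivity and
    # specificity; P(F | not D_i) = 1 - spec[i] plays the role of "all other causes".
    link = [1.0 - (1.0 - sens[i]) / spec[i] for i in range(n)]

    # Step 2: one leak estimate per disease, by decomposing its specificity over
    # all truth assignments H of the remaining diseases.
    leak_estimates = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        denom = 0.0
        for assignment in product((False, True), repeat=len(others)):
            p_h = 1.0       # probability of this hypothesis H
            surv = 1.0      # product of (1 - p_j) over the diseases present in H
            for j, present in zip(others, assignment):
                p_h *= prior[j] if present else 1.0 - prior[j]
                if present:
                    surv *= 1.0 - link[j]
            denom += p_h * surv
        leak_estimates.append(1.0 - spec[i] / denom)

    # Combine the per-disease leak estimates, weighted by the negated priors.
    weights = [1.0 - p for p in prior]
    leak = sum(w * l for w, l in zip(weights, leak_estimates)) / sum(weights)
    return link, leak_estimates, leak
```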
The following numerical example illustrates the operation of the algorithm.
Suppose that we have two diseases: tuberculosis (D 1 ) and bronchitis (D 2 ). Both of
them cause dyspnea (breathlessness) (F). We are given the incidence of these two
diseases, as well as the sensitivity and specificity of the finding with respect to each
of them:
The first step is to find the link probabilities of each disease:
0.986
The next step is to estimate the leak probability of the noisy-OR gate from
each of the two specificities and the already found link probabilities:
We can see that the two estimates of the leak probability are fairly consistent
and we can use either of them or their average in a noisy-OR gate.
4 Dealing with conditional dependence and over-confidence
Assumptions of conditional independence are the main factor for simplifying a JPD
and representing it efficiently in a Bayesian network. Strict conditional independence,
however, rarely exists in distributions coming from the real world. If two variables
are nearly conditionally independent, the effect of making the assumption is usually
not significant. There are, however, many situations when making an unwarranted
assumption of conditional independence severely impairs the performance of the reasoning
system.
One such case is observed when two findings are not completely independent
given a disease. Over-confidence in the diagnosis is the end effect of failing to represent
explicitly their dependency [YHL+88]. The reasons for over-confidence can be
explained with the following example:
Two symptoms of the disease cardiac tamponade (D) are dyspnea (breathlessness) (A) and rapid breathing (B) (over 20 inhalations per minute). Clearly, the two
findings are related - if a patient has cardiac tamponade and has difficulty breath-
ing, it would be very likely that the same patient is breathing rapidly and trying to
compensate for the air shortage. If we fail to represent the dependency between the
two findings, any time both of them are reported to be present, the posterior odds O(D | A, B) of the disease will increase by the product of the likelihoods L(A | D) and L(B | D) of the two findings:

O(D | A, B) = L(A | D) L(B | D) O(D)
Multiplying the two likelihoods is clearly not right, because after we know that
A is present, establishing B does not bring substantial new information. If we fail to
recognize this effect, the posterior odds O(D | A, B) become unreasonably high - the
system is said to be overconfident in its diagnosis, and is not likely to be trusted by
the users in the long run.
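The size of the effect can be seen with made-up numbers (a prior of 0.001 for the disease and a likelihood ratio of 5 for each finding; these are illustrative and not taken from the paper): the naive odds-likelihood update multiplies the two ratios even though the second finding adds little information.

```python
def posterior_odds(prior_odds, likelihood_ratios):
    """Naive odds-likelihood update: O(D | findings) = O(D) * product of likelihood ratios."""
    odds = prior_odds
    for lr in likelihood_ratios:
        odds *= lr
    return odds

prior = 0.001 / 0.999                     # assumed prior odds of the disease
print(posterior_odds(prior, [5.0]))       # one finding present: odds rise 5x
print(posterior_odds(prior, [5.0, 5.0]))  # both findings: odds rise 25x - overconfident
                                          # when the findings are in fact dependent
```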
Reliable methods have been established in the medical community for the
detection of over-confidence or under-confidence (diffidence) in diagnostic systems
[HHB78a, HHB78b]. These methods require a dataset of diagnostic cases with known
diagnoses.
There are several strategies for dealing with conditional dependency between
findings. The simplest of all is to disregard all findings but the most diagnostic one.
For example, the system Iliad uses this strategy in some cases [WSB97]. This is
guaranteed to avoid over-confidence, indeed, but at the expense of ignoring useful
diagnostic information.
Another strategy is to introduce intermediate nodes that represent pathophysiological
states; they are also called clusters of findings [YHL+88]. This solution is
shown in Fig. 2(a). In this scheme, the findings determine the truth value of the
intermediate node, and only the latter influences the disease node directly. The problem
with this approach is that the probabilities for the subnetwork have to be known,
which is rarely the case - pathophysiological states are usually unobservable.
4.1 Expressing Boolean clusters
One practically important situation when the probabilities in the subnetwork are easy
to find is that of Boolean clusters. For example, the intermediate node I representing
the pathophysiological state "myocardial arrhythmias" should be on whenever one
of the nodes corresponding to various arrhythmias is on. In other words, the node
I is a simple OR function of other nodes, and the designer of the network uses only
qualitative information about the behavior of the network, similarly to the principles
of constructing Qualitative Bayesian Networks [Wel90, DH93].
Constructing the LCPT for an OR gate is trivial, but the direction of the edges
of the graph is incompatible with that of the rest of the network (Fig. 2(b)). In
Fig.2(b), the node I is a simple OR of the nodes F 1 and F 2 . However, D is also a
parent of I, and the LCPT of I includes mixed entries between the findings F_1 and F_2 and the disease node D. It is not at all obvious what those entries should be.
Even if it is possible to construct them, the network would include both causal and
anti-causal edges. In general, we would like to have consistent arrows in the DAG of
the network - either in the causal or in the diagnostic direction, but not both.
One solution is to invert the direction of the Boolean rule and make it compatible
with the rest of the network. The inversion is a mechanical procedure that
manipulates the local structure and probability tables in a network, and is available
as a feature in development environments such as Netica [Nor97a, Nor97b].
The resulting network represents the same JPD as the original one, but this factorization
is different. It should be noted that when inverting Boolean rules with
extremal entries in the LCPTs (0 and 1), degeneracies might occur. For example,
one way to represent the rule I = F_1 OR F_2 is by means of an LCPT containing only the exact truth values 0 and 1 (in particular, P(I | ¬F_1, ¬F_2) = 0). When the topology is inverted (Fig. 2(c)), the parents of F_1 are I and F_2 (note that the inversion introduces a new link from F_2 to F_1). The LCPT of F_1 then will include the entry P(F_1 | ¬I, F_2). Clearly, the truth assignment ¬I and F_2 is impossible under the OR rule, and the entry P(F_1 | ¬I, F_2) cannot be filled in by a reversal algorithm. (Certainly, any value can be filled in, because it will never be used, but nevertheless reversal algorithms fail in this degenerate case.) A better choice for the LCPT of the original network would be, for example, one in which the extremal entries 0 and 1 are replaced by ε and 1 - ε for some small number ε. If extremal values are avoided, the reversal is well behaved.

Figure 2: (a) Introduction of an intermediate node. (b) Mixing causal and anti-causal links is difficult. (c) Link reversal might introduce many unwanted links between findings.
The technique of link reversal can be useful for relatively small networks - not
more than five antecedents in a rule. For larger rules, the number of probabilities
in the LCPTs becomes prohibitively large. During the reversal of a link, new links
are added between the nodes in the Markov blanket of the nodes connected by that
link. (The Markov blanket of a node is the set of nodes that are parents, children, or
parents of the children of that node [Pea88].) For example, the inversion of an OR
rule with 11 antecedents results in a network with 8192 numbers in its LCPTs. The
computational effort is hardly worth the benefits of using the OR rule.
A question we can ask is whether all those numbers are really necessary for
representing a simple logical rule. It would be very desirable if we could obtain the
same or similar computational results with a smaller Bayesian net, for example the
one shown in Fig. 2(a).
It turns out that this is possible. Fig. 2(a) shows a gate with edges pointing
in the causal (generative) direction and no links between the leaves of the net. In
theory, it is not possible to construct a net that implements a precise OR or AND
gate in this direction. In practice, it is satisfactory to build a net that exhibits similar
behavior within arbitrarily close error bounds.
Since AND and OR are commutative operators, it is reasonable to expect that
the LCPTs of all child nodes in 2(a) should be identical and it is enough to specify
one of them. An AND gate can be represented with an LCPT whose entries deviate from the corresponding truth values by at most a small number ε; analogously, the LCPT of an OR node can be built in the same way from the OR truth table. The only difference between the two types of nodes is in how much the LCPT entries differ from the extremal values 0 and 1. The smaller the number ε, the closer the behavior of the gate is to the true logical function. It should be at least as small as the smallest prior probability of the consequent node D, and preferably smaller.
A NOT gate can be trivially expressed by an LCPT that inverts the truth value of its parent.
By means of the three gates AND, OR, and NOT, arbitrary propositional rules can
be expressed. Furthermore, rules represented by the proposed Bayesian subnetworks
are able to propagate virtual (uncertain) evidence, while the original propositional
rules cannot.
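As a sanity check of this idea, the sketch below uses one candidate pair of LCPTs (our own choice, not necessarily the exact tables of the paper: P(F_i | I) = 1 and P(F_i | ¬I) = ε for an AND-like cluster, P(F_i | I) = 1 - ε and P(F_i | ¬I) = 0 for an OR-like cluster) and verifies by enumeration that the posterior of the intermediate node tracks the corresponding logical function when ε is small.

```python
from itertools import product

def posterior_i(prior, p_f_given_i, p_f_given_not_i, findings):
    """P(I | findings) when the findings are independent children of I."""
    like_i = like_not_i = 1.0
    for f in findings:
        like_i *= p_f_given_i if f else 1.0 - p_f_given_i
        like_not_i *= p_f_given_not_i if f else 1.0 - p_f_given_not_i
    num = prior * like_i
    return num / (num + (1.0 - prior) * like_not_i)

eps, prior = 1e-4, 0.01
for findings in product((False, True), repeat=3):
    and_post = posterior_i(prior, 1.0, eps, findings)        # AND-like cluster
    or_post = posterior_i(prior, 1.0 - eps, 0.0, findings)   # OR-like cluster
    assert round(and_post) == all(findings)
    assert round(or_post) == any(findings)
```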
Figure 3: A segment of a network for diagnosing myocardial infarction (MI) based on
detection of ST segment depression (STD) and five cardiac arrhythmias: sinus tachycardia
(ST), sinus bradycardia (SB), atrial flutter (AFL), atrial fibrillation (AFB), and junctional
rhythm (JR). Overconfident diagnosis is avoided by introducing the intermediate node CA,
denoting any cardiac arrhythmia. The LCPTs of all arrhythmia nodes are identical to that
of ST.
Figure 3 illustrates the use of an intermediate OR cluster to avoid overconfident
diagnosis of myocardial infarction (MI) when several cardiac arrhythmias are
detected. Arrhythmias are weak and inconclusive indications of MI and have a supplemental role in diagnosis. The other finding, depression of the ST segment of the electrocardiogram (STD), is a much stronger finding, as reflected in its LCPT. However, if the intermediate node CA is not present, the effect of several cardiac arrhythmias on diagnosis might be much stronger than that of STD. If the node CA is present, no matter how many arrhythmias are detected, their combined effect is the same as if
only one is detected. This prevents overconfidence and places more emphasis on the
more predictive finding, STD.
4.2 Controlling the degree of conditional dependence
Inversion of logical rules helps in situations when beliefs on intermediate nodes are
expressed as logical functions. When they are not, it is hard to determine the LCPTs
of the intermediate nodes. Instead of introducing intermediate nodes, then, we can
try to model explicitly the dependence between the findings. The result of such
modification is shown in Fig. 4(a). At least one of the finding nodes has as parents
both a disease node and another finding. The corresponding mixed entries in the
LCPT of that node are hard to obtain from statistics. What is usually available is
only the sensitivities and specificities of the two findings. In addition, the designer of
the system usually has a qualitative idea of how dependent the two findings are.
Figure 4: (a) Explicit control over dependency. (b) An overconfident net is created. (c) An inverse net is built by sampling the conditional distribution P(D | F_1, F_2) represented by the previous net. The entries of the LCPT are corrected and the net is re-inverted.
This type of information is in fact sufficient to construct the subnetwork in
Fig. 4(a). A special type of double network inversion can be employed to control
explicitly the degree of dependency that two findings have. The idea of this technique
is to obtain a network of marginally independent findings with edges in the
diagnostic direction - opposite to the one in which sensitivity and specificity parameters
are available. The LCPT of this network contains the posterior probabilities of
the disease given all possible combinations of findings. All cases of over-confidence
can be identified and corrected directly in that LCPT. After satisfactory diagnosis
is achieved, the corrected network can be inverted again to its original generative
direction (edges from diseases to findings). During the second inversion, additional
links appear between findings, and the corresponding LCPTs are generated by the
link reversal algorithm.
It should be noted that a standard link reversal algorithm can be employed
only for the second inversion, but not for the first one. Such link reversal algorithms
change the direction of the edges so that the resulting network represents the same
joint probability distribution [Nor97a]. In doing so, they have to introduce links
between findings, because findings have been marginally dependent in the original
network.
In contrast to that, we need a link reversal algorithm that produces a network
with marginally independent finding nodes and preserves the conditional probability distribution P(D | F_1, F_2), not necessarily the joint probability distribution P(D, F_1, F_2). The former can be represented by a network with marginally independent finding nodes (no links between them), but the latter cannot.
Inverting the conditional probability distribution P(D | F_1, F_2) can be
done by instantiating the nodes of the original overconfident network in Fig. 4(b) to a
particular combination H of truth values for the finding nodes, doing belief updating,
reading off the posterior probability P(D | H) of node D, and entering that number in
the LCPT of node D in the inverted net in Fig. 4(c). This process has to be repeated
for all possible truth assignments of the finding nodes - each instantiation will produce
one entry of the LCPT in the new network. Manual sampling of the original network
is straightforward, but tedious and error-prone. In order to facilitate the inversion, a
utility can be employed that samples an existing overconfident network and creates
another one that represents the same conditional probability distribution. The utility
we used was built by means of the Netica API library [Nor97b]. After the LCPT of
the inverted net in Fig. 4(c) is corrected to avoid over-confidence, it is re-inverted
back to the causal direction, which results in the appearance of the additional link in
Fig. 4(a).
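The computation performed by such a reversal utility can be sketched for the two-finding case as follows: starting from the finding marginals P(F_1), P(F_2) and a corrected diagnostic table P(D | F_1, F_2), rebuild the joint distribution and read off the generative parameters P(D), P(F_2 | D), and P(F_1 | D, F_2) of the network in Fig. 4(a). This is our own illustration of the idea, not the Netica code used by the authors, and it assumes no configuration has zero probability.

```python
from itertools import product

def reinvert(p_f1, p_f2, p_d_given_f):
    """p_d_given_f[(f1, f2)] = P(D | F1=f1, F2=f2); F1 and F2 marginally independent."""
    joint = {}                                   # joint distribution over (D, F1, F2)
    for f1, f2 in product((False, True), repeat=2):
        p_f = (p_f1 if f1 else 1 - p_f1) * (p_f2 if f2 else 1 - p_f2)
        joint[(True, f1, f2)] = p_d_given_f[(f1, f2)] * p_f
        joint[(False, f1, f2)] = (1 - p_d_given_f[(f1, f2)]) * p_f

    p_d = sum(joint[(True, f1, f2)] for f1, f2 in product((False, True), repeat=2))

    def p_f2_given_d(d):
        return sum(joint[(d, f1, True)] for f1 in (False, True)) / (p_d if d else 1 - p_d)

    def p_f1_given_d_f2(d, f2):
        return joint[(d, True, f2)] / sum(joint[(d, f1, f2)] for f1 in (False, True))

    return p_d, p_f2_given_d, p_f1_given_d_f2
```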
This procedure of double inversion will be illustrated with the following numerical
example, involving the aforementioned problem of diagnosing cardiac tamponade
D given two findings: breathlessness (F_1) and rapid breathing (F_2). The data we
have available are the incidence of the disease and the sensitivity and specificity of
the two findings:
In addition, we have the knowledge that the two findings are not conditionally
independent and an approximate idea of their conditional dependency.
The first step is to build a network of the type shown in Fig. 4(b), using the
disease prior and sensitivity and specificity information about the two findings. By
means of belief updating in this network we can obtain the marginal probabilities of
the findings:
These probabilities are approximate and are in this case very close to P(F_1 | ¬D) and P(F_2 | ¬D), respectively, because the disease is very rare.
This network can be used for diagnostic reasoning, by instantiating the finding
nodes to all possible combinations of truth values:
We can clearly see that this network is over-confident. When only the first
finding is present, the posterior probability of the disease is only five times the prior
probability. Similarly, when only the second finding is present, the posterior probability
is three times the prior one. However, when both findings are present, the posterior
is 14.8 times the prior, even though the second finding does not add substantial new
diagnostic information. This unreasonable increase in posterior probability might
lead to imprecise diagnosis, and should be eliminated.
To this end, we build another network, this time of the type shown in Fig. 4(c).
The priors of that network are the marginal probabilities of the findings, which were
found in the first step. The LCPT of the disease node is filled with the posterior
probabilities of the disease node from the first network, after appropriate corrections.
In this particular case, the following corrections seem reasonable:
The other two entries remain the same. After
this network is built, its links are inverted by a general link reversal algorithm implemented
in a package for belief reasoning [Nor97a]. The result is a network of the
type shown in Fig. 4(a). An extra edge from F 2 to F 1 has been inserted in order to
reflect the fact that the two findings are not conditionally independent. Moreover, the
LCPTs of the finding nodes contain the appropriate numbers, which are otherwise
hard to come by:
5 Conclusion
Building and using probabilistic networks for medical diagnosis requires reliable methods
for knowledge engineering. Some of these methods can be borrowed from the
general practice of constructing probabilistic systems; others have to be adapted to
the problem domain. This paper described several such techniques. Methods were
described for compiling local conditional probability tables from partial statistics for
mappings from diseases to findings. The proposed solution to the problem of estimation
of the parameters of leaky noisy-OR gates is significantly easier to use than other
methods for assessment, and does not require statistics other than the specificity and
sensitivity of the findings.
Over-confidence in diagnosis is a major problem in probabilistic diagnostic systems
and handling it successfully is a crucial factor to the acceptance of diagnostic
systems by medical decision makers. Two methods were proposed to deal with the
major reason for over-confidence - unmodeled conditional dependence. These methods
can be used to correct the numbers in the local conditional probability tables of
a Bayesian network, if unmodeled conditional dependence is suspected. By means
of a double link-reversal technique, the amount of conditional dependence between
findings can be controlled precisely. Also, if findings are clustered by means of logical
rules, the rules can be represented efficiently by the proposed Bayesian subnetworks
that serve as AND, OR, and NOT logical gates.
--R
Analysis of probability as an aid in the clinical diagnosis of coronary-artery disease
Efficient reasoning in qualitative probabilistic networks.
A tractable inference algorithm for diagnosing multiple diseases.
A new look at causal independence.
Netica application user's guide.
Netica API programmer's library.
Probabilistic Reasoning in Intelligent Systems.
On the impact of causal independence.
The adolescence of AI in medicine: will the field come of age in the
Probabilistic expert systems in medicine.
A generalization of the noisy-OR model
Probabilistic diagnosis using a reformulation of the INTERNIST-1/QMR knowledge base
Knowledge Engineering in Health Informatics.
Fundamental concepts of qualitative probabilistic networks.
Clustered knowledge representation: Increasing the reliability of computerized expert systems.
| medical diagnosis;bayesian networks;combining risk factors;leaky noisy-OR nodes
628087 | An Approach to Active Spatial Data Mining Based on Statistical Information. | AbstractSpatial data mining presents new challenges due to the large size of spatial data, the complexity of spatial data types, and the special nature of spatial access methods.Most research in this area has focused on efficient query processing of static data. This paper introduces an active spatial data mining approach that extends the current spatial data mining algorithms to efficiently support user-defined triggers on dynamically evolving spatial data. To exploit the locality of the effect of an update and the nature of spatial data, we employ a hierarchical structure with associated statistical information at the various levels of the hierarchy and decompose the user-defined trigger into a set of subtriggers associated with cells in the hierarchy. Updates are suspended in the hierarchy until their cumulative effect might cause the trigger to fire. It is shown that this approach achieves three orders of magnitude improvement over the naive approach that reevaluate the condition over the database for each update, while both approaches produce the same result without any delay. Moreover, this scheme can support incremental query processing as well. | Introduction
Spatial data mining, i.e., discovery of interesting characteristics and patterns that may implicitly exist in spatial
databases, plays an important role in understanding spatial data and in capturing intrinsic relationships between spatial
and non-spatial data. Efficiency is a crucial challenge in spatial data mining due to the large size of spatial data and
the complexity of spatial data types and spatial access methods.
There have been many contributions in this field during recent years [Kno96a] [Kno96b] [Kop96] [Est97] [Gal97]
[Han97] [Hor97] [Wan97] [Est98a] [Est98b] [Gru98] [Zai98]. However, most approaches have focused on issues in
processing spatial data mining queries. Our focus in this paper is to extend current spatial data mining techniques to
support user-defined triggers, i.e., active spatial data mining. In this paper, we assume point objects unless otherwise
specified. Each object has a spatial attribute, its location, and some non-spatial attributes. In many applications,
clusters formed by objects with some specified attribute values are the main features of interest. Some examples are
the following.
1. Military Deployment. For example, armor deployment in some region can be located via satellite images. The
movement of armored vehicles can be traced and if specified patterns of concentration or movement are detected,
further investigation may be triggered.
2. Cellular Phone Service. Each mobile object has an ID, associated personal information, and can move from one
area to another. Patterns in time, length, distance, and location of phone calls can be mined. Such knowledge
can be used to dynamically allocate bandwidth for better service and price policy planning for maximum profit.
3. Situation Awareness and Emergency Response. Situation awareness and emergency response are two important
information centered applications that require support for a large number of geographically distributed mobile
users collaborating on a common mission and with interest in a common situation domain. The spatial location
of a user plays a dominant role in determining the user profile. Clustering of users performing the same task in
close spatial proximity can be used to balance the tradeoff between minimizing the overall bandwidth requirement
and maximizing the relevance of multicast channels to users. Triggers specified in terms of changes to
clusters (area, density, location, etc.) may be used to signal the need to adjust routing assignments.
Introducing spatial data mining triggers has the following advantages.
1. From the user's point of view, it is not necessary to submit the same query repeatedly in order to monitor the
appearance of some condition. Instead, he (or she) may specify a trigger and code the pattern as the trigger
condition. Moreover, instead of waiting until the next execution of the same query, interesting patterns can be
detected immediately as they appear if triggers are used. In addition, if the pattern only exists for a short period
between two executions of the query, the user may miss the pattern using the reposting query method.
2. From the system's point of view, it is usually much more efficient to handle a trigger incrementally than re-execute
the same query on the entire database many times.
3. Data mining tasks involving special characteristics associated with spatial data, such as cluster emergence,
movement, splitting, merging, and vanishing, cannot be supported by traditional database triggers efficiently
(if at all). This is due to the fact that the class membership of an object is not only determined by its non-spatial
attributes but also by the attributes of objects in its neighborhood. As an extension of both spatial data
mining techniques and active rules, spatial data mining triggers are designed to handle such complicated tasks
efficiently.
In this paper, we introduce an approach to active spatial data mining, called STING+, which takes advantage of
the rich research results of active database systems and the efficient algorithms in STING [Wan97] for passive spatial
data mining. Instead of the traditional Event-Condition-Action paradigm [Wid96] [Zan97], triggers in STING+ do not
require the Event specification and thus fall into the Condition-Action paradigm [Wid96]. Any condition allowed in a
spatial data mining query can be specified as the Condition of a spatial data mining trigger. This enables the user to
specify a complicated condition and the action taken upon satisfaction of some condition without considering which events
might cause this condition to become true. This feature is important because a data mining task is usually
very complicated. It is very difficult (if not impossible) for a user to specify all events that might affect the trigger
condition. In contrast, in STING+, all events that may cause the trigger condition to be satisfied are accounted for by
the system during trigger evaluation.
Evaluating a user-defined trigger T usually involves two aspects, where C_T is the trigger condition specified in T.
(1) Find a set of composite events E(s) such that if C_T transitions from false to true then this transition must have
occurred due to some composite event in E(s), where s and E(s) are the state of the database, and a subset of all
composite events that can cause C T to become true, respectively. (2) Each time some composite event in E(s) occurs,
check the status (false or true) of C T given that C T was false previously. In general, E(s) is a function of the database
state s. In spatial databases, object insertion, deletion, and update are primitive events to form a composite event.
For example, a composite event could be 100 object insertions in a specified region. The effect of such a composite
event depends largely on conditions in such a region. (A composite event e is a set of events {e_1(k_1), e_2(k_2), ..., e_n(k_n)}, where e_1, ..., e_n are different types of primitive events; e is said to occur only if every event e_i in the set occurs at least k_i times. For example, if e = {insert(20), update(15)}, e is said to occur when 20 insertions and 15 updates occur. The occurrence of a composite event e is monitored by sub-triggers explained in a later section. Once e occurs, the counters used in those sub-triggers are reset.) For example, as illustrated in Figure 1(a), deleting object o_1 from the region which a cluster occupies could potentially cause this cluster to shrink whereas deleting object o_2 from
some other place has no effect on this cluster. As a side effect of the occurrence of some composite event, the set of
composite events E(s) that could cause C T to transition from false to true might also evolve over time. For example,
the dot cluster and cross cluster overlap with each other in Figure 1(b). If we want to monitor whether these two
clusters become disjoint, then E(s) would contain some composite events, which in turn consist of object deletions
from the shaded area since these events can cause the clusters to separate. However, if a number of objects (denoted
by bold cross) are inserted later but before any composite event in E(s) occurs as shown in Figure 1(c), then these two
clusters would separate only when objects within both shaded areas are deleted. E(s) therefore changes with time and
needs to be updated.

Figure 1: Effect of Events
Therefore, there are two sets of composite events we need to consider: (1) the set of composite events that can
cause C T to become true, E(s); (2) the set of composite events that can cause a change to E(s), call it F (s). If a
composite event in E(s) occurs, we need to re-evaluate C T ; whereas if a composite event in F (s) happens, we have
to update E(s). To keep E(s) updated, one approach is to update E(s) on each primitive event. This is appropriate in
those applications where updating E(s) falls out naturally from processing the event and hence little overhead will be
introduced. However, if recalculating E(s) requires a significant overhead, then we need to consider other approaches.
An alternative is to keep E(s) updated by triggering updating of E(s) when a composite event in F (s) occurs. This
approach is preferable if, by tracking the changes to E(s), the overhead of checking C T is reduced by more than the
overhead introduced by checking whether a composite event belong to F (s).
In order to achieve optimal or near optimal performance, STING+ postpones the condition evaluation until the
accumulated effect of some composite event (i.e., data updates) might cause either the trigger condition to become
true or the composite event set E(s) to evolve. Moreover, due to the fact that the effect of an event is usually local to
its neighborhood, STING+ employs a hierarchical structure with associated statistical information at the various levels
of the hierarchy and decomposes the user-defined trigger into a set of sub-triggers associated with cells in the hierarchy.
These sub-triggers are used to monitor composite events in E(s) and change accordingly when E(s) evolves. The
minimum accumulated amount of updates necessary to satisfy the trigger condition is maintained incrementally using
statistical information associated with the hierarchy so that updates are suspended at some level in the hierarchy until
such time that the cumulative effect of these updates might cause the trigger condition to become satisfied. Actions
defined in the trigger will be executed automatically once the condition is satisfied. Moreover, this scheme can also be
used to support incremental query processing efficiently. Due to space limitations, we only focus on trigger processing.
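The idea of suspending updates can be sketched very roughly as follows (our own simplification, not the algorithm of Section 5): each sub-trigger attached to a cell records how many further relevant updates the cell can absorb before the trigger condition could possibly change, and only when that budget is exhausted is the condition rechecked and the budget recomputed.

```python
class SubTrigger:
    """Per-cell sub-trigger that suspends updates until their cumulative count
    could possibly affect the user-defined trigger condition."""

    def __init__(self, cell, threshold, reevaluate):
        self.cell = cell
        self.threshold = threshold    # minimum number of updates that could matter
        self.pending = 0
        self.reevaluate = reevaluate  # callback: recheck the condition, return a new threshold

    def on_update(self):
        self.pending += 1
        if self.pending >= self.threshold:
            # The accumulated effect may now satisfy the condition: recheck it
            # and recompute how many further updates can be absorbed.
            self.threshold = self.reevaluate(self.cell)
            self.pending = 0
```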
This paper is organized as follows. Related work is reviewed in Section 2. Section 3 and Section 4 discuss the
trigger types supported by STING+ and the STING+ structure. In Section 5, algorithms for trigger evaluation are
presented. Experimental results and discussions on integrating query processing with trigger evaluation are presented
in Section 6 and Section 7. Finally, we draw our conclusions in Section 8.
2 Related Work
2.1 Spatial Data Mining Systems
Much work has been done in this area in recent years. [Ng94] [Zha96] [Est96] proposed algorithms to cluster large
data sets. [Kno96a] [Kno96b] [Kno97] focused on extraction of proximity relationships between clusters and features
and boundary shape matching. [Han97] presents a system prototype for spatial data mining, GeoMiner. The major
features of GeoMiner include mining several kinds of knowledge rules in spatial databases, the integration of data
mining and data warehousing technologies, interactive mining of multi-level rules, and integration with commercial
relational databases and GIS. [Est97] [Est98b] develop a system to detect spatial trends and spatial characterizations.
A spatial trend describes a regular change of one or more non-spatial attributes when moving away from a given start
point, whereas spatial characterization of a set of target objects is a description of the spatial and non-spatial properties
which are typical for the target objects but not for the whole database. Not only the properties of the target objects
but also the properties of their neighbors are considered during this process. A neighborhood graph is employed to
facilitate the mining process.
2.2 STING
STING (STatistical INformation Grid), proposed in [Wan97], is a statistical information grid-based approach to spatial
data mining. A pyramid-like structure is employed (shown in Figure 2), in which the spatial area is divided recursively
into rectangular cells down to certain granularity determined by the data distribution and resolution required by appli-
cations. Statistical information for each cell is calculated in a bottom-up manner and is used to answer queries. When
processing a query, the hierarchical structure is examined in a top-down manner. Cells are marked as either relevant
or not relevant with a certain confidence level using standard statistical tests. Only children cells of relevant cells are
examined at next level. The final result is formed as the union of qualified leaf level cells.
STING has the following advantages [Wan97]:
- It is a query-independent approach to storage structure since the statistical information exists independently of queries. This structure is a summary representation of the data in each grid cell, which can be used to facilitate answering a large class of queries.
- The computational complexity is O(K) where K is the number of leaf cells. Usually, K << N where N is the number of objects.
- Query processing algorithms using this structure are trivial to parallelize.
- When data is updated, we do not need to recompute all information in the cell hierarchy. Instead, updates can be handled in an incremental manner.
2.3 Active Data Mining
Active data mining has emerged recently. For example, incremental algorithms have been proposed for mining dynamic
databases. [Fel97] presents an incremental algorithm for mining association rules in a dynamic database, which
achieves a speed-up of several orders of magnitude compared to the non-incremental algorithm with small space over-
head. An algorithm for incremental clustering for mining in a data warehousing environment is proposed in [Est98a].
Updates are collected and only affected objects are re-examined during the next query evaluation. This yields significant
speed-up over the non-incremental version. However, these algorithms only aim at a predefined task. User-defined
triggers are not supported. [Agr95] outlines a paradigm for active data mining in temporal databases. Data is continuously
mined at a desired frequency. As rules are discovered, they are added to a rulebase. Users can specify a history
pattern in a trigger which is fired when such a pattern is exhibited.
3 Spatial Data Mining Triggers
STING+ supports spatial data mining triggers monitoring both spatial regions that satisfy some condition and attribute
values of objects within some spatial regions. Looking ahead, STING+ employs a hierarchical structure. Space
is recursively partitioned into grid cells down to a specified granularity and is organized via the inherent pyramid
hierarchy. Statistical information associated with each cell is stored to facilitate the trigger evaluation. A region in
STING+ is defined as a set of adjacent leaf level cells. In addition, object density and attribute conditions in STING+
are defined in terms of leaf level cells as well. More specifically, the density of a leaf level cell is defined as the ratio
of the number of objects in this cell divided by the area of this cell. A region is said to have a certain density c iff the
density of every leaf level cell in this region is at least c. Conditions on attribute values are defined in a similar manner.
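Written out, the two leaf-cell tests look as follows (our own sketch; the attribute condition is phrased here as a threshold on the per-cell mean, which is one possible form):

```python
def region_has_density(cell_counts, cell_area, c):
    """A region has density c iff every leaf-level cell in it holds at least
    c objects per unit area (per the definition above)."""
    return all(count / cell_area >= c for count in cell_counts)

def region_satisfies_attribute(cell_means, threshold):
    """Analogous leaf-cell test for a condition on a non-spatial attribute."""
    return all(mean >= threshold for mean in cell_means)
```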
Two kinds of conditions can be specified by the user. One condition is an absolute condition, i.e., the condition is
satisfied when a certain state is reached. For instance, one can specify the condition to be "there is a region where
at least 10 cellular phones are in use per squared mile with total area at least 10 squared miles". The other type of
condition is a relative condition, i.e., the condition is satisfied when a certain degree of change has been detected. For
example, the condition can be specified as "when the cellular phone usage drops by 20% in the region where at least
cellular phones are in use per squared mile with total area at least 10 squared miles compared to now". Therefore,
four categories of triggers are supported by our system.
1. region-trigger: absolute condition on certain regions,
2. attribute-trigger: absolute condition on certain attributes,
3. region-δ-trigger: relative condition on certain regions,
4. attribute-δ-trigger: relative condition on certain attributes.
A conjunction of the above four categories of triggers can also be specified. However, due to space limitations,
we do not elaborate in detail on conjunctions in this paper. Since it is trivial to decompose a conjunction trigger into
several primitive triggers and then process each one using the same algorithms as for primitive triggers, we will omit
the algorithms for evaluating a conjunction trigger.
provides users with the capability to specify the active period of a trigger, i.e., a time period during which
the trigger is active. The optional SINCE and UNTIL clauses are used for this purpose (Example 3.3, 3.4). The
default values are SINCE NOW and UNTIL FOREVER, respectively. Sometimes, the user wants to be notified only
if the specified pattern exists for a certain amount of time. Such a condition can be specified in the VALID clause
(Example 3.1).
The region(s) on which a trigger is defined can either be fixed through the active period or change with time.
STING+ requires the conditions for fixed region(s) and for variable region(s) to be specified in different clauses
since they need to be handled differently. The fixed region(s) are specified in the LOCATION clause (Example 3.1),
whereas the variable regions are specified as region conditions in the WHERE clause (Example 3.2). Note that the fixed
region(s) can be the result of another region query (Example 3.3). The default region is the entire space (Example 3.4).
Larger units, corresponding to some level of the pyramid closer to the root, can be used to define density and attribute conditions and can be supported by the STING+ structure with a minor change of the algorithm for trigger evaluation. For simplicity of explanation, we always assume the leaf level cell is the basic unit for density and attribute conditions.
Usually, the action in a trigger (defined in the do-clause) will be executed only once; namely the first time the
condition is satisfied during the active period. The default declaration has this semantics. However, in some cases, the
existence of old patterns is not as interesting as the appearance of new patterns. The user may want to be informed only
when a new pattern satisfying the condition first appears. For example, a user may only want to be notified whenever
a new region where cellular phones are heavily used emerges in order to re-allocate the bandwidth. In this case, the
user needs to define the trigger as a REPEAT TRIGGER.
The BNF of the query language is given in Appendix A. Below are several examples, one for each category of
trigger.
Example 3.1 Region-trigger: Trigger a market study when there exists a continuous region within California for at
least 120 minutes, where at least 10 cellular phones are in use per squared mile and at least 70% of them are long
distance calls (distance larger than 100 miles) with total area at least 50 squared miles with 90% confidence.
ON cellular-phone
WHEN SELECT REGION
AND distance WITH PERCENT (70, 100) IN RANGE (100, ∞)
AND AREA IN RANGE (50, ∞)
AND WITH CONFIDENCE 0.9
LOCATION California
DO market-investigation
Example 3.2 Attribute-trigger: Trigger a market study when the average call length is greater than 10 minutes
within the region where at least 10 cellular phones are in use per squared mile and at least 70% of them are long
distance calls (distance larger than 100 miles) with total area at least 50 squared miles with 90% confidence.
ON cellular-phone
AND distance WITH PERCENT (70, 100) IN RANGE (100, ∞)
AND AREA IN RANGE (50, ∞)
AND WITH CONFIDENCE 0.9
DO market-investigation
Example 3.3 Attribute-δ-trigger: Trigger bandwidth re-allocation when the average conversation length increases
by 20% from now until December 31, 1998 in those region(s) where at least 10 cellular phones are in use per squared
mile and all of them are long distance calls (distance larger than 100 miles) with total area at least 100 squared miles
right now.
ON cellular-phone
LOCATION SELECT REGION
AND MIN(distance) IN RANGE (100, ∞)
AND AREA IN RANGE (100, ∞)
DO bandwidth-reallocation
Example 3.4 Region-δ-trigger: Trigger bandwidth reallocation when the total area occupied by those regions where
at least 10 cellular phones are in use per squared mile and all of them are long distance calls (distance larger than
100 miles) with total area at least 50 squared miles increases by at least 10 squared miles from August 1, 1998 to July
1999.
ON cellular-phone
WHEN SELECT SIZE(REGION) INCREASE RANGE (10, ∞)
AND AREA IN RANGE (50, ∞)
AND WITH CONFIDENCE 0.9
DO bandwidth-reallocation
To facilitate trigger processing in STING+, we adopt a similar hierarchical structure to that used in [Wan97]. This
structure can be regarded as a summary representation of data at different levels of granularity to allow triggers to be
evaluated efficiently without recourse to the individual objects. The root of the hierarchy is at level 1 and corresponds
to the whole spatial area. Its children cells are at level 2, etc. A cell at level i corresponds to the union of the areas of
its children at level i + 1. The size of leaf level cells is dependent on the density of the objects. As a rule of thumb, we
choose a granularity for the leaf level such that the average number of objects in each cell is in the range from several
dozens to several thousands. Figure 2 illustrates the hierarchical structure in the two dimensional case.
Figure 2: Hierarchical Structure (level 1 is the top level; each cell at level i has k children at level i+1).
For each cell c at level l in an m-dimensional space, its neighbors along dimension i (1 ≤ i ≤ m) are those cells at
level l whose projections on all other dimensions j (1 ≤ j ≤ m, j ≠ i) coincide with that of c and whose projections
on dimension i are adjacent to c. Note that a cell can have as many as two neighbors in each dimension or a total of
2m neighbors. For any region r, a leaf level cell x is an interior boundary cell of r iff x is within r but at least one
of x's neighbors is outside r. In this paper, we assume that our space is of two dimensions unless otherwise specified.
However, all results can be generalized to higher dimensional space with minor modification. Note that, in the cases
where the dimensionality is very high and/or the data distribution is extremely skewed, this structure may become less
efficient. In such a scenario, a more sophisticated indexing structure, such as PK-tree [Wan98b], may be employed to
serve as the underlying structure 3. For example, in Figure 3(a), C is a cell in two dimensional space. Its neighbors in
dimension 1 are A and B while D and E are C's neighbors in dimension 2. A, B, D, and E are all C's neighbors. In
Figure 3(b), the shaded cells are the interior boundary cells of the region whose contour is the bold line.
3 In such a structure, sibling cells may be of different size; the definitions of cell adjacency and region have to be relaxed to accommodate this. We will not
discuss this issue further in this paper.
Figure 3: Neighborhood and Region Interior Boundary ((a) neighbors of cell C; (b) interior boundary cells of a region).
For each cell in the hierarchy, a set of statistical parameters are maintained. The choice of parameter may be
application-dependent. In this paper, we assume the following parameters are maintained.
- attribute-independent parameter:
  n - number of objects in this cell
- attribute-dependent parameters for each numerical attribute:
  m - mean of all values of the attribute in this cell
  s - standard deviation of all values of the attribute in this cell
  min - the minimum value of the attribute in this cell
  max - the maximum value of the attribute in this cell
  distribution - the type of distribution that the attribute value in this cell follows
The parameter distribution is of enumeration type. Potential distribution types are: normal, uniform, exponential,
and so on. The value NONE is assigned if the distribution type is unknown. The distribution type will determine a
"kernel" calculation in the generic algorithm.
We generate the hierarchy of cells with their associated parameters when the data is loaded into the database.
Parameters n, m, s, min, and max of bottom level cells are calculated directly from data. The value of distribution
could be either assigned by the user if the distribution type is known beforehand or obtained by hypothesis tests such
as the χ²-test. Parameters of higher level cells can be easily calculated from parameters of lower level cells. Let n, m, s,
min, max, dist be parameters of the current cell and n_i, m_i, s_i, min_i, max_i, dist_i be parameters of the corresponding
lower level cells, respectively. Then n, m, s, min, and max can be calculated as follows:
n = Σ_i n_i,   m = (Σ_i m_i n_i) / n,   s = sqrt((Σ_i (s_i² + m_i²) n_i) / n − m²),   min = min_i(min_i),   max = max_i(max_i).
The determination of dist for a parent cell is a bit more complicated. First, we set dist as the distribution type followed
by most points in this cell. This can be done by examining dist_i and n_i. Then, we estimate the number of points, say
confl, that conflict with the distribution determined by dist, m, and s according to the following rules:
1. If dist_i ≠ dist, m_i ≈ m, and s_i ≈ s, then confl is increased by an amount of n_i;
2. If dist_i ≠ dist, but either m_i ≈ m or s_i ≈ s is not satisfied, then confl is set to n (this enforces that dist will be set
to NONE later);
3. If dist_i = dist, m_i ≈ m, and s_i ≈ s, then confl is not changed;
4. If dist_i = dist, but either m_i ≈ m or s_i ≈ s is not satisfied, then confl is set to n.
Finally, if confl / n is greater than a threshold t (this threshold is a small constant, say 0.05, which is set before the
hierarchical structure is built), then we set dist to NONE; otherwise, we keep the original type. For example, if the
parameters of each of the four lower level cells are as shown in Table 1, then the parameters of the current cell can be
calculated as described above.
dist_i   NORMAL   NORMAL   NORMAL   NONE
Table 1: Parameters of Children Cells
The distribution type is still NORMAL for the following reason: there are 210 points whose distribution
type is NORMAL, so dist is first set to NORMAL. After examining dist of each lower level cell, we find
that confl = 10. So, dist is kept as NORMAL (confl / n < 0.05).
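The following C sketch is our own illustration of how a parent cell's parameters and the dist/confl rules above could be computed from its children's parameters; the record layout, the function names, and the 10% tolerance used in approx() are all assumptions, not code from the STING+ prototype.
#include <math.h>

typedef enum { DIST_NORMAL, DIST_UNIFORM, DIST_EXPONENTIAL, DIST_NONE } DistType;

typedef struct {      /* per-cell statistics for one numerical attribute */
    long     n;       /* number of objects                               */
    double   m, s;    /* mean and standard deviation                     */
    double   min, max;
    DistType dist;    /* DIST_NONE if unknown                            */
} CellStats;

/* "Approximately equal" test; the 10% relative tolerance is an assumption. */
static int approx(double a, double b) {
    double ref = fabs(b) > 1e-9 ? fabs(b) : 1.0;
    return fabs(a - b) <= 0.1 * ref;
}

/* Compute a parent cell's parameters from its k children. */
CellStats aggregate(const CellStats *c, int k)
{
    CellStats p = { 0, 0.0, 0.0, INFINITY, -INFINITY, DIST_NONE };
    double sum = 0.0, sumsq = 0.0;
    for (int i = 0; i < k; i++) {
        p.n   += c[i].n;
        sum   += c[i].m * c[i].n;
        sumsq += (c[i].s * c[i].s + c[i].m * c[i].m) * c[i].n;
        if (c[i].min < p.min) p.min = c[i].min;
        if (c[i].max > p.max) p.max = c[i].max;
    }
    p.m = sum / p.n;
    p.s = sqrt(sumsq / p.n - p.m * p.m);

    /* dist: pick the type followed by most points, then count conflicts. */
    long count[4] = {0, 0, 0, 0};
    for (int i = 0; i < k; i++) count[c[i].dist] += c[i].n;
    DistType d = DIST_NORMAL;
    for (int t = 1; t < 4; t++) if (count[t] > count[d]) d = (DistType)t;

    long confl = 0;
    for (int i = 0; i < k; i++) {
        int fits = approx(c[i].m, p.m) && approx(c[i].s, p.s);
        if (!fits)               confl = p.n;       /* rules 2 and 4 */
        else if (c[i].dist != d) confl += c[i].n;   /* rule 1        */
        /* rule 3: same dist and matching m, s -> confl unchanged     */
    }
    p.dist = ((double)confl / p.n > 0.05) ? DIST_NONE : d;
    return p;
}
With the children of Table 1 this yields dist = NORMAL, since the 10 conflicting points are below the 5% threshold.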
Note that we only need to go through the data set once in order to calculate the parameters associated with the
grid cells at the bottom level, the overall compilation time is linearly proportional to the number of objects with a
small constant factor. Once the structure has been generated, it can be maintained incrementally to accommodate new
updates.
In STING+, user-defined triggers are decomposed into sub-triggers associated with individual cells in order to
benefit from the statistical information during trigger evaluation. These sub-triggers monitor composite events in E(s)
as well as the change of E(s). When a composite event in E(s) occurs, the trigger condition C T will be checked. At
the same time, if E(s) changes, some sub-triggers might be removed or added accordingly. As we will see later in
this section, when a region evolves, sub-triggers on this region are modified accordingly. Two pairs of sub-triggers
can be set on a cell to control whether or not insertion/deletion/updates need to be forwarded to higher levels in the
hierarchy by monitoring density and attribute conditions, respectively. These sub-triggers have parameters that are
calculated when they are activated; and these will be described shortly. Sub-triggers set on leaf level cells also keep
track of changes to region(s) satisfying the condition in C T whereas intermediate level sub-triggers only account for
propagation of updates to higher levels when some conditions are met.
Insertion-sub-triggers and deletion-sub-triggers are sub-trigger types (referred to as density-sub-triggers) used to
monitor density changes of a cell, i.e., whether the number of objects in this cell reaches a threshold n t from below
or above, respectively. The minimum number of insertions (or deletions) needed to make the trigger condition true,
referred to as n ins (or n del ), is stored as a parameter. It is obvious that only object insertion/deletion 4 can affect
the condition of these two sub-triggers; a change of attribute value has no effect on object density. Another pair
of sub-triggers (referred to as attribute-sub-triggers) are inside-sub-trigger and outside-sub-trigger. They monitor
whether an aggregate (such as MIN, MAX, AVERAGE, etc.) or a certain percentage (e.g., 80%) of attribute values
enters or leaves a range [r_l, r_u], respectively. The accumulated amount of updates to this attribute necessary to
achieve this goal, Σδattr, is stored as a parameter. Different parameters are used for different types of aggregates.
Object insertions/deletions and updates on attribute values affect these two sub-triggers. Upon satisfaction of sub-
trigger conditions, the actions taken by the inside-sub-trigger and the outside-sub-trigger are the same as those taken
by the insertion-sub-trigger and deletion-sub-trigger, respectively. The main difference between density-sub-triggers
and attribute-sub-triggers is that the associated parameters are different: one focuses on number of objects and the
other accounts for attribute values as well. Therefore, we only explain the management of density-sub-triggers in
detail. Density-sub-triggers can be used to monitor both "dense" region (i.e., the density is above some threshold) and
"sparse" region (i.e., the density is below some threshold). Since the procedures for these two cases are analogous in
spirit, we only elaborate the procedure to monitor a dense region in this section.
Density-sub-triggers can be set on cells at any level except the root. We first discuss the leaf level density-sub-
triggers. Given region(s) R with density at least c, the leaf level density-sub-triggers are used to monitor any change
to the area of R. Events that could affect R are deleting objects from R and inserting objects into the neighborhood
area of R. Note that inserting objects in R or deleting objects from a location outside R does not change R. In turn,
deletion-sub-triggers are always set on those leaf level cells within R whereas insertion-sub-triggers are always set on
those cells which are outside but adjacent to R. Parameters for these sub-triggers are calculated from the statistical
information associated with the leaf cells. They are n_del = n − c × a_b and n_ins = c × a_b − n, respectively, where a_b is
the area of a leaf level cell. Once the condition of a deletion-sub-trigger is satisfied on a leaf level cell (i.e., the density
of this cell goes below c), the following procedure is executed.
1. Replace the deletion-sub-trigger with an insertion-sub-trigger on this cell.
2. If any neighbor cell, x, which has an insertion-sub-trigger is no longer adjacent to R, remove the insertion-sub-
trigger on x.
3. Adjust the area of the involved region. (If a region splits, the area of each subregion has to be recalculated.)
On the other hand, the actions triggered by an insertion-sub-trigger involve the following steps.
1. Replace the insertion-sub-trigger with a deletion-sub-trigger on this cell.
2. Adjust the area of the involved region(s).
3. If any of its neighbors do not have a density-sub-trigger yet, set an insertion-sub-trigger on that neighbor.
4 An update that changes the spatial location of an object can be viewed as a deletion followed by an insertion.
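A possible rendering of the two firing procedures above is sketched below in C; the grid and region helpers (neighbors, adjacent_to_region, adjust_region_area) and the per-cell sub-trigger table are assumed to exist and are named by us purely for illustration.
typedef enum { SUB_NONE, SUB_INSERTION, SUB_DELETION } DensitySubTrigger;

/* Assumed helpers and state (not shown). */
extern int  neighbors(int cell, int out[4]);    /* 4-adjacency, returns count */
extern int  adjacent_to_region(int cell);       /* 1 if adjacent to R         */
extern void adjust_region_area(double delta);   /* may split/merge regions    */
extern DensitySubTrigger sub[];                 /* one entry per leaf cell    */
extern double a_b;                              /* area of a leaf level cell  */

/* Deletion-sub-trigger fired on leaf cell x: its density fell below c. */
void on_deletion_fired(int x)
{
    sub[x] = SUB_INSERTION;                 /* 1. swap the sub-trigger          */
    int nb[4], k = neighbors(x, nb);
    for (int i = 0; i < k; i++)             /* 2. drop insertion-sub-triggers   */
        if (sub[nb[i]] == SUB_INSERTION && !adjacent_to_region(nb[i]))
            sub[nb[i]] = SUB_NONE;          /*    no longer adjacent to R       */
    adjust_region_area(-a_b);               /* 3. shrink the region area        */
}

/* Insertion-sub-trigger fired on leaf cell x: its density reached c. */
void on_insertion_fired(int x)
{
    sub[x] = SUB_DELETION;                  /* 1. swap the sub-trigger          */
    adjust_region_area(+a_b);               /* 2. grow the region area          */
    int nb[4], k = neighbors(x, nb);
    for (int i = 0; i < k; i++)             /* 3. guard the new border cells    */
        if (sub[nb[i]] == SUB_NONE)
            sub[nb[i]] = SUB_INSERTION;
}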
Note that the sub-trigger placed on a cell may vary over time. An example is shown in Figure 4.
Figure 4: Leaf Level Insertion and Deletion Sub-triggers ((a) insertion-sub-triggers and deletion-sub-triggers are set; (b) a deletion-sub-trigger fires and is replaced by an insertion-sub-trigger, insertion-sub-triggers that are no longer adjacent to the region are removed, and the area of the region is decreased by 1.45; (c) an insertion-sub-trigger fires and is replaced by a deletion-sub-trigger, an insertion-sub-trigger is placed on a newly adjacent cell, and the area of the region is increased by 1.13).
In the figure, cells such as C_24, C_42, and C_44 have densities of at least c. These cells are shown in dark shading in the figure. Deletion-
sub-triggers are placed on these cells. The surrounding cells in light shade are those cells that are outside the region
(i.e., whose density is below c) but adjacent to at least one interior boundary cell. Insertion-sub-triggers are placed on
these cells. This is illustrated in Figure 4(a). Figure 4(b) and (c) show the scenario that C 44 's density goes below c,
then C 23 's density goes above c, respectively.
It is obvious that at any time the number of deletion-sub-triggers on the leaf level is the number of leaf level cells
within R, N(R), and the number of leaf level insertion-sub-triggers is between NB(R) and 2 × NB(R) + 2
(Figure 5) in a two dimensional space, where NB(R) is the number of interior boundary cells of R.
Moreover, since NB(R) ≤ N(R), the total number of density-sub-triggers set on R is between
N(R) + NB(R) and 3 × N(R) + 2 (Figure 5).
For each intermediate level cell, an insertion-sub-trigger (deletion-sub-trigger) is set iff one of its child cells has
an insertion-sub-trigger (deletion-sub-trigger). The value of the parameters n ins and n del are set to be the minimum
of that of its children cells. The action of a sub-trigger on an intermediate level cell is to propagate the update held at
this cell to its children at next higher level and then recalculate the sub-trigger parameters. An insertion- (deletion-)
sub-trigger is removed if all insertion- (deletion-) sub-triggers on its children are removed.
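The intermediate-level rule just described can be sketched as follows; this C fragment is our own illustration (the node layout and function names are assumptions), showing only the insertion-sub-trigger parameter n_ins.
/* Intermediate-level cell: holds suspended updates and the minimum
 * n_ins over those children that carry an insertion-sub-trigger.    */
typedef struct Node {
    struct Node *child;       /* array of children, NULL at the leaf level */
    int          num_children;
    int          has_ins_sub; /* 1 if an insertion-sub-trigger is set here */
    long         n_ins;       /* min over children carrying the sub-trigger */
    long         suspended;   /* number of insertions held at this cell     */
} Node;

/* Recompute this cell's n_ins from its children and decide whether the
 * insertion-sub-trigger remains set (it is removed when no child has one). */
void recompute(Node *c)
{
    c->has_ins_sub = 0;
    c->n_ins = 0;
    for (int i = 0; i < c->num_children; i++) {
        Node *ch = &c->child[i];
        if (!ch->has_ins_sub) continue;
        if (!c->has_ins_sub || ch->n_ins < c->n_ins) c->n_ins = ch->n_ins;
        c->has_ins_sub = 1;
    }
}

/* Hold insertions at this cell until the sub-trigger condition could be
 * reached, then forward them downward and refresh the parameters.        */
void on_insert(Node *c)
{
    c->suspended++;
    if (c->has_ins_sub && c->suspended >= c->n_ins) {
        /* forward the suspended updates to the children here */
        c->suspended = 0;
        recompute(c);
    }
}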
Figure 5: Number of Sub-triggers (one example region has 12 leaf level deletion-sub-triggers and 26 leaf level insertion-sub-triggers; another has 16 leaf level deletion-sub-triggers).
Moreover, STING+ allows two additional types of sub-triggers expand-sub-trigger and shrink-sub-trigger to be
set only on leaf level cells to track expansion and shrinking of region(s) Q from a given time, t 1 , respectively. Two
variables A_shr and A_exp, which are updated by the shrink-sub-triggers and expand-sub-triggers respectively, are used
to keep track of the areas of Q − Q′ and Q′ − Q, where Q′ is the region(s) that evolved
from Q. The initial values of A_shr and A_exp are zero at time t_1. Shrink-sub-triggers are set on leaf level cells within Q
at time t_1 since these cells have the potential to leave Q and hence cause Q to shrink. Once the shrink-sub-triggers are
set, they will never be removed or replaced by expand-sub-triggers during the active period. Also, no new shrink-sub-
trigger will be activated on other cells after t_1. This enables STING+ to track the shrinkage of an evolving region Q′ (at
the current time) against the original region Q (at time t_1). Expand-sub-triggers are used to monitor the expansion
of Q′ against Q. Therefore, these expand-sub-triggers are set on cells which are outside the original region(s) Q but
which have the potential to join the region and cause it to expand. At time t 1 , expand-sub-triggers are placed on those
cells outside but adjacent to Q. Unlike shrink-sub-triggers, expand-sub-triggers can be set and removed dynamically.
When a leaf level cell y with a shrink-sub-trigger leaves the region(s), the value of A shr is increased. Expand-
sub-triggers on y's neighbors which are not adjacent to the region any longer are removed. However, y still keeps the
shrink-sub-trigger. Later, if y again joins the region, the value of A shr is decreased and expand-sub-triggers are set on
its neighbors which do not have shrink- or expand-sub-triggers. When a leaf level cell x with an expand-sub-trigger
joins the region, the value of A exp is increased and expand-sub-triggers are set on all its neighbors that do not have
shrink/expand-sub-triggers. In the case that a leaf level cell with expand-sub-trigger leaves the region again, the value
of A exp is decreased and the expand-sub-triggers on its neighbors which have expand-sub-trigger but are not adjacent
to the region any longer will be removed.
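The bookkeeping just described can be written down compactly; the C sketch below is our own illustration of the state transitions for a single leaf cell (helper names such as set_expand_on_free_neighbors are assumptions).
typedef enum { T_NONE, T_SHRINK, T_EXPAND } EvolveSubTrigger;

extern EvolveSubTrigger evo[];              /* one entry per leaf cell        */
extern double A_shr, A_exp, a_b;            /* tracked areas, leaf cell area  */
extern void set_expand_on_free_neighbors(int cell);    /* skip cells that     */
extern void remove_stale_expand_neighbors(int cell);   /* already carry one   */

/* Cell y (carrying a shrink-sub-trigger) leaves or re-joins the region. */
void shrink_cell_changed(int y, int now_inside)
{
    if (!now_inside) { A_shr += a_b; remove_stale_expand_neighbors(y); }
    else             { A_shr -= a_b; set_expand_on_free_neighbors(y);  }
    /* y keeps its shrink-sub-trigger in both cases. */
}

/* Cell x (carrying an expand-sub-trigger) joins or leaves the region. */
void expand_cell_changed(int x, int now_inside)
{
    if (now_inside)  { A_exp += a_b; set_expand_on_free_neighbors(x);  }
    else             { A_exp -= a_b; remove_stale_expand_neighbors(x); }
}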
An example is shown in Figure 6. The solid bold line contours the region Q. Shrink-sub-triggers and expand-sub-
triggers are represented by dark shade and light shade on a cell respectively.
Figure 6: Shrink and Expand Sub-triggers ((a) the situation at time t_1; (b) a cell leaves the region and the region splits: A_shr increases and an expand-sub-trigger is removed; (c) a cell joins the region and the region merges: A_exp increases and an expand-sub-trigger is set; (d) one cell joins back the region, so A_shr decreases and an expand-sub-trigger is set, while another cell leaves the region, so A_exp decreases and an expand-sub-trigger is removed).
Since more than one trigger defined by users may co-exist at the same time, there may be more than one sub-
trigger on a cell. All insertion/deletion/updates forwarded by its parent cell are collected by this cell until one of the
sub-triggers set on this cell becomes valid or in an extreme case there is no space to hold more suspended updates.
From our experiments, the storage overhead of sub-triggers in STING+ is 10% compared to STING on average.
5 Trigger Evaluation
In order to handle triggers efficiently, we evaluate the trigger condition incrementally when an update occurs. Triggers
are decomposed into sub-triggers associated with cells in the hierarchy. Updates to the database will not be forwarded
to higher levels in the hierarchy until certain conditions are met. This is based on the observation that the effect of a
single update (or small number of updates) usually is not enough to make the trigger condition valid. It is therefore
not necessary to apply the update immediately and re-evaluate the trigger condition every time an update occurs. If
the database is not used to serve other applications, it is more efficient to postpone updates to the database 5 and also
evaluations of trigger conditions until it is possible that the accumulative effect of updates collected can make the
trigger condition become true. Sub-triggers in STING+ are used for this purpose. The associated parameters are
calculated based on statistical information from the current database and the condition specified in triggers. This
enables the system to determine whether a trigger condition can transition to true by examining only a few cells.
The usually complicated condition specified in a user-defined trigger makes trigger decomposition crucial to efficient
evaluation. STING+ employs a "step-by-step" strategy based on the observation that a trigger condition is a
conjunction of predicates P_1, ..., P_n and cannot become true if one predicate is false 6. It is therefore not
necessary to evaluate all predicates at the same time. Instead, predicates can be evaluated in a certain order: the ith
predicate is tested only when all previous predicates are true. Moreover, since the costs for evaluating different predicates
may be different, the order should be chosen in such a way that the total cost of evaluation is minimum. That is,
P_1, ..., P_n should be evaluated in the order P_k1, P_k2, ..., P_kn that minimizes Σ_i t_i × cost(P_ki | P_k1, ..., P_k(i−1)),
where (k_1, ..., k_n) is a permutation of (1, ..., n), cost(X) and
cost(Y | X) are the average cost to evaluate predicate X and that of Y given X is true, respectively, and t_i is the
number of times P_ki needs to be evaluated. In particular, the trigger evaluation process in STING+ employs the order
{location, density condition, attribute condition, ...} and is therefore divided into phases, one for each predicate. The
reason for choosing this order is that the location only needs to be evaluated once and the cost is at most the cost to
answer a region query in STING. It can be regarded as constant in the trigger evaluation cost function. Moreover,
if the location is fixed, unnecessary sub-triggers set on cells outside the location can be avoided and hence save the
evaluation cost of other predicates. Therefore, STING+ always computes the location first. Evaluation of both the
density condition and the attribute condition involves activation of sub-triggers on cells. Looking ahead, sub-triggers
set during an earlier phase will exist longer than those set in a later phase. Thus, it is better to first evaluate the
predicate that takes less time to handle. It is shown in a later section that attribute-sub-triggers are more costly than
density-sub-triggers. Therefore, STING+ evaluates the density condition before the attribute condition.
The first three phases are common for all four types of triggers. The fourth phase is used to account for the
SELECT clause and hence is different for different trigger types.
Given the following trigger,
ON map
WHEN SELECT <select-clause>
WHERE DENSITY IN RANGE (c, ∞)
AND <attr-func> IN RANGE (a, b)
AND AREA IN RANGE (A, ∞)
LOCATION <location>
DO alarm
5 Applying updates as a batch will save disk I/O if the structure is stored on disk.
6 Triggers with disjunctive conditions can always be rewritten to an equivalent set of triggers with only conjunctive conditions.
STING+ evaluates the LOCATION clause in the first phase. !location? specifies the fixed region(s) on which the
trigger is defined. There are two cases: the first case is that !location? is a list of names and/or polygons whereas in
the second case the !location? is another region query. In the second case, we need to find the region by processing
this region query in the same manner as done in STING [Wan97].
Once we determine the spatial area S over which the trigger is defined, the evaluation process enters the second
phase where the DENSITY clause is evaluated. The evaluation of attribute conditions is delayed until the DENSITY
condition is satisfied. For each leaf level cell in S, we calculate the number of objects which need to be inserted in
order to achieve the required density in this cell, which is max{0, c × a_b − n}. Then, we calculate the minimum
number of objects t needed overall in order to have ⌈A/a_b⌉ leaf level cells with density at least c 7. The density-trigger
is set to monitor whether there is a region satisfying both the DENSITY and the AREA conditions. A density-trigger is
set on the root in such a way that updates to higher levels are postponed until at least t insertions have been submitted.
Immediately after applying a set of t insertions, the value of t is recalculated. This process continues until t is below a
threshold t_0. Then, density-sub-triggers (i.e., insertion-sub-triggers and deletion-sub-triggers) are set on those regions
with density c and area at least A_t to monitor the area change, where A_t (which is smaller than A) is a threshold set by the
user or the system. All the remaining space is still accounted for by the density-trigger at the root. Note that the large
regions and small regions are handled differently.
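As an illustration of how t could be computed (footnote 7 mentions a heap), the C sketch below simply selects the ⌈A/a_b⌉ smallest per-cell deficits max{0, c × a_b − n}; the function name and the use of qsort instead of an explicit heap are our own simplifications, not the prototype's code.
#include <stdlib.h>
#include <math.h>

static int cmp_long(const void *a, const void *b) {
    long x = *(const long *)a, y = *(const long *)b;
    return (x > y) - (x < y);
}

/* Minimum total number of insertions t needed so that at least ceil(A/a_b)
 * leaf cells of S reach density c; n[i] is the object count of the i-th cell. */
long min_insertions(const long *n, int num_cells, double c, double a_b, double A)
{
    int need = (int)ceil(A / a_b);
    if (need > num_cells) return -1;                 /* S cannot qualify      */
    long *deficit = malloc(num_cells * sizeof(long));
    for (int i = 0; i < num_cells; i++) {
        double d = c * a_b - (double)n[i];           /* max{0, c*a_b - n},    */
        deficit[i] = d > 0.0 ? (long)ceil(d) : 0;    /* rounded up to objects */
    }
    qsort(deficit, num_cells, sizeof(long), cmp_long);
    long t = 0;
    for (int i = 0; i < need; i++) t += deficit[i];  /* cheapest cells first  */
    free(deficit);
    return t;
}
A heap, as in the paper, avoids re-sorting when t is recalculated after each batch of insertions.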
The above procedure continues until there is at least one region with density c and area at least A. Let R be the set
of such qualified regions. The trigger process then enters the third phase on R. Note that the remaining space is still
in the second phase.
7 A heap is used to calculate t.
The third phase is analogous to the second phase except that the attribute attr is considered instead of the density.
In this phase, the accumulated amount of updates (on attr value) necessary to satisfy the attribute condition is calculated
for each leaf level cell within R. For example, if the <attr-func> defined by the user is AVERAGE(attr), then
the accumulated amount of updates u must satisfy (a − m) × n + a × δn ≤ u ≤ (b − m) × n + b × δn, where δn is the change
in the number of objects. Then we calculate the total accumulated amount of updates on attr, t_u(attr),
needed in order to have ⌈A/a_b⌉ cells within R satisfy the attribute condition. An aggregate_attr-trigger is set at the
root to suspend all updates until t_u(attr) amount of updates on attr occurs. t_u(attr) is recalculated every time when
all updates are forwarded to leaf level. This procedure continues until t u (attr) is below a threshold. Attribute-sub-
triggers are set on those regions satisfying the attribute condition and with area at least A t within R to monitor the
area change until there is a region satisfying both attribute condition with area at least A within R. Such qualified
region(s), referred to as Q, will enter the fourth phase.
Note that during this phase, all sub-triggers of the previous phase are still active. If the density condition is violated
again after a region enters the third phase, all attribute-sub-triggers placed on this region are suspended until this region
becomes qualified again.
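For the AVERAGE(attr) case above, the minimum accumulated update needed by a single cell can be computed directly from its statistics; the small C function below is our own illustration and ignores δn, i.e., it assumes the number of objects in the cell stays fixed.
/* Minimum total change u to the attribute values of a cell (n objects,
 * mean m) so that AVERAGE(attr) enters [a, b], assuming delta_n = 0.
 * With a fixed count the new average is (m*n + u)/n, hence
 * (a - m)*n <= u <= (b - m)*n.                                          */
double min_avg_update(long n, double m, double a, double b)
{
    double lo = (a - m) * n;
    double hi = (b - m) * n;
    if (lo <= 0.0 && hi >= 0.0) return 0.0;    /* average already in range */
    return lo > 0.0 ? lo : hi;                 /* smallest-magnitude shift */
}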
The fourth phase accounts for the SELECT clause and hence is different for each type of trigger.
5.1 Region-trigger
For region-triggers, no additional requirement is specified in the SELECT clause. Therefore, this phase is omitted
when processing a region-trigger.
5.2 Attribute-trigger
Since the attribute-trigger is used to detect certain attribute patterns, the SELECT clause is usually defined as "SELECT
<attr2-func> IN RANGE (e, f)". The minimum accumulated amount of updates overall on attr2, t_u(attr2), needed
to cause the condition to become true is calculated in the same manner as in the third phase. An aggregate_attr2-trigger
is set at the root to suspend all updates until t_u(attr2) amount of updates on attr2 occurs. t_u(attr2) is recalculated
every time updates are applied to higher levels. The procedure continues until t_u(attr2) reaches zero.
5.3 Region-ffi-trigger
There are two common ways to quantify a region change. One is to use size difference and the other is based on boundary
distance [Hor97]. STING+ supports both and enables users to choose their favorite measure. If SIZE(REGION)
is used, the region change will be measured using area of the symmetric difference between the original region and
the current region. For example, in Figure 7(a), the original region is denoted by a solid contour whereas the current
region is denoted by dashed contour. The symmetric difference between these two regions is shown as the shaded
region whose area is used as a measure of change.
Figure 7: Size Difference and Boundary Distance of Region ((a) symmetric difference; (b) and (c) boundary distance for leaf cells of size c_x × c_y).
Another measure of region difference is the boundary distance, which is defined as the smallest number dis such
that every point on the boundary of the evolved region is within a distance dis of some point on the boundary of
the original region and vice versa. In Figure 7(b)(c), assume that the size of each leaf level cell is c_x × c_y and
Euclidean distance is used. Then the boundary distance in Figure 7(b) is √(c_x² + c_y²). However, the boundary distance
in Figure 7(c) is 2 × c_y because points on some portion of the boundary of the original region (denoted by the bold line in
Figure 7(c)) are at least 2 × c_y away from any point on the boundary of the evolved region, even though
every point on the boundary of the evolved region is within √(c_x² + c_y²) of some point on the boundary of the
original region.
Suppose that the SELECT clause is specified as "SELECT [SIZE | BOUNDARY](REGION) CHANGE RANGE
(e, f)" in the trigger. Shrink-sub-triggers and expand-sub-triggers are set on Q to monitor the evolution of Q via either
the symmetric difference or the boundary distance until time t_2 or A_shr + A_exp reaches e.
Besides the above two measures, STING+ also supports pure area change without concern for the region move-
ment. In this case we only take into account the area difference of the original region and the region to which it evolves.
In this case, the change could be either positive or negative depending on whether the region expands or shrinks in
size.
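As a simple illustration of the size-difference measure on the grid (our own code, not from the paper), the symmetric difference between the original and the evolved region can be computed from two boolean cell masks:
/* Area of the symmetric difference between the original and the evolved
 * region, given one boolean flag per leaf cell and the leaf cell area a_b. */
double symmetric_difference_area(const char *orig_region,
                                 const char *evolved_region,
                                 int num_cells, double a_b)
{
    long changed = 0;
    for (int i = 0; i < num_cells; i++)
        if (orig_region[i] != evolved_region[i])  /* cell is in exactly one region */
            changed++;
    return changed * a_b;
}
STING+ itself maintains this quantity incrementally as A_shr + A_exp instead of recomputing it over all cells.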
5.4 Attribute-δ-trigger
Each attribute-δ-trigger can always be rewritten as equivalent attribute triggers. For example, a condition of the form
AVERAGE(attr) INCREASE RANGE (e, f) is equivalent to
AVERAGE(attr) IN RANGE (AVERAGE_time1(attr) + e, AVERAGE_time1(attr) + f),
where AVERAGE_time1(attr) is the average value of attr at time_1. When an attribute-δ-trigger is submitted, it is first translated
into an equivalent attribute trigger and then this attribute trigger is processed by STING+ as usual. If the keyword
CHANGE is used instead of INCREASE or DECREASE, the attribute-δ-trigger is equivalent to two attribute-δ-
triggers: one for INCREASE and one for DECREASE. So, it is transformed into two attribute triggers and processed
by STING+ simultaneously. Therefore, the fourth phase of an attribute-δ-trigger is the same as that of an attribute-trigger.
6 Experimental Results
We implemented a prototype of STING+ in C and all experiments were performed on a SPARC10 workstation with
SUNOS 5.5 operating system and 208 MB main memory. In this section, we will analyze three aspects of the performance
of STING+: the average number of CPU cycles for handling a sub-trigger, the average number of sub-triggers
for a region, and the average update costs in STING+.
In all performance measurements, we have 200,000 synthetically generated 2-dimensional data points. There
are three kinds of data location distributions, i.e., uniform distribution, normal distribution with small variance, and
normal distribution with large variance. There are also three kinds of attribute value distributions: uniform, normal
with small variance, and normal with large variance. For each combination of location and attribute value distribution,
we generate two sets of data, each with a different random number generator. Therefore, we have eighteen sets of data.
We chose twelve triggers (three in each category) to test the performance. We insert a set of 10,000 randomly selected
data points with different attribute values, then delete 10,000 randomly selected data points, and update the attribute
values of another 10,000 data points. In these experiments, there are seven levels in STING+, and the bottom level
consists of 4096 cells.
Since STING+ suspends updates and stores the parameters of each sub-trigger, there is some storage overhead.
After generating STING+ structures, we found that the average size of STING+ structure is 0.75MB and overhead of
STING+ is about 10% compared to STING.
6.1 Cost for Handling a Sub-Trigger
There are six types of sub-triggers, and the costs for handling them are different. Therefore, we will analyze them
separately. Table 2 shows the cost of handling a sub-trigger. Since the only action for an intermediate level sub-trigger
is forwarding the update down or suspending it, the CPU cycles consumed for this function is minimal. In addition, the
actions taken by all four types of sub-triggers are the same, thus, the costs for each of intermediate level sub-triggers
are the same. (Expand- and shrink- sub-triggers are set only on the leaf level.)
However, the leaf level sub-triggers are much more expensive (excluding expand-sub-trigger and shrink-sub-
trigger) as in Table 2 because it has to remove or set other sub-triggers and recalculate the parameters. Furthermore,
the costs for the different types of sub-triggers are significantly different. It is more expensive to handle a leaf level
insertion-sub-trigger than a leaf level deletion-sub-trigger because new leaf level sub-triggers on neighboring cells
are created when an insertion-sub-trigger is triggered, while no new sub-triggers will be created on the neighboring
cells when a leaf level deletion-sub-trigger is triggered. The cost of creating a new sub-trigger is much larger than
that of removing a sub-trigger. For the same reason, handling a leaf level inside-sub-trigger is much more expensive
than handling a leaf level outside-sub-trigger. Furthermore, the attribute-sub-triggers are more expensive than
density-sub-triggers because calculating the parameters of attribute-sub-triggers is more expensive than that of density-
sub-triggers. Expand- and shrink-sub-triggers are much simpler than attribute-sub-triggers and density-sub-triggers,
and therefore take much less time to handle.
During an update, the overall number of intermediate level sub-triggers involved is at most the height of the
pyramid. And it remains more or less constant. The largest cost is the evaluation of leaf level sub-triggers.
However, for each update, at most one leaf level sub-trigger is evaluated. Therefore, the overall cost of handling
sub-triggers in an update is small.
                                   insertion-  deletion-  inside-  outside-  expand-  shrink-
Intermediate Level Sub-triggers       3812       3803      3789     3807      N/A      N/A
Leaf Level Sub-triggers               8055       5775     11212     8164     2126     2087
Table 2: Average CPU cycles for handling each type of sub-trigger
6.2 Average Number of Leaf Level Sub-Triggers for a Region
The number of sub-triggers created for a trigger is not only dependent on the number of regions and the size of regions
but is also related to the shape of regions as well. Since the cost of handling a leaf level sub-trigger is much higher
than that of an intermediate level sub-trigger, we focus on the average number of leaf level sub-triggers created for
a given region. For a given trigger, the number of significantly large regions is highly dependent on the data and the
trigger condition so that we will not focus on this aspect.
Figure 8: Number of Leaf Level Sub-triggers for a Given Size Region (the three curves show the maximum, average, and minimum number of leaf level sub-triggers as a function of region size, in number of leaf level cells).
For regions that have similar area and shape we found that the number of leaf level sub-triggers used for different
categories of triggers is similar. As a result, we do not distinguish among the four categories of triggers. The number
of large regions varies significantly from one trigger to another and from one data set to another. The total number of
sub-triggers for all these tests is about three thousand. We group regions according to their size. Figure 8 illustrates
the number of leaf level sub-triggers placed on a given size region. The top line, middle line, and bottom line indicate
the maximum number, average number, and minimum number of leaf level sub-triggers for a given range of region size. The
variation of the number of sub-triggers for a given size region is due to different shapes of the regions. We provided
the theoretical bounds on the number of leaf level sub-triggers. From Figure 8, we can see that the average number of
sub-triggers is much closer to the minimum number because it is very rare that a region will have the shape that yields
the maximum number of sub-triggers (e.g., a flat shape (Figure 5(a))).
6.3 Update Cost
Cost varies for each update. If at the time the root is accessed, it can be determined that an update can not possibly
make the trigger condition valid, then the update will be suspended at the root. The cost of this type of update is very
minimal. However, when an update comes in to the root and it is deemed that the condition in the trigger may be
satisfied, all suspended updates will be applied to a higher level. Then this type of update may be very expensive. We
compare the average update cost of STING+ to the average update cost of STING. The result is that STING+ only
consumes slightly more CPU cycles than STING on average for an update. Furthermore, since STING+ can bundle
updates and apply them to the database as a batch, it can potentially reduce the number of I/O's.
When an update comes in, STING has to apply it to the leaf level cells and update all parameters for all relevant
cells. However, STING+ is much more flexible and does not apply updates unless it is deemed that this update
may cause the trigger condition to become true. In addition, if the resources become available, e.g., CPU becomes
idle, STING+ can then apply all suspended updates to the bottom level cells and make proper changes to all relevant
parameters. This can reduce the effective cost of updates by avoiding consuming CPU cycles and I/O bandwidth when
the system is highly loaded. We found from experiments that the average overhead incurred by trigger maintenance is
between 10% and 15%.
As mentioned before, one alternative to STING+ is to query the data periodically until a pattern appears. One
natural question to ask is what is the maximum frequency of submitting queries to STING so that it consumes
fewer CPU cycles than STING+ overall. We choose the number of updates (including insertions, deletions, and attribute
updates) as the measurement of the period. Then the break-even point of the period should be
Q_sting / (U_sting+ − U_sting),
where Q_sting is the average cost of answering a query in STING, and U_sting+ and U_sting are the average costs of an update in
STING+ and STING, respectively. From our experiments, it is only profitable for STING if the period is set to be
larger than 4000 updates. However, this could raise problems such as severe delay and missing of the pattern which
appears and disappears quickly. Note that STING+ can produce the result without any delay. In order to achieve the
same goal by periodic query re-submission on STING, the query condition has to be checked for each update. In such
a case, STING+ achieves three orders of magnitude improvement over it.
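In code form (an illustrative helper of ours, not from the prototype), the break-even period in number of updates is simply:
/* Break-even period (in updates) beyond which periodic query re-submission
 * to STING becomes cheaper than trigger maintenance in STING+.             */
double break_even_period(double q_sting, double u_stingplus, double u_sting)
{
    return q_sting / (u_stingplus - u_sting);
}
With the measured average costs this comes out to roughly 4000 updates, the figure reported above.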
7 Discussion: Integration of Query Processing with Trigger Evaluation
In previous sections, we focused on efficient evaluation of trigger condition in a spatial database environment using
the sub-trigger technique. Once the condition is satisfied, the action defined in the trigger is executed. This action
might be a pure system procedure call or a human interactive process. In many cases, a spatial data mining query
is specified as the trigger action, which may require a significant amount of computing resources. By doing so, the
user intends to acquire knowledge related to certain aspects of the database once some given event occurs. In many
real-time applications, the result of such query may be required to be returned to the user immediately or within some
short period. As a result, performing the query only upon satisfaction of the trigger condition would not be able to meet
such a deadline. In this case, some degree of precomputation of the query (before the trigger condition is fulfilled)
becomes necessary.
For simplicity of explanation, we assume that the entire system is dedicated to trigger processing and that at most
one trigger is processed at any time 8. Assume that it takes t_Q time to process a query Q on the entire database but the
result of the query has to be returned within time t_R upon the satisfaction of the trigger condition, where t_Q >> t_R. In
order to meet this deadline, we have to begin the query evaluation before the trigger condition becomes true. If the size
of the database does not change significantly over time, we can regard t_Q as a constant.
8 Interested readers may refer to [Wan98] for an analysis of more complicated scenarios.
The spatial locality property (i.e., the effect of an update is usually local to its neighborhood) provides the foundation for achieving such
a goal. This property indicates that an obsolete result of a query is still "valid" partially if only a small portion of
the data has been updated. For a given query Q and a database D, the query result R is essentially a function of Q and D_T,
where D_T is the database status at time T. In the remainder of this section, we use R_T to denote the query result
corresponding to the database status at time T. Let τ be the average time to integrate a new update with an obsolete
query result. If an obsolete result R_T1 exists, then at a later time T_2, unless there are more than t_Q / τ
new updates, it
would be more efficient to use R_T1 as the basis to generate the new result R_T2 than to process the query from scratch.
Based on this observation, we devise the following twofold algorithm:
1. Before the trigger condition is satisfied (say, at time TQ ), process the query on the entire database to obtain the
result R TQ 9 .
2. Once the trigger condition becomes satisfied (say, at time T s ), process updates posted from time TQ to time T s
and combine with R TQ to generate the query result R Ts 10 .
This is illustrated in Figure 9(a). The first step takes t_Q time to finish whereas the time consumed by the second step,
referred to as t_U, depends on the number of updates that occur within the time interval from T_Q to T_s. Assume that updates
occur at a rate λ. Then, the number of updates posted between T_Q and T_s, referred to as N_U, is λ × (T_s − T_Q), and t_U = τ × N_U.
We can see that the earlier we execute the query, the more updates we need to accommodate in the second step.
Therefore, as far as we can still guarantee to return R_Ts by the time T_s + t_R, we prefer to execute the query as late as
possible in order to minimize the computing resources consumed. In other words, we want to maximize T_Q such that
T_Q + t_Q + τ × λ × (T_s − T_Q) ≤ T_s + t_R still holds. We have
T_Qmax = T_s − (t_Q − t_R) / (1 − λ × τ),
which is the optimal value of T_Q (Figure 9(b)).
However, since in most cases we are unable to obtain the exact value of T s ahead of time, it is very difficult to
compute the optimal value of TQ . We propose a heuristic to estimate TQmax . Since the sub-trigger technique is used
to evaluate the trigger condition, the minimum number of updates (referred to as Nmin ) to make the trigger condition
9 Note that R_TQ will become obsolete if some new update occurs after T_Q.
10 Note that this step has to be done within time t_R.
Figure 9: Time Topology for Query Processing ((a) general scenario; (b) optimal scenario).
true can be obtained at any time as a byproduct. It is obvious that, at any time T, we can estimate the earliest time the
trigger condition can become true as
T_smin = T + N_min / λ.
Then, T is a feasible choice for T_Q only if
N_min ≥ (t_Q − t_R) / (1 − λ × τ) × λ.    (1)
As time elapses (i.e., T increases), Nmin changes as well 11 . Our objective is to determine whether the current time T
is the estimated T_Qmax, referred to as T̂_Qmax. Obviously, we want T̂_Qmax to be the largest possible value of T
which still satisfies the above inequality. Otherwise, there would be a chance that we might miss the deadline. In fact,
T̂_Qmax is the time when both of the following conditions are true.
1. Inequality 1 still holds.
2. A future update might cause Nmin to decrease, which in turn, would violate Inequality 1.
11 Note that the value of Nmin can either increase or decrease as a result of the occurrence of some update along the time.
Therefore, we have
(t_Q − t_R) / (1 − λ × τ) × λ ≤ N_min < (t_Q − t_R) / (1 − λ × τ) × λ + 1.
Once N_min falls into this interval, we begin to process the query. Intuitively, when N_min remains greater than or equal
to (t_Q − t_R) / (1 − λ × τ) × λ + 1, we don't have to start the query processing. However, if N_min < (t_Q − t_R) / (1 − λ × τ) × λ, it is possible that we
would not be able to obtain the result in time.
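The start-the-query test above is easy to state in code; the following C helper is our own illustration of the decision rule, with N_min, λ, τ, t_Q, and t_R as defined in the text.
/* Start query precomputation when N_min has dropped into the interval
 * [lambda*(t_Q - t_R)/(1 - lambda*tau), lambda*(t_Q - t_R)/(1 - lambda*tau) + 1).
 * Assumes lambda*tau < 1, i.e., updates can be integrated faster than they arrive. */
int should_start_query(double n_min, double lambda, double tau,
                       double t_q, double t_r)
{
    double threshold = lambda * (t_q - t_r) / (1.0 - lambda * tau);
    return n_min >= threshold && n_min < threshold + 1.0;
}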
Because we use N_min and T_smin during the estimation, it might be the case that N_U > N_min and hence T_s >
T_smin. This happens when some updates (occurring after T_Q) do not contribute to the fulfillment of the trigger condition.
Indeed, N_U might be very large in some cases. Note that we can process at most t_R / τ updates after the trigger condition
becomes true. If N_U exceeds this limit, we will not be able to generate R_Ts in time. To avoid such a scenario, we
have to guarantee that at most t_R / τ updates need to be examined to compute R_Ts at time T_s. A naive solution is that,
after the initial query result R_TQ is generated, we upgrade the previous result to an up-to-date result R_T'Q for every
t_R / τ new updates, where T'_Q is the time every (t_R / τ)-th update occurs. So, at any time from T_Q to T_s, the number of
outstanding updates 12 is at most t_R / τ. This provides a sufficient (but not necessary) condition for our requirement. An
alternative solution would be to estimate the time we need to generate an up-to-date result to reflect recent updates via
a similar strategy as we estimate T_Q. Let N_o and T'_Qmax be the number of outstanding updates accumulated so far and
the time by which we have to generate the new result, respectively. Let t'_Q be the time consumed to perform this procedure. Note
that we can choose to either process the entire database from scratch (which requires t_Q time) or update the previous
result to reflect recent data changes (which requires N_o × τ time), depending on which one is more efficient. That is,
t'_Q = min{t_Q, N_o × τ}. Then, at any time T (T > T_Q), we have
N_min ≥ (t'_Q − t_R) / (1 − λ × τ) × λ,    (2)
and we want to determine whether T is an optimal estimation of T'_Qmax. By a similar analysis as presented before, we
obtain that every time N_o increases to the point where Inequality 2 would otherwise be violated,
we need to upgrade the previous result to an up-to-date version. This process continues until the trigger condition is
satisfied. Figure 10 illustrates a general scenario of this process.
8 Conclusion
In this paper, we proposed a new approach to active spatial data mining, called STING+. In STING+, users can define
triggers to monitor the change of spatial data (also non-spatial attribute values) efficiently. If certain changes occur,
An outstanding update is an update posted after the previous result was generated.
Figure 10: Heuristic Strategy.
the trigger will be fired immediately and actions defined by the user will be taken. Moreover, this approach can be
easily extended to handle reposted queries efficiently.
STING+ employs a set of sub-triggers to monitor the change of the data. It creates and removes these sub-triggers
dynamically according to the insertions/deletions/updates in the system. Furthermore, STING+ evaluates these sub-
triggers in a certain order to yield the least cost. It is also shown via experiments that STING+ has insignificant
overhead.
--R
Active data mining.
Improving adaptable similarity query processing by using approximations.
A density-based algorithm for discovering clusters in large spatial databases with noise
Spatial data mining: a database approach.
Incremental clustering for mining in a data warehousing environment.
Algorithms for characterization and trend detection in spatial databases.
Efficient algorithms for discovering frequent sets in incremental databases.
Continuous change in spatial regions.
The DEDALE system for complex spatial queries.
GeoMiner: a system prototype for spatial data mining.
Qualitative representation of change.
Finding aggregate proximity relationships and commonalities in spatial data mining.
Extraction of spatial proximity patterns by concept generalization.
Finding boundary shape matching relationships in spatial data.
Spatial data mining: progress and challenges.
Efficient and effective clustering methods for spatial data mining.
STING: a statistical information grid approach to spatial data mining.
PK-tree: a spatial index structure for high dimensional point data.
Active Database Systems
MultiMediaMiner: a system prototype for multimedia data mining.
Advanced Database Systems
BIRCH: an efficient data clustering method for very large databases.
--TR | active data mining;spatial databases;incremental trigger evaluation;spatial data mining |
628088 | A Database Approach for Modeling and Querying Video Data. | AbstractIndexing video data is essential for providing content-based access. In this paper, we consider how database technology can offer an integrated framework for modeling and querying video data. As many concerns in video (e.g., modeling and querying) are also found in databases, databases provide an interesting angle to attack many of the problems. From a video applications perspective, database systems provide a nice basis for future video systems. More generally, database research will provide solutions to many video issues, even if these are partial or fragmented. From a database perspective, video applications provide beautiful challenges. Next generation database systems will need to provide support for multimedia data (e.g., image, video, audio). These data types require new techniques for their management (i.e., storing, modeling, querying, etc.). Hence, new solutions are significant. This paper develops a data model and a rule-based query language for video content-based indexing and retrieval. The data model is designed around the object and constraint paradigms. A video sequence is split into a set of fragments. Each fragment can be analyzed to extract the information (symbolic descriptions) of interest that can be put into a database. This database can then be searched to find information of interest. Two types of information are considered: 1) the entities (objects) of interest in the domain of a video sequence, and 2) video frames which contain these entities. To represent this information, our data model allows facts as well as objects and constraints. The model consists of two layers: 1) Feature & Content Layer (or Audiovisual Layer), intended to contain video visual features such as colors, contours, etc., 2) Semantic Layer, which provides the (conceptual) content dimension of videos. We present a declarative, rule-based, constraint query language that can be used to infer relationships about information represented in the model. Queries can refer to the form dimension (i.e., information of the Feature & Content Layer), to the content dimension (i.e., information of the Semantic Layer), or to both. A program of the language is a rule-based system formalizing our knowledge of a video target application and it can also be considered as a (deductive) video database on its own right. The language has both a clear declarative and operational semantics. | Introduction
With recent progress in compression technology, it is
possible for computers to store huge amounts of pictures, audio
and even video. If such media are widely used in today's
communication (e.g., in the form of home movies, education
and training, scholarly research, and corporate enterprise
solutions), efficient computer exploitation is still lack-
ing. Many databases should be created to face the increasing
development of advanced applications, such as video
on demand, video/visual/multimedia databases, monitoring,
virtual reality, internet video, interactive TV, video conferencing
and video email, etc. Though only a partial list, these
advanced applications need to integrate video data for complex
manipulations.
Video analysis and content retrieval based on semantics
require multi-disciplinary research effort in areas such
as computer vision, image processing, data compression,
databases, information systems, etc. (see [26]). Therefore,
video data management poses special challenges which
call for new techniques allowing an easy development of
applications. Facilities should be available for users to
view video material in a non-sequential manner, to navigate
through sequences, to build new sequences from oth-
ers, etc. To facilitate retrieval, all useful semantic objects
and their features appearing in the video must be appropriately
indexed. The use of keywords or free text [28] to describe
the necessary semantic objects is not sufficient [9].
Additional techniques are needed. As stated in [9], the issues
that need to be addressed are: (1) the representation
of video information in a form that facilitates retrieval and
interaction, (2) the organization of this information for efficient
manipulation, and (3) the user-friendly presentation
of the retrieved video sequences. Being able to derive an
adequate content description from a video, however, does
not guarantee a satisfactory retrieval effectiveness; it is only
a necessary condition to this end. It is mandatory that the video
data model be powerful enough to allow both the expression
of sophisticated content representation and their proper usage
upon querying a video database. For example, the time-dependent
nature of video is of considerable importance in
developing adequate data models and query languages.
Many features of database systems seem desirable in a
video context: secondary storage management, persistence,
transactions, concurrency control, recovery, versions, etc.
In addition, a database support for video information will
help sharing information among applications and make it
available for analysis. The advantages (in general and for
video in particular [22, 15]) of database technology, such
as object-oriented databases, are (1) the ability to represent
complex data, and (2) more openness to the external
world (e.g., Web, Java, CORBA, languages bindings) than
traditional database systems. However, as existing database
technology is not designed to manage digital video as first
class media, new techniques are required for organizing,
storing, manipulating, retrieving by content, and automatic
processing and presentation of visual content. Although
some tools for video exploitation are available, their use
often amounts to displaying video in sequence, and most
modeling methods have been developed for specific needs.
In many cases, query languages concentrate on extraction
capabilities. Queries over video data are described only by
means of a set of pre-defined, ad hoc operators, often incorporated
into SQL, and are not investigated in a theoretical
framework. One can argue that logic-based database query
languages appropriately designed to support video specific
features should form a sound basis for query languages.
From a database point of view, video data presents an interesting
challenge. Future database systems must cover the
range of tasks associated with the management of video
content including feature extraction, indexing, querying,
and developing representation schemes and operators. For
example, the data model should be expressive enough to
capture several characteristics inherent to video data, such
as movements, shapes, variations, events, etc. The query
language should allow some kind of reasoning, to allow, for
example, virtual editing [19], and should be able to perform
exact as well as partial or fuzzy matching (see [4]).
Despite the consensus of the central role video databases
will play in the future, there is little research work on
finding semantic foundations for representing and querying
video information. This paper is a contribution in this direc-
tion. The framework presented here integrates formalisms
developed in constraint, object and sequence databases. We
propose a hybrid data model for video data and a declara-
tive, rule-based, constraint query language, that has a clear
declarative and operational semantics. We make the following
contributions:
1. We develop a simple video data model on the basis
of relation, object and constraint paradigms. Objects
of interest and relationships among objects can
be attached to a generalized interval 1 either through
attribute/value pairs or relations.
2. We propose a declarative, rule-based, constraint query
language that can be used to infer relationships from
information represented in the model, and to intentionally
specify relationships among objects. It allows a
high level specification of video data manipulations.
The model and the query language use the point-based
approach to represent periods of time associated with generalized
intervals. First-order queries can then be conveniently
asked in a much more declarative and natural way
[27]. There has been some previous research on the power
of constraints for the implicit specification of temporal data
[8].
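As a purely illustrative sketch (not taken from the paper), a generalized interval under the point-based view can be represented as a disjunction of frame-range constraints; in C this might look as follows, with all names being our own.
/* A fragment is a contiguous run of frames [first, last]. A generalized
 * interval is a set of pairwise non-overlapping fragments; the point-based
 * view asks whether a frame number t satisfies the disjunction of range
 * constraints first_i <= t <= last_i.                                      */
typedef struct { long first, last; } Fragment;

typedef struct {
    Fragment *frags;   /* kept sorted and pairwise non-overlapping */
    int       count;
} GeneralizedInterval;

int contains_frame(const GeneralizedInterval *gi, long t)
{
    for (int i = 0; i < gi->count; i++)
        if (gi->frags[i].first <= t && t <= gi->frags[i].last) return 1;
    return 0;
}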
The model and the query language will be used as a core
of a video document archive prototype by both a television
channel and a national audio-visual institute. To the best
of our knowledge, this is the first proposal of a formal
rule-based query language for querying video data.
Paper outline: This paper is organized as follows. Section
2 discusses related work. Section 3 presents some useful
definitions. Section 4 formally introduces the video data
model. Section 5 describes the underlying query language.
Section 6 draws conclusions.
2. Related Work
With the advent of multimedia computers (PCs and
workstations), the world-wide web, and standard and powerful
compression techniques 2 , it becomes possible to digitize
and store common human media, such as pictures,
1 A generalized interval is a set of pairwise non-overlapping fragments in a video sequence.
2 Such as MPEG-I [10] [11] and its successors.
sounds, and video streams worldwide. Nevertheless, storing
is the minimal function we are to expect from a com-
puter, its power should also be aimed at content indexing
and retrieval. Two main approaches have been experi-
enced: fully automated content indexing approach, and the
approach based on human-machine interaction. Some fully
automated research systems have been developed, among
others, VIOLONE [29] and JACOB [18]. However, because
of the weakness of content analysis algorithms, they focus
on a very specific exploitation. On the other hand, much
more aided video content indexing systems have been de-
signed, among others, OVID [22], AVIS [1], or VideoStar
[14].
In the context of image and video data, queries can be
formulated using several techniques, which fall broadly into
two categories: textual and visual. Several systems have
been developed to retrieve visual data based on color, shape,
size, texture, image segments, keyword, relational opera-
tors, objects, and bibliographic data (see, among others,
[6, 7, 13, 16, 21]). In this paper, we focus on textual languages
The work presented here is closest to and complements
the ones in [20, 1, 22, 14].
Meghini [20] proposed a retrieval model for images
based on first-order logical language which spans along
four main dimensions: visual, spatial, mapping and content.
Queries on images can address anyone of these dimensions
or any combination of them. In the proposed model, objects
cannot be characterized by attributes. Every entity is
described by means of relations (predicates). For example,
objects' shapes cannot be stated in a declarative and equational
manner, as it is the case in our model.
Oomoto and Tanaka [22] proposed a schema-less video-object
data model. They focus on the capabilities of object-oriented
database features (their extension) for supporting
schema evolution and to provide a mechanism for sharing
some descriptive data. A video frame sequence is modeled
as an object with attributes and attribute values to describe
its contents. A semantically meaningful scene is a sequence
of (not always continuous) video frames. An interval is described
by a pair of a starting frame and an ending frame. It
denotes a continuous sequence of video frames. They introduced
the notion of inheritance based on the interval inclusion
relationship. By means of this notion different video-
objects may share descriptional data. Several operations,
such as interval projection, merge and overlap are defined
to compose new objects from other objects. They provide
the user with the SQL-based query language VideoSQL for
retrieving video-objects. This model does not allow the
description and the definition of relationships among objects
within a video-object. The content of a video-object is
described in terms of attribute-values of this video-object.
Semantic objects are considered as values of attributes of
video-objects.
Adali et al. [1] have developed a formal video data
model, and they exploit spatial data structures for storing
such data. They emphasized some kinds of human-level information
in video: objects of interest, activities, events and
roles. A given video is divided into a sequence of frames
which constitute logical divisions of the video. Associated
with each object/event is a set of frame-sequences. Each
frame sequence can be viewed as a frame segment. Events
are characterized by a set of attributes describing their con-
text. For example, the event give party may be characterized
by the multi-valued attribute host whose values are
Philip and Brandon and the attribute guest whose value is
Rupert. In this framework, objects other than events have
no complex structure. The only relationships among objects
are those given implicitly through the description of
events. They also developed a simple SQL-like video query
language which can be used to retrieve videos of interest
and extracts from them the relevant segments of the video
that satisfy the specified query conditions.
These two proposals provide an interval-based approach
to represent the periods of time associated with frames of
interest in a video sequence.
Hjelsvold and Midtstraum [14] proposed a generic
video data model. Their proposal combines ideas from
the stratification [2] and the segmentation [9] approaches.
The objective is to develop a framework where structuring,
annotations, sharing and reuse of video data become
possible. Their model is built upon an enhanced-ER model.
A simple SQL-like video query language with temporal
interval operators (e.g., equals, before, etc.) is provided.
This work concentrates on the structural part of a video
in order to support video browsing. Thematic indexing is
based on annotations, which give a textual description of
the content of frame sequences. In contrast, we allow a
more elaborated and structured description of the content
of frame sequences.
With regard to the modeling, we extended these works by
allowing the description of the contents of video sequences
by means of first class citizen objects of the data model and
by relating them to each others either through attributes or
through explicit relation names, leading to more expressive
relationships to link objects. Hence, video frames (which
we call generalized intervals) as well as semantic objects
(objects of interest in a generalized interval) are modeled
and manipulated at the same level. Special queries, like
spatial and temporal ones, can be expressed in a much more
declarative manner. Generalized intervals, as well as semantic
objects and relationships among these elements can
be described, making a video sequence appropriate for different
applications.
3. Basic Definitions
This section provides the preliminary concepts that will
be used to design the video data model and the underlying
rule-based, constraint query language.
ffl the domain dom(D),
ffl a set of predicate symbols pred(D), where each predicate
pred(D) is associated with an arity
n and an n-ary relation P D ' dom(D) n ,
An example of a concrete domain is the set of (nonnega-
tive) integers with comparisons (=; !; -; ?).
In the following, we assume entailment of conjunctions
or disjunctions over pred(D) is decidable.
(Dense Linear Order Inequality Con-
straints) Dense order inequality constraints are all formulas
of the form x'y and x'c, where x, y are variables, c
is a constant, and ' is one of =; !; - (or their negation
We assume that these constants are interpreted
over a countably infinite set D with a binary relation which
is a dense order. Constants, =, !, and - are interpreted
respectively as elements, equality, the dense order, and the
irreflexive dense order of the concrete domain D.
Complex constraints are built from primitive (atomic)
constraints by using logical connectives. We use the special
symbol, ), to denote the entailment between constraints,
that is, if c 1
and c 2
are two constraints, we
only if the
constraint c 1 - :c 2 is unsatisfiable.
Techniques for checking satisfiability and entailment for
order constraints over various domains have been studied.
Regarding expressive power and complexity of linear constraint
query languages, see [12].
Definition 3 (Set-Order Constraints) Let D be a domain.
A set-order constraint is one of the following types:
Y
where c is a constant of type D, s is a set of constants of type
D, and e
Y denote set variables that range over finite sets
of elements of type D.
Our set-order constraints are a restricted form of set constraints
involving 2, ', and ', but no set functions such
as [ and ".
Note that the constraint c 2 e
X is a derived form since it
can be rewritten as fcg ' e
X.
Satisfaction and entailment of conjunctions of set-order
constraints can be solved in polynomial-time using a quantifier
elimination algorithm given in [25].
This class of constraints play an important role in declaratively
constraining query answers.
Definition 4 (Time Interval) An interval i is considered
as an ordered pair of real numbers This
definition refers to the predicate - of the concrete domain
IR. If t is a time variable, then an interval can be
represented by the conjunction of the two primitive dense
linear order inequality constraints x 1 - t and t - x 2 .
Definition 5 (Generalized Time Interval) A generalized
time interval, or simply a generalized interval, is a set of
non overlapping intervals. Formally, a generalized
time interval can be represented as a disjunction of time
intervals.
4. Video Data Model
To naturally capture the entities and relationships among
entities within a video sequence, we resort to the following
basic paradigms:
ffl Objects and object identity Objects are entities of interest
in a video sequence. In our model, we refer
to objects via their logical object identities, which are
nothing but syntactic terms in the query language. Any
logical oid uniquely identifies an object. In this paper,
we will be using the word "object identity" (or even
"object") to refer to ids at logical level. We have essentially
two types of objects: (1) generalized interval
objects, which are abstract objects resulting from
splitting a given video sequence into a set of smaller
sequences; (2) semantic objects which are entities of
interest in a given video sequence.
Attributes Objects are described via attributes. If an
attribute is defined for a given object, then it also has a
value for that object.
ffl Relations It has been argued many times that objects
do not always model real world in the most natural
way, and there are situations when the use of relations
combined with objects leads to more natural represen-
tation. Although relations can be encoded as objects,
this is not the most natural way of handling relations
and so we prefer to have relations as first-class language
constructs.
We assume the existence of the following countably infinite
and pairwise disjoint sets of atomic elements:
ffl relation names
ffl attributes
ffl (atomic) constants
object identities or oid's :g. In the
following, we distinguish between object identities for
entities and object identities for generalized intervals.
Furthermore, in order to be able to associate a time interval
to a generalized interval object, we allow a restricted
form of dense linear order inequality constraints to be values
of attributes. We define the set ~
C whose elements are:
Primitive (atomic) constraints of the form t'c where t
is a variable, c is a constant, and ' is one of !; =; ?;
ffl conjunctions, and disjunctions of primitive constraints.
Definition 6 (value) The set of values is the smallest set
containing D[ID[ ~
C and such that, if
are values, then so is fv g.
Definition 7 (Video Object) A video object (denoted v-
object) consists of a pair (oid; v) where:
ffl oid is an object identifier which is an element of ID;
ffl v is an m-tuple
are distinct attribute names in A and v i
are values.
then attr(o) denotes the set of all attributes in v (i.e.
An g), and value(o) denotes the value v, that is,
value(o). The value v i is denoted by o:A i .
5. Rule-Based, Constraint Query Language
In this section, we present the declarative, rule-based
query language that can be used to reason with facts and
objects in our video data model. The language consists of
two constraint languages on top of which relations can be
defined by means of definite clauses.
This language has a model-theoretic and fix-point semantics
based on the notion of extended active domain of
a database. The extended domain contains all generalized
interval objects and their concatenations. The language has
an interpreted function symbol for building new generalized
intervals from others (by concatenating them). A constructive
term has the form I
I 2 and is interpreted as the concatenation
of the two generalized intervals I 1 and I 2 .
The extended active domain is not fixed during query
evaluation. Instead, whenever a new generalized interval
object is created (by the concatenation
operator,\Omega ), the new
object and the ones resulting from its concatenation with
already existing ones are added to the extended active domain
5.1. Syntax
To manipulate generalized intervals, our language has
an interpreted function symbol for constructing 3 complex
term. Intuitively, if I 1 and I 2 are generalized intervals, then
I
1\Omega I 2 denotes the concatenation of I 1 and I 2 .
The language of terms uses three countable, disjoint sets:
1. A set D of constant symbols. This set is the union of
three disjoint sets:
set of atomic values,
set of entities, also called object entities,
set of generalized interval objects.
2. A set V of variables called object and value variables,
and denoted by
3. A set ~
V of variables called generalized interval vari-
ables, and denoted by
If I 1 and I 2 denote generalized interval objects, generalized
interval variables, or constructive interval terms, then I
is a constructive interval term.
In the following, the concatenation operator is supposed
to be defined on D 3 , that is 8e
The structure of the resulting element
defined
from the structure of e 1 and e 2 as follows:
such
Here we follow the idea of [17] that
the object id of the object generated from e 1 and e 2
should be a function of id 1 and id 2 .
Note that I
This means that if I is obtained
from the concatenation of I 1 and I 2 , then the result of the
concatenation of I with I 1 or I 2 is I . This leads to the
termination of the execution of constructive rules (see below
the definition of constructive rule).
Definition 8 (Predicate symbol) We define the following
predicate symbols:
3 For concatenating generalized intervals.
ffl each P 2 R with arity n is associated with a predicate
symbol P of arity n,
ffl a special unary predicate symbol Interval. It can be
seen as the class of all generalized interval objects.
ffl a special unary predicate symbol Object. It can be
seen as the class of all objects other than generalized
interval objects.
Definition 9 (Atom) If P is an n-ary predicate symbol
are terms, then P(t is an atom.
If O and O 0 denote objects or object variables, Att and
Att 0 are attribute names, and c is a constant value, then
O:Att'c and O:Att ' O 0 :Att 0 where ' is one of =; !; -
(or their negation 6=; -; ?) are called inequality atoms.
rule in our language has the form:
where H is an atom, n; m - 0, are (positive)
literals, and c are constraints.
Optionally, a rule can be named as above, using the prefix
r is a constant symbol. We refer to A as
the head of the rule and refer to L
the body of the rule.
Note that we impose the restriction that constructive
terms appear only in the head of a rule, and not in the body.
A rule that contains a constructive term in its head is called
a constructive rule.
Recall that we are interested in using order constraints,
that is arithmetic constraints involving !; ?, but no arithmetic
functions such as +; \Gamma; , and set-order constraints,
a restricted form of set constraints involving 2; ', and ',
but no set functions such as [ and ".
Definition 11 (Range-restricted Rule) A rule r is said to
be range-restricted if every variable in the rule occurs in
a body literal. Thus, every variable occurring in the head
occurs in a body literal.
Definition 12 (Program) A program is a collection of
range-restricted rules.
query is of the form:
where q is referred to as the query predicate, and -
s is a tuple
of constants and variables.
Example Let us give some simple examples of queries.
In the following, uppercase letters stand for variables and
lowercase letters stand for constants.
The query "list the objects appearing in the domain of a
given sequence g" can be expressed by the following rule:
In this example, g is a constant and O is the output vari-
able. Here, we suppose that for a given generalized interval,
the set-valued attribute "entities" gives the set of semantic
objects of interest in that generalized interval. This query
involves an atomic (primitive) constraint. To compute the
answer set to the query, we need to check the satisfiability
of the constraint O 2 g:entities after O being instantiated.
The query "list all generalized Intervals where the object
appears" can be expressed as:
G:entities
The query "does the object o appear in the domain of a
given temporal frame [a; b]" can be expressed as:
G:duration
where t is a temporal variable. This query involves one
primitive constraint G:entities, and a complex arithmetic
constraint G:duration To compute
the answer set to the query, we need to check satisfiability
of these two constraints.
The query "list all generalized intervals where the objects
appear together" can be expressed as:
G:entities
or equivalently by:
G:entities
The query "list all pairs of objects, together with their
corresponding generalized interval, such that the two
objects are in the relation "Rel" within the generalized
interval", can be expressed as:
The query "find the generalized intervals containing an object
O whose value for the attribute A is val" can be expressed
as:
5.2. Inferring new relationships
Rules can be used to infer (specify) new relationships,
as facts, between existing objects.
Example Suppose we want to define the relation contains,
which holds for two generalized interval objects G 1 and G 2
if the time interval associated with G 1 overlaps the time interval
associated with G 2 . This can be expressed as follows:
G 1 and G 2 are in the relation contains if the constraint
(duration-f iller) associated with G 2 entails the one
associated with G 1 .
If we want to define the relation same-object-in of all
pairs of generalized intervals with their common objects,
we write the following rule:
O 2 G1 :entities; O 2 G2 :entities
The following rule constructs concatenations of generalized
intervals that have some objects, say
Gintervals(G1\Omega G2) /
Our language has a declarative model-theoretic and an
equivalent fix-point semantics.
For Datalog with set order constraints, queries are shown
to be evaluable bottom-up in closed form and to have
DEXPTIME-complete data complexity [24]. For a rule
language with arithmetic order constraints, the answer to
a query can be computed in PTIME data complexity [25].
As a consequence, we obtain a lower bound complexity for
query evaluation in our rule based query language.
6. Conclusion and Future Work
There is a growing interest in video databases. We believe
that theoretical settings will help understanding related
modeling and querying problems. This will lead to the development
of powerful systems for managing and exploiting
video information.
In this paper, we have addressed the problem of developing
a video data model and a formal, rule-based, constraint
query language that allow the definition and the retrieval
by content of video data. The primary motivation of
this work was that objects and time intervals are relevant
in video modeling and the absence of suitable supports for
these structures in traditional data models and query languages
represent a serious obstacle.
The data model and the query language allow (a) an abstract
representation of the visual appearance of a video able
to support modeling and retrieval techniques; (b) a semantic
data modeling styled representation of the video content,
independent from how the content information is obtained;
(c) a relational representation of the association between objects
within a video sequence.
This paper makes the following contributions. (1) We
have developed a simple and useful video data model that
integrates relations, objects and constraints. Objects allow
to maintain an object-centered view inherent to video data.
Attributes and relations allow to capture relationships between
objects. It simplifies the indexing of video sequences.
(2) We have developed a declarative, rule-based, constraint
query language to reason about objects and facts, and to
build new sequences from others. This functionality can
be useful in virtual editing for some applications. The language
provides a much more declarative and natural way to
express queries.
Due to the complex nature of video queries, the query
language presents a facility that allows a user to construct
queries based on previous queries. In addition, as all properties
inherent to image data are also part of video data, the
framework presented here naturally applies to image data.
There are many interesting directions to pursue.
ffl An important direction of active research is to extend
our framework to incorporate abstraction mechanisms
such as classification, aggregation, and generalization.
ffl Another important direction is to study the problem of
sequence presentation. Most existing research systems
use template-based approach [3] to provide the automatic
sequencing capability. With this approach, a set
of sequencing templates is predefined to confine the
user's exploration to a certain sequencing order. The
problem is that this approach is domain-dependent and
relies on the availability of a suitable template for a
particular query. We believe that a framework based on
declarative graphical (visual) languages [23] will offer
more possibilities and flexibility in the specification of
sequence presentations.
We are investigating these important research directions.
--R
Parsing movies in context.
Video query formu- lation
Solving systems of set constraints (extended abstract).
A three-dimensional iconic environment for image database query- ing
Picture query languages for pictorial data-base systems
Relational specifications of infinite query answers.
A video retrieval and sequencing system.
Mpeg: a video compression standard for multi-media applications
The mpeg compression algorithm: a review.
Linear constraint query languages expressive power and complexity.
Query by visual example.
Modelling and querying video data.
An experimental video database management system based on advanced object-oriented techniques
A high level query language for pictorial database management.
A logic for object-oriented logic programming (maier's o-logic revisited). In Proceedings of the 1989 Symposium on Principles of Database Systems (PODS'89)
Just a content-based query system for video databases
Virtual video editing in interactive multimedia applications.
Towards a logical reconstruction of image re- trieval
The qbic project: Querying images by content using color
Design and implementation of a video-object database system
A declarative graphical query languages.
queries of set constraint databases.
Constraint objects.
Special issue in video information systems.
Point vs. interval-based query languages for temporal databases
Video handling based on structured information for hypermedia systems.
Video retrieval by motion exam- ple
--TR
--CTR
Timothy C. Hoad , Justin Zobel, Fast video matching with signature alignment, Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, November 07-07, 2003, Berkeley, California
Chrisa Tsinaraki , Panagiotis Polydoros , Fotis Kazasis , Stavros Christodoulakis, Ontology-Based Semantic Indexing for MPEG-7 and TV-Anytime Audiovisual Content, Multimedia Tools and Applications, v.26 n.3, p.299-325, August 2005
Mehmet Emin Dnderler , Ediz aykol , Umut Arslan , zgr Ulusoy , Uur Gdkbay, BilVideo: Design and Implementation of a Video Database Management System, Multimedia Tools and Applications, v.27 n.1, p.79-104, September 2005
Rokia Missaoui , Roman M. Palenichka, Effective image and video mining: an overview of model-based approaches, Proceedings of the 6th international workshop on Multimedia data mining: mining integrated media and complex data, p.43-52, August 21-21, 2005, Chicago, Illinois
Elisa Bertino , Mohand-Sad Hacid , Farouk Toumani, Retrieval of semistructured Web data, Intelligent exploration of the web, Physica-Verlag GmbH, Heidelberg, Germany,
Mohand-Sad Hacid , Farouk Toumani , Ahmed K. Elmagarmid, Constraint-Based Approach to Semistructured Data, Fundamenta Informaticae, v.47 n.1-2, p.53-73, January 2001
Elisa Bertino , Ahmed K. Elmagarmid , Mohand-Sad Hacid, Ordering and Path Constraints over Semistructured Data, Journal of Intelligent Information Systems, v.20 n.2, p.181-206, March | rule-based query language;object-oriented modeling;content-based access of video;video indexing;video query;video database;video representation;constraint query language |
628095 | Enhancing Disjunctive Datalog by Constraints. | AbstractThis paper presents an extension of Disjunctive Datalog (${\rm{DATALOG}}^{\vee,}{^\neg}$) by integrity constraints. These are of two types: strong, that is, classical integrity constraints and weak, that is, constraints that are satisfied if possible. While strong constraints must be satisfied, weak constraints express desiderata, that is, they may be violatedactually, their semantics tends to minimize the number of violated instances of weak constraints. Weak constraints may be ordered according to their importance to express different priority levels. As a result, the proposed language (call it, ${\rm{DATALOG}}^{\vee,}{^{\neg,}}{^c}$) is well-suited to represent common sense reasoning and knowledge-based problems arising in different areas of computer science such as planning, graph theory optimizations, and abductive reasoning. The formal definition of the language is first given. The declarative semantics of ${\rm{DATALOG}}^{\vee,}{^{\neg,}}{^c}$ is defined in a general way that allows us to put constraints on top of any existing (model-theoretic) semantics for ${\rm{DATALOG}}^{\vee,}{^\neg}$ programs. Knowledge representation issues are then addressed and the complexity of reasoning on ${\rm{DATALOG}}^{\vee,}{^{\neg,}}{^c}$ programs is carefully determined. An in-depth discussion on complexity and expressiveness of ${\rm{DATALOG}}^{\vee,}{^{\neg,}}{^c}$ is finally reported. The discussion contrasts ${\rm{DATALOG}}^{\vee,}{^{\neg,}}{^c}$ to ${\rm{DATALOG}}^{\vee,}{^\neg}$ and highlights the significant increase in knowledge modeling ability carried out by constraints. | Introduction
Disjunctive Datalog (or DATALOG ;: ) programs [18, 15] are nowadays widely recognized as
a valuable tool for knowledge representation and commonsense reasoning [2, 34, 25, 20]. An
important merit of DATALOG ;: programs over normal (that is, disjunction-free) Datalog
programs is their capability to model incomplete knowledge [2, 34]. Much research work has
been done so far on the semantics of disjunctive programs and several alternative proposals
have been formulated [5, 20, 40, 46, 47, 48, 51, 59]. One which is widely accepted is the
extension to the disjunctive case of the stable model semantics of Gelfond and Lifschitz.
According to this semantics [20, 46], a disjunctive program may have several alternative
models (possibly none), each corresponding to a possible view of the reality. In [11, 15],
Eiter, Gottlob and Mannila show that DATALOG ;: has a very high expressive power, as
(under stable model semantics) the language captures the complexity class \Sigma P(i.e., it allows
us to express every property which is decidable in non-deterministic polynomial time with
an oracle in NP).
In this paper, we propose an extension of DATALOG ;: by constraints. In particular, besides
classical integrity constraints (that we call trongconstraints), we introduce the notion
of weak constraints, that is, constraints that should possibly be satisfied. Contrary to strong
constraints, that express conditions that must be satisfied, weak constraints allow us to express
desiderata. The addition of (both weak and strong) constraints to DATALOG ;: makes
the language (call it DATALOG ;:;c ) well-suited to represent a wide class of knowledge-based
problems (including, e.g., planning problems, NP optimization problems, and abductive rea-
soning) in a very natural and compact way.
As an example, consider the problem SCHEDULING which consists in the scheduling of
examinations for courses. That is, we want to assign course exams to time slots in such a
way that no two exams are assigned with the same time slot if the respective courses have
a student in common (we call such courses ncompatible"). Supposing that there are
three time slots available, namely, ts 1 , ts 2 and ts 3 , we express the problem in DATALOG ;:;c
by the following program
ts 1 ) assign(X; ts 2 ) assign(X; ts 3 ) / course(X)
Here we assumed that the courses and the pair of incompatible courses are specified by a
number of input facts with predicate course and incompatible, respectively. Rule r 1 says
that every course is assigned to either one of the three time slots ts 1 , ts 2 or ts 3 ; the strong
constraint s 1 (a rule with empty head) expresses that no two incompatible courses can be
overlapped, that is, they cannot be assigned to the same time slot. In general, the presence
of strong constraints modifies the semantics of a program by discarding all models which do
not satisfy some of them. Clearly, it may happen that no model satisfies all constraints. For
instance, in a specific instance of problem above, there could be no way to assign courses to
time slots without having some overlapping between incompatible courses. In this case, the
problem does not admit any solution. However, in real life, one is often satisfied with an
approximate solution, that is, one in which constraints are satisfied as much as possible. In
this light, the problem at hand can be restated as follows (APPROX SCHEDULING): assign
courses to time slots trying to not overlap incompatible courses". To express this problem
we resort to the notion of weak constraint, as shown by the following program P a sch :
ts 1 ) assign(X; ts 2 ) assign(X; ts 3 ) / course(X)
From a syntactical point of view, a weak constraint is like a strong one where the implication
symbol / is replaced by (. The semantics of weak constraints minimizes the number of
violated instances of constraints. An informal reading of the above weak constraint w 1 is:
"preferably, do not assign the courses X and Y to the same time slot if they are incompati-
ble". Note that the above two programs P sch and P a sch have exactly the same models if all
incompatible courses can be assigned to different time slots (i.e., if the problem admits an
"exact" solution).
In general, the informal meaning of a weak constraint, say, ( B, is try to falsify Bor "B
is preferably false", etc. (thus, weak constraints reveal to be very powerful for capturing the
concept of preference n commonsense reasoning).
Since preferences may have, in real life, different priorities, weak constraints in DATALOG ;:;c
can be assigned with different priorities too, according to their mportance". 1 For ex-
ample, assume that incompatibilities among courses may be either strong or weak (e.g., basic
courses with common students can be considered strongly incompatible, while complementary
courses give weak incompatibilities). Consider the following problem (SCHEDULING
WITH PRIORITIES): chedule courses by trying to avoid overlapping between strongly
incompatible courses first, and by trying to avoid overlapping between weakly incompatible
courses then"(i.e., privilege the elimination of overlapping between strongly incompatible
courses). If strong and weak incompatibilities are specified through input facts with
predicates strongly incompatible and weakly incompatible, respectively, we can represent
SCHEDULING WITH PRIORITIES by the following program P
ts 1 ) assign(X; ts 2 ) assign(X; ts 3 ) / course(X)
strongly incompatible(X; Y )
where the weak constraint w 2 is defined tronger than"w 3 . The models of the above program
are the assignments of courses to time slots that minimize the number of overlappings
between strongly incompatible courses and, among these, those which minimize the number
of overlappings between weakly incompatible courses.
The main contributions of the paper are the following:
ffl We add weak constraints to DATALOG ;: and provide a formal definition of the language
DATALOG ;:;c . The semantics of constraints is given in a general way that
1 Note that priorities are meaningless among strong constraints, as all of them must be satisfied.
allows us to define them on top of any existing model-theoretic semantics for Disjunctive
Datalog.
ffl We show how constraints can be profitably used for knowledge representation and
reasoning, presenting several examples of DATALOG ;:;c encoding. DATALOG ;:;c
turns out to be both general and powerful, as it is able to represent in a simple and
coincise way hard problems arising in different domains ranging from planning, graph
theory, and abduction.
ffl We analyze the computational complexity of reasoning with DATALOG ;:;c (propo-
sitional case under stable model semantics). The analysis pays particular attention
to the impact of syntactical restrictions on DATALOG ;:;c programs in the form of
limited use of weak constraints, strong constraints, disjunction, and negation. It appears
that, while strong constraints do not affect the complexity of the language,
constraints "mildly" increase the computational complexity. Indeed, we show
that brave reasoning is \Delta P
programs. Interestingly, priorities among constraints affect the com-
plexity, which decreases to \Delta P[O(log n)] (\Delta P[O(log n)] for DATALOG :;c ) if priorities
are disallowed. The complexity results may support in choosing an appropriate fragment
of the language, which fits the needs in practice (see Section 5).
ffl We carry out an in-depth discussion on expressiveness and complexity of DATALOG ;:;c ,
contrasting the language to DATALOG ;: . Besides adding expressive power from the
theoretical view point (as DATALOG ;:;c can encode problems which cannot be represented
at all in DATALOG ;: ), it turns out that constraints improve also usability
and knowledge modeling features of the language. Indeed, well-known problems can
be encoded in a simple and easy-to-understand way in DATALOG ;:;c ; while their
DATALOG ;: encoding is unusable (long and difficult to understand).
Even if strong constraints (traditionally called integrity constraints) are well-known in the
logic programming community (see, e.g., [15, 10, 17, 33, 12]), to our knowledge this is the
first paper which formally studies weak constraints and proposes their use for knowledge representation
and reasoning. Related works can be considered the interesting papers of Greco
and Sacc'a [24, 23] which analyze other extensions of Datalog to express NP optimizations
problems. Related studies on complexity of knowledge representation languages have been
carried out in [15, 50, 8, 14, 21, 39, 52, 9, 12]. Priority levels have been used also in the
context of theory update and revision [16] and [22], in prioritized circumscription [31], in
the preferred subtheories approach for default reasoning [6], and in the preferred answer set
semantics for extended logic programs [7].
The paper is organized as follows. In Section 2 we provide both the syntax and the semantics
of the language. Then, in Section 3 we describe the knowledge representation capabilities
of DATALOG ;:;c by a number of examples. In particular, we show how the language can
be used to easily formulate complex knowledge-based problems arising in various areas of
computer science, including planning, graph theory and abduction. In Section 4 we analyze
the complexity of the language under the possibility version of the stable model semantics.
Finally, Section 5 reports an in-depth discussion on the language and draws our conclusions
Appendix
A shows the encoding of a number of classical optimization problems in
reports the proofs of the complexity results for the fragments
of DATALOG ;:;c where disjunction is either disallowed or constrained to the head
2 The DATALOG ;:;c Language
We assume that the reader is familiar with the basic concepts of deductive databases [55, 34]
and logic programming [32]. We describe next the extension of Disjunctive Datalog by
constraints.
2.1 Syntax
A term is either a constant or a variable 2 . An atom is a is a predicate of
are terms. A literal is either a positive literal p or a negative literal
:p, where p is an atom.
A (disjunctive) rule r is a clause of the form
a
where a are atoms. The disjunction a 1 \Delta \Delta \Delta a n is the head of r, while
the conjunction is the body of r. If (i.e., the head is -free),
then r is normal; if (the body is :-free), then r is positive. A DATALOG ;: program
LP is a finite set of rules; LP is normal (resp., positive) if all rules in LP are normal (resp.
positive).
A strong constraint is a syntactic of the form /
literal (i.e., it is a rule with empty head).
A weak constraint is a syntactic of the form (
literal.
simply program") is a triple
where LP is a DATALOG ;: program, S a (possibly empty) finite set of strong constraints
and is a (possibly empty) finite list of components, each consisting of a
finite set of weak constraints. If w then we say that w 0 is tronger
thanor more important than"w (hence, the last component W n is the strongest).
2 Note that function symbols are not considered in this paper.
Example 2 Consider the program P p sch of Section 1. Here, LP p sch
and W p sch (that is, w 2 is stronger than w 3 ).2.2 General Semantics
program. The Herbrand universe U P of P is the set of all
constants appearing in P. The Herbrand base B P of P is the set of all possible ground
atoms constructible from the predicates appearing in P and the constants occurring in
U P (clearly, both U P and B P are finite). The instantiation of rules, strong and weak
constraints is defined in the obvious way over the constants in U P , and are denoted by
ground(LP ), ground(S) and ground(W ), respectively; we denote the instantiation of P by
A (total) interpretation for P is a subset I of B P . A ground positive literal a is true (resp.,
w.r.t. I if a 2 I (resp., a =
I). A ground negative literal :a is true (resp., false) w.r.t.
I if a =
I (resp., a 2 I).
Let r be a ground rule in ground(LP ). Rule r is satisfied (or true) w.r.t. I if its head is
true w.r.t. I (i.e., some head atom is true) or its body is false (i.e., some body literal is false)
w.r.t. I. A ground (strong or weak) constraint c in (ground(S) [ ground(W )) is satisfied
w.r.t. I if (at least) one literal appearing in c is false w.r.t. I; otherwise, c is violated.
We next define the semantics of DATALOG ;:;c in a general way which does not rely on a
specific semantical proposal for DATALOG ;: ; but, rather, it can be applied to any semantics
of Disjunctive Datalog. 3 To this end, we define a candidate model for as an
interpretation M for P which satisfies every rule r 2 ground(LP ). A ground semantics for P
is a finite set of candidate models for P. Clearly, every classical semantics for LP provides a
ground semantics for P. For example the semantics presented in [20, 40, 46, 47, 48, 51, 59]
can be taken as ground semantics.
Now, to define the meaning of a program in the context of a given ground
semantics, we need to take into account the presence of constraints. To this end, since
constraints may have different priorities, we associate with each component W i 2 W ,
inductively defined as follows:
where jground(W )j denotes the total number of ground instances of the weak constraints
appearing in W . Further, given an interpretation M for a program P, we denote by H P;M
the following sum of products:
3 Note that, although we consider only total model semantics in this paper, the semantics of weak constraints
simply extends to the case of partial model semantics.
n) is the number of ground instances of weak constraints in the
component W i which are violated in M .
Now, we are ready to define the notion of model for w.r.t. a given ground
semantics.
Definition 3 Given a ground semantics \Gamma for of P is a candidate
model every strong constraint s 2 ground(S) is satisfied w.r.t. M ,
and (2) H P;M is minimum, that is, there is no candidate model verifying Point (1)
such that H P;N ! H P;M
Thus, a \Gamma-model M of P, by minimizing H P;M selects those candidate models that, besides
satisfying strong constraints, minimize the number of unsatisfied (instances of) weak constraints
according to their importance - that is, those with the minimal number of violated
constraints in W n are chosen, and, among these, those with the minimal number of violated
constraints in W n\Gamma1 , and so on and so forth.
2.3 Stable Model Semantics
Using the notion of candidate model we have parametrized the semantics of DATALOG ;:;c
programs, that is, the actual semantics of a program P relies on the semantics we choose
for LP . Several proposals can be found in the literature for disjunctive logic programs
[5, 20, 40, 46, 47, 48, 51]. One which is generally acknowledged is the extension of the stable
model semantics to take into account disjunction [20, 46]. We next report a brief discussion
on this semantics.
In [40], Minker proposed a model-theoretic semantics for positive (disjunctive) programs,
whereby a positive program LP is assigned with a set MM(LP ) of minimal models, each
representing a possible meaning of LP (recall that a model M for LP is minimal if no proper
subset of M is a model for LP ).
Example 4 For the positive program /g the (total) interpretations fag and
fbg are its minimal models (i.e., MM(LP
The stable model semantics generalises the above approach to programs with negation.
Given a program LP and a total interpretation I, the Gelfond-Lifschitz transformation of
LP with respect to I, denoted by LP I , is the positive program defined as follows:
a
let I be an interpretation for a program LP . I is a stable model for LP if I 2
I ) (i.e., I is a minimal model of the positive program LP I ). We denote the set of
all stable models for LP by SM(LP ).
Example 5 Let fbg.
It is easy to verify that I is a minimal model for LP I ; thus, I is a stable model for LP . 4
Clearly, if LP is positive then LP I coincides with ground(LP ). It turns out that, for a
positive program, minimal and stable models coincide. A normal positive programs LP has
exactly one stable model which coincides with the least model of LP (i.e., it is the unique
minimal model of LP ). When negation is allowed, however, even a normal program can
admit several stable models.
We conclude this section observing that Definition 3 provides the stable model semantics
of a DATALOG ;:;c program the choosen ground semantics is SM(LP ),
the set of the stable models of LP .
Definition 6 A \Gamma-model for a program stable model for
Example 7 Consider the program P sch sch ) of Section
According to the stable model semantics, LP sch has as many
stable models as the possibilities of assigning all courses, say n, to 3 time slots (namely, 3 n ).
The stable models of P sch are the stable models of LP sch satisfying the strong constraint s 1 ,
that is, those (if any) for which no two incompatible courses are assigned with the same time
slot.
The program P a sch = (LP a sch ; S a sch ; W a sch ) of Section 1, where LP a sch
; and W a sch = hfw 1 gi, is obtained from LP sch by replacing s 1 by w 1 (note that LP a sch =
LP sch ). The stable models of P a sch are the stable models of LP a sch which minimize the
number of violated instances of w 1 , that is, the number of incompatible courses assigned to
the same time slots. Thus, P a sch provides a different solution from P a sch only if the latter
does not admit any stable model, i.e., the problem has no "exact" solution - there is no way
to assign different time slots to all incompatible courses. Otherwise, the two programs have
exactly the same stable models.
Finally, consider the program P sch ) of Section 1, where LP
"). Each stable model
of LP p sch is a possible assignment of courses to time slots. The stable models of P p sch are
the stable models of LP p sch that, first of all, minimize the number of violated instance of w 2
and, in second order, minimize the number of violated instances of w 3 . 4
It is worth noting that a DATALOG ;:;c program P may have several stable models (there
can also be no one). The modalities of brave and cautious reasoning are used to handle this.
Brave reasoning (or credulous reasoning) infers that a ground literal Q is true in P (denoted
brave Q) iff Q is true w.r.t. M for some stable model M of P.
Cautious reasoning (or skeptical reasoning) infers that a ground literal Q is true in P
(denoted P cautious Q) iff Q is true w.r.t. M for every stable model M of P.
The inferences brave and cautious extend to sets of literals as usual.
Knowledge Representation in DATALOG ;:;c
In this section we provide a number of examples that show how DATALOG ;:;c can be used
to easily formulate many interesting and difficult knowledge-based problems. Among those,
we consider planning problems, classical optimization problems from Graph Theory and
various forms of abductive reasoning. We show how the language provides natural support
for their representation. A number of further examples are reported in AppendixA.
As we shall see, most programs have a common structure of the form guess, check, choose-
best. Candidate solutions are first nondeterministically generated (through disjunctive
mandatory properties are checked (through strong constraints), finally, solutions
that best satisfy desiderata are selected (through weak constraints). Such a modular
structure shows quite natural for expressing complex problems and, further, makes programs
easy to understand.
3.1 Planning
A first example of planning, namely the APPROX SCHEDULING problem, has been presented
in the introduction. A further example is next reported.
Example 8 PROJECT GROUPS. Consider the problem of organizing a given set of employees
into two (disjoint) project groups p1 and p2. We wish each group to be possibly
heterogeneous as far as skills are concerned. Further, it is preferable that couple of persons
married to each other do not work in the same group. And, finally, we would like that the
members of the same group already know each other. We consider the former two requirements
more important than the latter one. Supposing that the information about employees,
skills, known and married persons, are specified through a number of input facts a simple
way of solving this problem is given by the following program
are stronger than w 1 (i.e., gi. The first rule r 1 above
assigns each employee to either one of the two projects p1 and p2 (recall that, by minimality
of a stable model, exactly one of member(X; p1) and member(X; p2) is true in it, for each
employee X). The weak constraint w 1 expresses the aim of forming groups with persons that
possibly already know each other; w 2 , in turn, expresses the preference of having groups with
no persons married each other; and, finally, the weak constraint w 3 tries avoid that persons
4 For notational simplicity, in the examples we represent programs just as sets of rules and constraints.
Observe also that we use the inequality predicate 6= as a built-in, this is legal as inequality can be easily
simulated in DATALOG ;:;c .
with the same skill work in same project. It is easy to recognize that each stable model of the
is a possible assignment of employees to projects. Due
to the priority of weak constraints (recall that w 2 and w 3 are both tronger than"w 1 ) the
stable models of P proj are those of LP that minimize the overall number of violated instances
of both w 2 and w 3 and, among these, those minimizing the number of violated instances of
. For an instance, suppose that the employees are a; b; c; d; e, of which a and b have
the same skill, c and d are married to each other and, finally, the following pairs know
reciprocally: (b; c), (c; d), (a; e), (d; e). In this case, P proj has two stable models M 1 and
which correspond to the following division in project teams (we list only the atoms with
predicate member of the two models).
Observe that both M 1 and M 2 violate only two instances of w 1 (as a and b do not know
each other), and they satisfy all instances of w 2 and w 3 . 4
3.2 Optimization from Graph Theory
In the above example we have regarded weak constraints essentially as desiderata - that is,
conditions to be possibly satisfied. However, a useful way of regarding weak constraints is as
objective functionsof optimization problems. That is, a program with a weak constraint of
the form ( B can be regarded as modeling a minimization problem whose objective function
is the cardinality of the relation B (from this perspective, a strong constraint / B can then
be seen as a particular case in which the cardinality of the relation B is required to be zero).
This suggests us to use DATALOG ;:;c to model optimization problems. Next we show
some classical NP optimization problems from graph theory formulated in our language. A
number of further examples can be found in AppendixA.
Example 9 MAX CLIQUE : Given a graph find a maximum clique, that is,
a subset of maximum cardinality C of V such that every two vertices in C are joined by an
edge in E. To this end, we define the following program
Here, r 1 partitions the set of vertices into two subsets, namely, c and not c - i.e., it guesses
a clique c. The strong constraint s 1 checks the guess, that is, every pair of vertices in c (the
clique) must be joined by an edge. Finally, the weak constraint w 1 minimizes the number of
vertices that are not in the clique c (thus, maximizing the size of c, by discarding all cliques
whose size is not maximum). It holds that each stable model of P cliq is a maximum clique of
G. 4
COLORING. Given a graph coloring of G is an assignment
of colors to vertices, in such a way that every pair of vertices joined by an edge have
different colors. A coloring is minimum if it uses a minimum number of colors. We determine
a minimum coloring by the following program
used col(I) / col(X; I)
used col(I)
The first rule r 1 guesses a graph coloring; col(X; I) says that vertex X is assigned to color I
and not col(X; I) that it is not. Strong constraints s 1 -s 3 check the guess: two joined vertices
cannot have the same color (s 1 ), and each vertex is assigned to exactly one color (s 2 and
Finally, the weak constraint w 1 requires the cardinality of the relation used col to be
minimum (i.e., the number of used colors is minimum). It holds that there is a one-to-one
correspondence between the stable models of P col and the minimum colorings of the graph G.3.3 Abduction
We next show that some important forms of abduction over (disjunctive) logic programs,
namely abduction with prioritization and abduction under minimum-cardinality solution
preference [14], can be represented in a very easy and natural way in DATALOG ;:;c (while,
in general, they cannot be encoded in DATALOG ;: ).
Abduction, first studied by Peirce [43], is an important kind of reasoning, having wide
applicability in different areas of computer science; in particular, it has been recognized as
an important principle of common-sense reasoning. For these reasons, abduction plays a
central role in Artificial Intelligence, and has recently received growing attention also in the
field of Logic Programming.
Abductive logic programming deals with the problem of finding an explanation for ob-
servations, based on a theory represented by a logic program [35, 36]. Roughly speaking,
abduction is an inverse of modus ponens: Given the clause a / b and the observation a,
abduction concludes b as a possible explanation. Following [13, 35, 36], an abductive logic
programming problem (LPAP) can be formally described as a tuple
where Hyp is a set of ground atoms (called hypotheses), Obs is a set of ground literals
(called manifestations or observations), and LP is a logic program (with disjunction and
negation allowed). An explanation for P is a subset E ' Hyp which satisfies the following
property:
c
d
e
a
Figure
1: A computer network
brave Obs, i.e., there exists a stable model M of LP [E, where all literals in Obs
are true.
In general, an LPAP may have no, a single, or several explanations. In accordance with
Occam's principle of parsimony [42], which states that from two explanations the simpler
explanation is preferable, some minimality criterion is usually imposed on abductive expla-
nations. Two important minimality criterions are Minimum Cardinality and Prioritization
[14]. The minimum cardinality criterion states that a solution A ' Hyp is preferable to a
solution denotes the cardinality of set X). According to
this criterion, the acceptable solutions of P are restricted to the explanations of minimum
cardinality (that are considered the most likely).
Example 11 ABDUCTION Consider the computer network depicted in Figure 1. We make
the observation that, sitting at machine a, which is online, we cannot reach machine e. Which
machines are offline?
This can be easily modeled as the LPAP P net = hHyp net ; Obs net ; LP net i, where the theory
LP net is
and the hypotheses and observations are
and
respectively.
For example, according with the intuition, line(b)g is an explanation for P net ;
but also line(c)g is an explanation and even
off line(c); off line(d); off line(e) g is an explanation. However, under the minimum cardinality
criterion, only line(e)g are the solutions of P net
(we can see them as the most probable ones).Abduction with minimum cardinality can be represented very simply in DATALOG ;:;c .
Given a LPAP pr(P) be the DATALOG ;:;c program (LP [
G;
positive literal, and (for each h 2 Hyp, not h
denotes a new symbol). It is easy to see that the stable models of pr(P) correspond exactly
to the minimum cardinality solutions of P 5 . Intuitively, the clauses in G "guess" a solution
(i.e., a set of hypoteses), the strong constraints in S check that the observations are entailed,
and the weak constraints in W 0 enforce the solution to have minimum cardinality.
Example 12 (cont'd) The DATALOG ;:;c program for our network problem is pr(P net
net [ G; net is the theory of P net and
The stable models of LP net [ G that satisfy both strong constraints / off line(a) and /
reaches(a; e) encode one-to-one the abductive explanations of P net ; among them, the weak
constraints in W 0 select the explanations with minimum cardinality. Indeed, pr(P net ) has
only two stable models: one contains off line(b) and the other contains off line(e).The method of abduction with priorities [14] is a refinement of the above minimality cri-
terion. Roughly speaking, it operates as follows. The set of hypotheses Hyp is partitioned
into groups with different priorities, and explanations which are (cardinality) minimal on the
lowest priority hypotheses are selected from the solutions. The quality of the selected solutions
on the hypotheses of the next priority level is taken into account for further screening;
5 Without loss of generality, we assume that no hypothesis in Hyp appears in the head of a rule in LP .
this process is continued over all priority levels. As pointed out in [14], prioritization is also
a qualitative version of probability, where the different priorities levels represent different
magnitude of probabilities. It is well suited in case no precise numerical values are known,
but the hypotheses can be grouped in clusters such that the probabilities of hypotheses
belonging to the same cluster do not (basically) differ compared to the difference between
hypotheses from different clusters [14].
Example 13 (cont'd) Suppose that statistical data about the computers in the network of
Figure
1 tell us that b falls down very often, e is always online, while c and d sometimes
are offline. Then, we restate the LPAP P net as an abductive problem with prioritization
net ; Obs net ; LP net i, where
According with the prioritization, the abductive explainations are ordered as follows:
Therefore, foff line(b)g becomes the preferred (i.e., most likely) solution. (Note that, differently
from the convention we adopted for weak constraints, the hypotheses of the smallest
priority level are the most important here).By using priorities among weak constraints, we obtain a very natural encoding of abduction
with prioritization. Given pr(P) be the DATALOG ;:;c
G;
and S and G are defined as above. Then, the stable models of pr(P) correspond exactly to
the preferred solutions of P according with the prioritization.
Example 14 (cont'd)
The DATALOG ;:;c program simulating abduction with priorities for the network of Example
11 is pr(P 0
net G; are as above, and
(recall that W 3 is the strongest component). It is easy to see that pr(P 0
net ) has only one stable
model M which contains (only) the hypothesys off line(b).
4 The Complexity of Constraints: Propositional Case
In this section we analyze the complexity of brave reasoning over disjunctive Datalog programs
with constraints. Since the complexity is not independent from the underlying ground
semantics, we consider in this section the stable model semantics [20, 46], which is a widely
acknowledged semantics for normal and disjunctive Datalog programs.
4.1 Preliminaries on Complexity Theory
For NP-completeness and complexity theory, cf. [41]. The classes Σ^P_k, Π^P_k, and Δ^P_k of the Polynomial Hierarchy (PH) (cf. [53]) are defined as follows:

Δ^P_0 = Σ^P_0 = Π^P_0 = P, and for all k ≥ 1, Δ^P_k = P^{Σ^P_{k-1}}, Σ^P_k = NP^{Σ^P_{k-1}}, Π^P_k = co-Σ^P_k.

In particular, NP = Σ^P_1. Here P^C (resp. NP^C) denotes the class of problems that are solvable in polynomial time on a deterministic (resp. nondeterministic) Turing machine with an oracle for any problem in the class C.
The oracle replies to a query in unit time and thus, roughly speaking, models a call to a subroutine for C that is evaluated in unit time. If C has complete problems, then instances of any problem Π' in C can be solved in polynomial time using an oracle for any C-complete problem Π, by transforming them into instances of Π; we refer to this by stating that an oracle for C is used. Notice that all classes C considered here have complete problems.
The classes Δ^P_k have been refined by the classes Δ^P_k[O(log n)], in which the number of calls to the oracle is in each computation bounded by O(log n), where n is the size of the input.

Observe that for all k ≥ 1,

Σ^P_k ⊆ Δ^P_{k+1}[O(log n)] ⊆ Δ^P_{k+1} ⊆ Σ^P_{k+1} ⊆ PSPACE;

each inclusion is widely conjectured to be strict. Note that, by the rightmost inclusion, all these classes contain only problems that are solvable in polynomial space. They allow, however, a finer grained distinction between NP-hard problems that are in PSPACE.
The above complexity classes have complete problems under polynomial-time transformations involving Quantified Boolean Formulas (QBFs). A QBF is an expression of the form

Q_1 X_1 Q_2 X_2 · · · Q_k X_k E,  k ≥ 1,  (1)

where E is a Boolean expression whose atoms are from pairwise disjoint nonempty sets of variables X_1, ..., X_k, and the Q_i's are alternating quantifiers from {∃, ∀}, for all 1 ≤ i ≤ k. If Q_1 = ∃, then we say the QBF is k-existential, otherwise it is k-universal. Validity of QBFs is defined in the obvious way by recursion to variable-free Boolean expressions. We denote by QBF_{k,∃} (resp., QBF_{k,∀}) the set of all valid k-existential (resp., k-universal) QBFs (1).
Given a k-existential QBF Φ (resp. a k-universal QBF Ψ), deciding whether Φ ∈ QBF_{k,∃} (resp. Ψ ∈ QBF_{k,∀}) is a classical Σ^P_k-complete (resp. Π^P_k-complete) problem. The class Δ^P_k also has complete problems for all k ≥ 2; for example, given a valid (k−1)-existential QBF ∃X Q_2 X_2 · · · Q_{k−1} X_{k−1} E with X = x_1, ..., x_n, deciding whether the lexicographically minimum truth assignment^6 φ to X with respect to hx_1, ..., x_n i such that Q_2 X_2 · · · Q_{k−1} X_{k−1} E_φ is valid (such a φ is known to exist) fulfills φ(x_n) = true (cf. [37, 58]).^7 Also Δ^P_k[O(log n)] has complete problems for all k ≥ 1; for example, given (k−1)-existential QBFs Φ_1, ..., Φ_m such that Φ_{i+1} is valid only if Φ_i is valid, deciding whether |{ i : Φ_i is valid }| is odd ([58, 14]).
The problems remain as hard under the following restrictions: (a) E in (1) is in conjunctive normal form and each clause contains three literals (3CNF) if Q_k = ∃, and (b) E is in disjunctive normal form and each monom contains three literals (3DNF) if Q_k = ∀.
4.2 Complexity Results
We analyze the complexity of the propositional case; therefore, throughout this section we
assume that the programs and query literals are ground (i.e., variable-free). The complexity
results, however, can be easily extended to data complexity [57].
Given a program P = hLP, S, W i, we denote by maxH(P) the value that H_{P,M} assumes when the interpretation M violates all the ground instances of the weak constraints (in each component). More precisely, maxH(P) = Σ_i f(W_i) · |ground(W_i)|, where |ground(W_i)| is the cardinality of the set of the ground instances of the constraints in W_i.
Observe that maxH(P) is exponential in the number of components of P; indeed, from the (inductive) definition of f(W_i) (see Section 2.2), it can be easily seen that the weights f(W_i) grow exponentially with the number of components. It turns out that (the value of) maxH(P) can be exponential in the size
of the input; however, it is computable in polynomial time (its binary representation has
polynomial size).
We next report detailed proofs of all complexity results of DATALOG ;:;c programs where
full disjunction is allowed (these proofs are the most involved and technically interesting).
The demonstrations of the results on disjunction-free programs and on head-cycle free disjunctive programs are reported in Appendix B. All complexity results are summarized in Table 1 and discussed in depth in Section 5.
First of all, we look for an upper bound to the complexity of brave reasoning on DATALOG ;:;c .
To this end, we provide two preliminary lemmata.
Lemma 15 Given a DATALOG ;:;c program P = hLP, S, W i and a positive integer n as input, deciding whether there exists a stable model M of LP satisfying S such that H_{P,M} ≤ n is in Σ^P_2.
6 φ is lexicographically greater than ψ w.r.t. hx_1, ..., x_n i if φ(x_j) = true and ψ(x_j) = false for the least j such that φ(x_j) ≠ ψ(x_j).
7 For the base case, QBF_{0,∃} (= QBF_{0,∀}) is the set of all variable-free true formulas.
Proof. We can decide the problem as follows. Guess an interpretation M ⊆ B_LP and check that: (1) M is a stable model of LP, (2) all constraints in S are satisfied, and (3) H_{P,M} ≤ n. Clearly, properties (2) and (3) can be checked in polynomial time, while (1) can be decided by a single call to an NP oracle [39, 15]. The problem is therefore in Σ^P_2. □
Lemma 16 Given a DATALOG ;:;c program P = hLP, S, W i, a positive integer n, and a literal q as input, deciding whether there exists a stable model M of LP satisfying S such that H_{P,M} = n and q is true w.r.t. M is in Σ^P_2.
Proof. We can decide the problem as follows. Guess an interpretation M ⊆ B_LP and check that: (1) M is a stable model of LP, (2) all constraints in ground(S) are satisfied, (3) H_{P,M} = n, and (4) q is true w.r.t. M. Clearly, properties (2), (3), and (4) can be checked in polynomial time, while (1) can be decided by a single call to an NP oracle [39, 15]. The problem is therefore in Σ^P_2. □
Now we are in a position to determine an upper bound on the complexity of brave reasoning on full DATALOG ;:;c programs.
Theorem 17 Given a DATALOG ;:;c program P = hLP, S, W i and a literal q as input, deciding whether q is true in some stable model of P is in Δ^P_3.
Proof. We first call a Σ^P_2 oracle to verify that P admits some stable model satisfying S (otherwise, q cannot be a brave consequence). We then compute (in polynomial time) k = maxH(P). After that, by binary search on [0..k], we determine the cost Σ* of the best stable models of P, by a polynomial number of calls to an oracle deciding whether there exists a stable model M of LP satisfying S such that H_{P,M} ≤ n (n = k/2 on the first call; then, if the oracle answers "yes", n is set to k/4, otherwise n is set to k/2 + k/4, and so on, according to standard binary search) - the oracle is in Σ^P_2 by virtue of Lemma 15. (Observe that the number of calls to the oracle is logarithmic in k and, as a consequence, polynomial in the size of the input, as the value k is O(2^{|P|}).) Finally, a further call to a Σ^P_2 oracle verifies that q is true in some stable model of P, that is, in a stable model M of LP satisfying S with H_{P,M} = Σ* (this is doable in Σ^P_2 by Lemma 16). □
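To make the structure of this argument concrete, the following Python sketch mirrors the binary-search procedure of the proof. The Σ^P_2 oracles of Lemmata 15 and 16 are abstracted as callbacks, and all function names are illustrative assumptions of the sketch rather than part of the formal development.

def brave_consequence(max_h, exists_model_with_cost_at_most, brave_with_cost, q):
    """Decide whether q holds in some optimal-cost stable model.

    max_h:                          the value maxH(P), computable in polynomial time
    exists_model_with_cost_at_most: oracle of Lemma 15 (cost H_{P,M} <= n?)
    brave_with_cost:                oracle of Lemma 16 (q true in a model of cost n?)
    """
    # If no stable model satisfying S exists at all, q is not a brave consequence.
    if not exists_model_with_cost_at_most(max_h):
        return False

    # Binary search on [0..max_h] for the optimal cost: logarithmically many,
    # hence polynomially many, oracle calls.
    lo, hi = 0, max_h
    while lo < hi:
        mid = (lo + hi) // 2
        if exists_model_with_cost_at_most(mid):
            hi = mid          # a model at least this cheap exists: go lower
        else:
            lo = mid + 1      # no model this cheap: go higher
    optimal_cost = lo

    # One final oracle call checks q against the optimal-cost stable models.
    return brave_with_cost(q, optimal_cost)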
We will next strengthen the above Δ^P_3-membership result to a completeness result in this class. Hardness will be proven by reductions from QBFs into problems related to
stable models of DATALOG ;:;c programs; the utilized disjunctive Datalog programs will
be suitable adaptations and extensions of the disjunctive program reported next (which has
been first described in [12]).
Let Φ be a formula of the form ∀Y E, where E is a Boolean expression over propositional variables from X ∪ Y, with X = {x_1, ..., x_n} and Y = {y_1, ..., y_m}. We assume that E is in 3DNF, i.e., E = D_1 ∨ · · · ∨ D_r, and each D_i is a conjunction of literals L_{i,1}, L_{i,2}, L_{i,3}. Define the following (positive) Disjunctive Datalog program LP(Φ):

x_i ∨ x'_i ←                              for each x_i ∈ X
y_j ∨ y'_j ←                              for each y_j ∈ Y
y_j ← v      y'_j ← v                     for each y_j ∈ Y
v ← σ(L_{i,1}), σ(L_{i,2}), σ(L_{i,3})    for each disjunct D_i of E

where σ maps literals to atoms as follows: σ(x_i) = x_i, σ(¬x_i) = x'_i, σ(y_j) = y_j, and σ(¬y_j) = y'_j. Intuitively, x'_i corresponds to ¬x_i and y'_j corresponds to ¬y_j.
Given a truth assignment φ(X) to X = {x_1, ..., x_n}, we denote by M_φ ⊆ B_{LP(Φ)} the following interpretation:

M_φ = { x_i : φ(x_i) = true } ∪ { x'_i : φ(x_i) = false } ∪ { y_j, y'_j : 1 ≤ j ≤ m } ∪ { v }.

Moreover, given an interpretation M of LP(Φ), we denote by φ_M the truth assignment to X such that φ_M(x_i) = true iff x_i ∈ M, for each x_i ∈ X.
Lemma 18 Let Φ and LP(Φ) be the formula and the disjunctive Datalog program defined above, respectively. Then, there is a one-to-one correspondence between the truth assignments φ(X) to X such that Φ_φ = ∀Y E_φ is valid and the stable models of LP(Φ) which contain the atom v. In particular:
1. if Φ_φ is valid, then M_φ is a stable model of LP(Φ) containing v;
2. if M ∈ SM(LP(Φ)) contains v, then Φ_{φ_M} is valid.
Proof. It follows from Theorem 3.1 of [12]. □
Note that LP (\Phi) is constructible from \Phi in polynomial time.
We are now ready to determine exactly the complexity of reasoning over programs with weak constraints.
Theorem 19 Given a DATALOG ;:;c program P = hLP, S, W i and a literal q as input, deciding whether q is true in some stable model of P is Δ^P_3-complete.
Proof. From Theorem 17, it remains to prove only hardness. Δ^P_3-hardness is shown by exhibiting a DATALOG ;:;c program that, under brave reasoning, solves the following Δ^P_3-complete problem Π: let φ(X), X = {x_1, ..., x_n}, be the lexicographically minimum truth assignment with respect to hx_1, ..., x_n i such that Φ_φ = ∀Y E_φ is valid (we assume such a φ exists); now, is it true that φ(x_n) = true? W.l.o.g. we assume that E is in 3DNF of the form defined above. Let P' be the DATALOG ;:;c program hLP(Φ), ∅, W i, where LP(Φ) is the positive disjunctive Datalog program defined above and W = h{⇐ x_n}, {⇐ x_{n−1}}, ..., {⇐ x_1}, {⇐ ¬v}i (the last component being the strongest). We next show that the answer to Π is true iff x_n is true in some stable model of P'.
Let SM_v(LP(Φ)) denote the set of the stable models of LP(Φ) which contain v. From Lemma 18 we know that there is a one-to-one correspondence between SM_v(LP(Φ)) and the set of truth assignments φ which make Φ_φ valid. Since Φ_φ is valid for some φ by hypothesis, this implies that SM_v(LP(Φ)) ≠ ∅. Consequently, P' has some stable models (as there is no strong constraint in P'), and each stable model of P' contains v, since ⇐ ¬v is the strongest constraint in W. The priorities among the constraints ⇐ x_n, ..., ⇐ x_1 impose a total order on SM_v(LP(Φ)). In particular, given two stable models M and M' in SM_v(LP(Φ)), M is preferred to M' (i.e., has lower cost) if and only if the truth assignment φ_{M'} is greater than φ_M in the lexicographic order. Therefore, P' has a unique stable model M (which is in SM_v(LP(Φ))), corresponding to the lexicographically minimum truth assignment φ such that Φ_φ is valid. Hence, the lexicographically minimum truth assignment φ(X) making Φ_φ valid fulfills φ(x_n) = true if and only if x_n is true in some stable model of P'. Therefore, brave reasoning is Δ^P_3-hard for DATALOG ;:;c . □
Next we show that neither strong constraints nor negation affects the complexity of DATALOG ;:;c , which remains unchanged even if we disallow both of them.
Corollary 20 Given a DATALOG ;:;c program P = hLP, S, W i and a literal q as input, deciding whether q is true in some stable model of P is Δ^P_3-complete even if strong constraints are disallowed and LP is stratified or even positive.
Proof. Membership in Δ^P_3 trivially holds from Theorem 17. As far as hardness is concerned, observe that in the reduction of Theorem 19 the DATALOG ;:;c program P' has no strong constraints, and its logic program LP(Φ) is positive (and therefore stratified). □

Thus, the syntactic restrictions imposed in Corollary 20 do not decrease the complexity
of DATALOG ;:;c . Interestingly, the complexity of the language decreases if we disallow
priorities among weak constraints.
Theorem 21 Given a DATALOG ;:;c program P = hLP, S, W i, where W consists of a single component, and a literal q as input, deciding whether q is true in some stable model of P is Δ^P_3[O(log n)]-complete.
Proof. Δ^P_3[O(log n)]-Membership. We proceed exactly as in the proof of Theorem 17. However, since W consists of only one component, this time k = maxH(P) is polynomial in the size of the input (rather than O(2^{|P|}), as for programs with priorities), as there is only one component in W and its weight is 1. Consequently, the number of calls to the Σ^P_2 oracle is logarithmic in the size of the input, as it is logarithmic in k (because we perform a binary search on [0..k]).
Δ^P_3[O(log n)]-Hardness. Let Φ_1, ..., Φ_m be 2-existential QBFs such that Φ_{i+1} is valid only if Φ_i is valid, for 1 ≤ i < m. We next reduce the Δ^P_3[O(log n)]-hard problem of deciding whether |{ i : Φ_i is valid }| is odd to brave reasoning on DATALOG ;:;c programs without priorities (i.e., with only one component in W). W.l.o.g. we assume that (1) m is even, (2) the same propositional variable does not appear in two distinct QBFs, and (3) the QBFs Φ_i are of the form used in the proof of Theorem 19. Define the DATALOG ;:;c program P = hLP, ∅, hW_0ii, where:
a) LP contains LP(Φ_1) ∪ · · · ∪ LP(Φ_m), where LP(Φ_i) is defined as in Theorem 19 apart from replacing each occurrence of v by v_i;
b) LP additionally contains, for each odd i with 1 ≤ i < m, the rule odd ← v_i, ¬v_{i+1};
c) W_0 = { ⇐ ¬v_i : 1 ≤ i ≤ m }.
The weak constraints in W_0 enable us to select the stable models of LP in which the number of valid QBFs is maximum. Then, |{ i : Φ_i is valid }| is odd if and only if odd is true in some stable model of P. □
As for DATALOG ;:;c programs with prioritized weak constraints, the complexity is not
affected at all by negation and strong constraints.
Corollary 22 Let P = hLP, S, W i be a DATALOG ;:;c program where W has a single component. Then, brave reasoning is Δ^P_3[O(log n)]-complete even if P is subject to the following restrictions:
1. strong constraints are disallowed;
2. LP is stratified or even positive.
Proof. Membership in Δ^P_3[O(log n)] comes from Theorem 21. Hardness under restriction 1 and for stratified programs comes from the hardness proof of Theorem 21, as the reduction there makes use of neither priorities nor strong constraints and utilizes only stratified negation. To complete the proof, we only have to demonstrate hardness for the case of positive programs. To this end, we just need to eliminate negation from the logic program of the proof of Theorem 21 as follows:
1. Eliminate from LP all rules with head odd (these are the only rules containing negation).
2. Insert the following weak constraints in W:
even (1 ≤ i ≤ m/2)

5 Discussion on Complexity and Expressiveness of DATALOG ;:;c
In this section we discuss the results on the complexity of DATALOG ;:;c that we established
in the previous section. Interestingly, these results allow us to draw some useful remarks also
on the expressive power of our language, which turns out to be strictly more powerful than
disjunctive Datalog (without constraints). We show also that the increased expressive power
of our language has a relevance even from the practical side, providing some meaningful
examples that can be represented very naturally in DATALOG ;:;c , while their disjunctive
encoding is impossible (or so complex to be unusable).
Table 1: The Complexity of Brave Reasoning in various Extensions of Datalog with Constraints (Propositional Case under Stable Model Semantics)

Table 1 summarizes the complexity results of the previous section, complemented with other results (on the complexity of programs without constraints) already known in the literature. Therein, each column refers to a specific form of constraints, namely: no constraints, strong constraints only, weak constraints without priorities, strong constraints plus weak constraints without priorities, weak constraints with priorities, and strong constraints plus weak constraints with priorities (here "without priorities" means that W has only one component). The lines of Table 1 specify the allowance of disjunction and negation; in particular, ¬_s stands for stratified negation [45] and ∨_h stands for HCF disjunction [3] (see Appendix B). Each entry of the table provides the complexity class of the corresponding fragment of the language. For instance, the entry (8, 5) defines the fragment of DATALOG ;:;c allowing disjunction and stratified negation and, as far as constraints are concerned, only weak constraints with priorities ({w_<}); that is, unstratified negation and strong constraints are disallowed in this fragment. The languages DATALOG : and DATALOG ;: correspond to the fragments (3,1) and (9,1), respectively.
Full DATALOG ;:;c is the fragment (9,6), whose complexity is reported in the lower right corner of the table. The corresponding entry in the table, namely Δ^P_3, expresses that brave reasoning for DATALOG ;:;c is Δ^P_3-complete.
The first observation is that DATALOG ;:;c is strictly more expressive than DATALOG ;: .
Indeed, the former language can express all problems that can be represented in the lat-
ter, as DATALOG ;:;c is an extension of DATALOG ;: . Moreover, the Δ^P_3-completeness of DATALOG ;:;c witnesses that DATALOG ;:;c can express some Δ^P_3-complete problems, while DATALOG ;: cannot (unless the Polynomial Hierarchy collapses) [15]. An example of an encoding of a Δ^P_3-complete problem has been reported in the proof of Theorem 19. To see another example, consider again the abductive reasoning addressed in Section 3.3. A hypothesis is relevant for a logic programming abductive problem P if it occurs in some explanation of P; under minimum cardinality (resp. prioritized) abduction, relevance requires membership in a minimum cardinality solution (resp. in a minimal solution according to the prioritization). From the results of [13, 14], deciding relevance is Δ^P_3[O(log n)]-complete for abduction with the minimum cardinality criterion, while it is Δ^P_3-complete for prioritized abduction. Therefore, relevance of a hypothesis h for a logic programming abduction problem cannot be expressed by reasoning on disjunctive Datalog (as Σ^P_2 is an upper bound on the complexity of brave reasoning on DATALOG ;: [15]). On the other hand, from the translation we provided in Section 3.3, it is easy to see that h is relevant for P iff it is a brave consequence of pr(P) (i.e., relevance can be reduced to brave reasoning on DATALOG ;:;c ). ^8
Considering that DATALOG ;:;c is a linguistic extension of DATALOG ;: by constraints,
it turns out that constraints do add expressive power to DATALOG ;: . However, it is not
the case for strong constraints, as can be seen from Table 1. Indeed, if we look at the various fragments of the language that differ only in the presence of strong constraints, we can note that the complexity is constant (compare column 1 to 2, or 3 to 4, or 5 to 6). In fact, under stable model semantics, a strong constraint of the form ← B is actually a shorthand for p ← B, ¬p (for a fresh atom p).
As an example, consider the problem SCHEDULING of Section 1. The program P_sch we used to express SCHEDULING relies on HCF disjunction with stratified negation and contains only one strong constraint (no weak constraints occur in it - fragment (5,2)). This problem can be easily formulated in HCF DATALOG ;: with unstratified negation (fragment (6,1)) simply by replacing the strong constraint ← assign(X, S), assign(Y, S), incompatible(X, Y) by the DATALOG : rule

p ← assign(X, S), assign(Y, S), incompatible(X, Y), ¬p

where p is a new atom.^9 It is worth noting that brave reasoning allows us to simulate strong constraints if negation is not allowed in the program: replace each strong constraint ← B by the rule p ← B (where p is a fresh atom), and require that ¬p is a brave consequence.
Thus, it is rather clear that strong constraints do not affect at all the complexity and
expressiveness of the language. Therefore, the increased expressive power of DATALOG ;:;c
(w.r.t. DATALOG ;: ) is due entirely to weak constraints. However, weak constraints do not
8 Note that we have Δ^P_3-hardness only if disjunction is allowed in the abductive theory LP. Thus, the example P_net is not a hard instance of the problem; but the translation we gave is general and holds also for hard instances with disjunction in the abductive theory LP.
9 Actually, the SCHEDULING problem can even be formulated in DATALOG : , as HCF disjunction can be simulated by unstratified negation. Indeed, the "guess" expressed by the disjunctive rule can be implemented by the following three nondisjunctive rules (where ts_1, ts_2, ts_3 stand for the disjuncts of the rule and each rule carries the same body as the disjunctive rule):

ts_1 ← ¬ts_2, ¬ts_3
ts_2 ← ¬ts_1, ¬ts_3
ts_3 ← ¬ts_1, ¬ts_2
cause a tremendous increase of the computational cost, they "mildly" increase the complexity,
as we can see by comparing column 1 to columns 3 and 5 of Table 1. For example, if we add weak constraints to HCF disjunction (with or without negation - see lines 4-6), we increase the complexity from NP to Δ^P_2 or Δ^P_2[O(log n)], depending on whether the weak constraints are prioritized or not, respectively. Note that the program P_cliq of Example 9 consists of a weak constraint on top of an HCF program. Likewise, weak constraints added to DATALOG ;: increase its complexity from Σ^P_2 to Δ^P_3 or Δ^P_3[O(log n)], depending on the possible prioritization of the weak constraints (see lines 7-9). Therefore, when adding weak constraints, the complexity of the language always remains at the same level of the polynomial hierarchy; for instance, from NP we can reach Δ^P_2, but we will never jump to Σ^P_2 or to
other classes of the second level of the polynomial hierarchy.
Clearly, weak constraints do not impact on the complexity of fragments admitting a unique
stable model (see lines 1 and 2).
Further remarks on the complexity of the various fragments of the language are the following. First, the presence of negation does not impact on the complexity of the fragments allowing disjunction (see lines 4 to 9). Second, the complexity of the disjunction-free fragments with full negation (see line 3) coincides with that of HCF disjunction (lines 4 to 6) and lies one level down in the polynomial hierarchy with respect to fragments based on plain DATALOG ;: (see line 9).
Besides the increase in theoretical expressiveness - the ability to express problems that
cannot be represented in DATALOG ;: - DATALOG ;:;c provides another even more important
advantage: several problems that can also be represented in DATALOG ;: (and whose complexity is thus not greater than Σ^P_2) admit a much more natural and easy-to-understand encoding in DATALOG ;:;c . This is because coupling disjunction and constraints provides a powerful and elegant tool to encode knowledge. To better appreciate the simplicity and readability of DATALOG ;:;c , consider again the problem MAX CLIQUE, which has been represented very simply and elegantly in DATALOG ;:;c (see Example 9). We next
show that it can be encoded also in DATALOG ;: but, unfortunately, the resulting program
is very tricky and difficult to understand. The technique we use to build a DATALOG ;:
version of MAX CLIQUE is that described in [15], based on a modular approach. A first
module LP 1 generates candidate solutions (cliques in this case) and a second module LP 2
discards those that do not satisfy certain criteria (maximal cardinality here).
The disjunctive program LP_1 generates all cliques and their sizes. We have assumed that the nodes are ordered by the relation succ, which is provided in the input (but could also be "guessed" by another module). Rule r_1 guesses a clique c, and rule r_2 enforces the constraint that every two nodes in c must be joined by an edge. By this rule, the constraint must be satisfied in every stable model (if the constraint is unsatisfiable, then no stable model exists).
The atom count(I; J) expresses that the node I is the J-th node of the clique c. Thus, rule
r 6 defines the size (number of nodes) of c.
Now, to restrict attention to maximum-size cliques only, we add to LP_1 the following DATALOG ;: program LP_2, which discards cliques that are nonmaximal in size.

r': c'(X) ∨ not_c'(X) ← node(X)
r': count1(I, J) ← count1(A, B), succ(A, I), succ(B, J), c'(I)
r': c'size(K) ← greatestVertex(N), count1(N, K)
r': notGreater ← csize(K), c'size(K'), K' ≤ K
r': not_c'(X) ← node(X), notGreater
Here, r'_1 guesses another clique c'. notGreater is true if c' is not a clique. count1 counts the elements of c', and c'size defines the size of c'. If the size of c' is less than or equal to that of c, then notGreater is true. If so, c' and not_c' are assigned the maximum extension by rules r'_8 and r'_9. In this way, if all cliques c' are smaller than c, then all stable models of LP_2 collapse into a single stable model with the maximum extension for both c' and not_c' and containing notGreater. If, on the contrary, there is some clique c' bigger than c, then no stable model of LP_2 contains notGreater. Thus, a further rule imposes that every stable model contain notGreater (if any exists). It is easy to recognize that, for the program LP_1 ∪ LP_2, there is a one-to-one correspondence between stable models and maximum cliques of G.
Comparing the above DATALOG ;: program with the DATALOG ;:;c version of Example 9, the advantage that weak constraints provide in terms of simplicity and naturalness of programming is quite apparent.
Concluding, we would like to bring the reader's attention to the fragment of DATALOG ;:;c with HCF disjunction and stratified negation ((5,6) in Table 1): it has a very clear and easy-to-understand semantics and, at the same time, allows us to express several hard problems (up to Δ^P_2-complete problems) in a natural and compact fashion. (In our opinion, recursion through disjunction or negation makes programs more difficult to understand.)
Perhaps our DATALOG ;: encoding of MAX CLIQUE is not the best, but we are very skeptical about the existence of a DATALOG ;: encoding which is simpler than the DATALOG ;:;c encoding (of Example 9).
--R
Logic Programming and Negation: A Survey
Logic Programming and Knowledge Representation Journal of Logic Programming
Propositional Semantics for Disjunctive Logic Pro- grams
Reasoning with Minimal Models: Efficient Algorithms and Applications.
Disjunctive Semantics Based upon Partial and Bottom-Up Evaluation
Nonmonotonic Reasoning: Logical Foundations of Commonsense.
Preferred Answer Sets for Extended Logic Programs
A Survey of Complexity Results for Non-monotonic Logics
Generalized Closed World Assumption is
A slick procedure for integrity checking in deductive databases
Adding Disjunction to Datalog
On the Computational Cost of Disjunctive Logic Pro- gramming: Propositional Case
Abduction From Logic Programs: Semantics and Complexity.
The Complexity of Logic-Based Abduction
Disjunctive
On the Semantics of Updates in Databases
Disjunctive LP
Semantics of Disjunctive Deductive Databases
Computers and Intractability - A Guide to the Theory of NP-Completeness
Classical Negation in Logic Programs and Disjunctive Databases
Complexity and Expressive Power of Disjunctive Logic Pro- gramming
Extending Datalog with Choice and Weak Constraints
"Disjunctive Logic Programming and Disjunctive Data- bases,"
The Complexity of Optimization Problems.
A theorem-proving approach to database integrity
Declarative and Fixpoint Characterizations of Disjunctive Stable Models
Disjunctive Stable Models: Unfounded Sets
Computing Circumscription
Foundations of Logic Programming.
constraint checking in stratified databases
Foundations of Disjunctive Logic Programming MIT Press
Generalized stable models: a semantics for abduction
Abductive Logic Programming.
Generalizations of OptP to the Polynomial Hierarchy
The Relationship between Logic Program Semantics and Non-Monotonic Reasoning
Autoepistemic Logic
On Indefinite Data Bases and the Closed World Assumption
Computational Complexity
Abductive Inference Models for Diagnostic Problem Solving
Abduction and induction
Weakly Perfect Model Semantics for Logic Programs
"Foundations of deductive databases and logic programming,"
Stable Semantics for Disjunctive Programs
Static Semantics for Normal and Disjunctive Logic Programs
"Deductive and Object-Oriented Databases,"
Modular Stratification and Magic Sets for Datalog Programs with Negation
The Expressive Powers of Stable Models for Bound and Unbound DATALOG Queries
Possible model semantics for disjunctive databases
The Expressive Powers of Logic Programming Semantics
Classifying the Computational Complexity of Problems
Word Problems Requiring Exponential Time
Principles of Database and Knowledge-Base Systems
The Well-Founded Semantics for General Logic Programs
Complexity of relational query languages
Bounded query classes
--TR
--CTR
Alfredo Garro , Luigi Palopoli , Francesco Ricca, Exploiting agents in e-learning and skills management context, AI Communications, v.19 n.2, p.137-154, April 2006
Alfredo Garro , Luigi Palopoli , Francesco Ricca, Exploiting agents in e-learning and skills management context, AI Communications, v.19 n.2, p.137-154, January 2006
On the rewriting and efficient computation of bound disjunctive datalog queries, Proceedings of the 5th ACM SIGPLAN international conference on Principles and practice of declaritive programming, p.136-147, August 27-29, 2003, Uppsala, Sweden
Gianluigi Greco , Sergio Greco , Irina Trubitsyna , Ester Zumpano, Optimization of bound disjunctive queries with constraints, Theory and Practice of Logic Programming, v.5 n.6, p.713-745, November 2005
Simona Perri , Francesco Scarcello , Nicola Leone, Abductive logic programs with penalization: semantics, complexity and implementation, Theory and Practice of Logic Programming, v.5 n.1-2, p.123-159, January 2005
Francesco Calimeri , Giovambattista Ianni, Template programs for Disjunctive Logic Programming: An operational semantics, AI Communications, v.19 n.3, p.193-206, August 2006
Filippo Furfaro , Sergio Greco , Sumit Ganguly , Carlo Zaniolo, Pushing extrema aggregates to optimize logic queries, Information Systems, v.27 n.5, p.321-343, July 2002
Thomas Eiter , Michael Fink , Hans Tompits, A knowledge-based approach for selecting information sources, Theory and Practice of Logic Programming, v.7 n.3, p.249-300, May 2007
Marcelo Arenas , Leopoldo Bertossi , Jan Chomicki, Answer sets for consistent query answering in inconsistent databases, Theory and Practice of Logic Programming, v.3 n.4, p.393-424, July
Nicola Leone , Gerald Pfeifer , Wolfgang Faber , Thomas Eiter , Georg Gottlob , Simona Perri , Francesco Scarcello, The DLV system for knowledge representation and reasoning, ACM Transactions on Computational Logic (TOCL), v.7 n.3, p.499-562, July 2006
Christoph Koch , Nicola Leone , Gerald Pfeifer, Enhancing disjunctive logic programming systems by SAT checkers, Artificial Intelligence, v.151 n.1-2, p.177-212, December
Thomas Eiter , Michael Fink , Giuliana Sabbatini , Hans Tompits, Using methods of declarative logic programming for intelligent information agents, Theory and Practice of Logic Programming, v.2 n.6, p.645-709, November 2002
Evgeny Dantsin , Thomas Eiter , Georg Gottlob , Andrei Voronkov, Complexity and expressive power of logic programming, ACM Computing Surveys (CSUR), v.33 n.3, p.374-425, September 2001 | disjunctive datalog;deductive databases;knowledge representation;computational complexity;nonmonotonic reasoning |
628097 | Integrating Security and Real-Time Requirements Using Covert Channel Capacity. | AbstractDatabase systems for real-time applications must satisfy timing constraints associated with transactions in addition to maintaining data consistency. In addition to real-time requirements, security is usually required in many applications. Multilevel security requirements introduce a new dimension to transaction processing in real-time database systems. In this paper, we argue that, due to the conflicting goals of each requirement, trade-offs need to be made between security and timeliness. We first define mutual information, a measure of the degree to which security is being satisfied by a system. A secure two-phase locking protocol is then described and a scheme is proposed to allow partial violations of security for improved timeliness. Analytical expressions for the mutual information of the resultant covert channel are derived and a feedback control scheme is proposed that does not allow the mutual information to exceed a specified upper bound. Results showing the efficacy of the scheme obtained through simulation experiments are also discussed. | Introduction
Database security is concerned with the ability of a database management system to enforce a
security policy governing the disclosure, modification or destruction of information. Most secure
database systems use an access control mechanism based on the Bell-LaPadula model [3]. This
model is stated in terms of subjects and objects. An object is understood to be a data file, record
or a field within a record. A subject is an active process that requests access to objects. Every
object is assigned a classification and every subject a clearance. Classifications and clearances
are collectively referred to as security classes (or levels) and they are partially ordered. The Bell-LaPadula
model imposes the following restrictions on all data accesses:
a) Simple Security Property: A subject is allowed read access to an object only if the former's
clearance is identical to or higher (in the partial order) than the latter's classification.
b) The *-Property: A subject is allowed write access to an object only if the former's clearance
is identical to or lower than the latter's classification.
The above two restrictions are intended to ensure that there is no flow of information from
objects at a higher access class to subjects at a lower access class. Since the above restrictions are
mandatory and enforced automatically, the system checks security classes of all reads and writes.
Database systems that support the Bell-LaPadula properties are called multilevel secure database
systems (MLS/DBMS).
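As a concrete illustration of the two restrictions, the following Python sketch checks read and write requests against an ordered set of levels. The level names, the use of a total order (the model itself only requires a partial order), and all identifiers are simplifying assumptions of the sketch.

# Minimal sketch of mandatory access checks in the style of Bell-LaPadula.
LEVELS = {"UNCLASSIFIED": 0, "CONFIDENTIAL": 1, "SECRET": 2, "TOP_SECRET": 3}

def dominates(level_a, level_b):
    """True if level_a is identical to or higher than level_b."""
    return LEVELS[level_a] >= LEVELS[level_b]

def can_read(subject_clearance, object_classification):
    # Simple Security Property: read at the same level or read down only.
    return dominates(subject_clearance, object_classification)

def can_write(subject_clearance, object_classification):
    # *-Property: write at the same level or write up only.
    return dominates(object_classification, subject_clearance)

# Example: a SECRET subject may read CONFIDENTIAL data but not write to it.
assert can_read("SECRET", "CONFIDENTIAL")
assert not can_write("SECRET", "CONFIDENTIAL")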
The Bell-LaPadula model prevents direct flow of information from a higher access class to a lower
access class, but the conditions are not sufficient to ensure that security is not violated indirectly
through what are known as covert channels [14]. A covert channel allows indirect transfer of
information from a subject at a higher access class to a subject at a lower access class. In the
context of concurrency control approaches, a covert channel arises when a resource or object in the
database is shared between subjects with different access classes. The two subjects can cooperate
with each other to transfer information. The degree to which security is compromised by a covert channel is measured by the amount of information that may be transferred from a high subject to a low subject. This will be explained in greater detail in Section 3.
A real-time database management system (RTDBMS) is a transaction processing system where
transactions have explicit timing constraints. Typically a timing constraint is expressed in the
form of a deadline, a certain time in the future by which a transaction needs to be completed. In
a real-time system, transactions must be scheduled and processed in such a way that they can be
completed before their corresponding deadline expires. Conventional data models and databases are
not adequate for time-critical applications. They are designed to provide good average performance,
while possibly yielding unacceptable worst-case response times. As advances in multilevel security
take place, MLS/DBMSs are also required to support real-time requirements. As more and more of
such systems are in use, one cannot avoid the need for integrating real-time transaction processing
techniques into MLS/DBMSs.
Concurrency control is used in databases to manage the concurrent execution of operations by
different subjects on the same data object such that consistency is maintained. In multilevel secure
databases, there is the additional problem of maintaining consistency without introducing covert
channels. In this paper, we concern ourselves with concurrency control mechanisms that have to
satisfy both security and real-time requirements. We advance our claim that conflicts between
these two requirements are inherent and hence trade-offs between them are necessary. A summary
of related work in this area is included in Section 2. Some background information on correctness
criteria for secure schedulers is covered in Section 3. In Section 4, the problems associated with
time-constrained secure concurrency control are studied. In Section 5, the secure two phase locking
protocol and 2PL-High Priority are discussed. A scheme that allows partial violations of security
requirements is proposed in Section 6, and the mutual information of the resultant covert channel
is derived. A feedback control mechanism that maintains the amount of mutual information of the
system at a specified upper bound is described in Section 7. In Section 8, it is shown that the
analysis and control of the single covert channel considered in Section 6 is enough to bound the
mutual information of all covert channels that could be potentially exploited. An implementation
and performance analysis of the feedback control mechanism is explained in Section 9. Section 10
concludes the paper.
2 Related Work
There have been several interesting approaches to analyzing and reducing the covert channel band-width
[30, 11, 19, 9]. While some of these approaches could be used to specify policies to make
it difficult to exploit the covert channels that may arise from the trade-off, others may not be
applicable in real-time application. For example, a collection of techniques known as fuzzy time
[30, 11] is inappropriate in a real-time setting, since the overall mission may be jeopardized by not
getting the exact timing information. In fact, this problem between real-time and covert channel
was identified in Secure Alpha work [10]. They have pointed out that slowing clocks or isolating
processes from precise timing information is impractical for real-time systems. An adaptive solution
to make appropriate trade-offs between the requirements of real-time and security is essential,
and it requires resolution rules to specify the appropriate behavior. To be effective, it is desirable
that the rules be based on application-specific knowledge [5]. Our resolution specification approach
is similar to their idea of "Important Enough to Interfere" and signaling cost which consider the
timeliness and levels between which a covert channel could be established.
The idea of using probabilistic partitioning in bus-contention covert channel is proposed in [9].
Instead of keeping track of the percentage of violations for making decisions when conflicts occur,
the system could enforce a certain pre-determined percentage by picking up a random number that
generates 0 or 1, based on the required percentage. It needs further study to find out whether
this way of enforcing the requirements provides a reasonable level of flexibility in specifying the
requirements and reduced system overhead.
To improve the practicality and usability of the covert channel analysis in real systems, it might
be necessary to provide methods to specify higher-level goals regarding the potential trade-offs
between real-time requirements and covert channel leaks. The user-centered security approach [31]
which considers user needs as a primary design goal of secure system development could be useful
to figure out a higher-level description of user needs and expectations on specific situations. It
could begin with some scenario-based requirement specification for the system to clearly identify
the situation and necessary actions to take. The system may need to install monitors to check the
system states and perform necessary adjustments by feedback control mechanisms to maintain the
high-level goals specified by the user. Ideas similar to the dynamic adaptive security model proposed
in [29] could be used to provide allowable trade-offs between security and real-time performance.
George and Haritsa studied the problem of supporting real-time and security requirements [7].
They examined real-time concurrency control protocols to identify the ones that can support the
security requirement of non-interference. This work is fundamentally different from our work because
they make the assumption that security must always be maintained. In their work, it is not
permissible to allow a security violation in order to improve on real-time performance.
There have been several approaches to exploit the possible trade-offs between real-time and
security requirements. In [20], a novel concurrency control protocol has been proposed to meet
the real-time, security, and serializability requirements of applications. This protocol employs a
primary and secondary copies for each object. While the transactions at higher levels refer to
the secondary copy, the transactions at the same classification level as the object refer to the
primary copy. Due to this scheme, a higher level transaction is never delayed due to a lower-level
transaction. Similarly, a high-level transaction never interferes with a low-level transaction. In
[27], an adaptive protocol was proposed with performance results which illustrate the clear benefit
of using adaptive approach in secure real-time databases. In their approach, conflicts are resolved
based on two factors: the security factor which indicates the degree of security violations, and the
deadline miss factor which indicates the timeliness of the system. Depending on the values of those
factors, the system takes either the secure option (no security violation) or the insecure option (no priority inversion).
In [21], a multiversion locking protocol was proposed to provide security and timeliness
together, using multiple versions of data objects. The protocol provides 1-copy serializability and
eliminates all the covert channels. The protocol ensures that high priority transactions are neither
delayed nor aborted by low priority transactions. In [28], a set of flexible security policies were
proposed and evaluated, based on the notion of partial security, instead of absolute security. They
proposed a specification method that enables the system designer to specify important properties
of the database at an appropriate level. A tool can analyze the database specification to find
potential conflicts, and to allow the designer to specify the rules to follow during execution when
those conflicts arise. Ahmed and Vrbsky also studied the trade-offs between security and real-time
requirements, and proposed a secure optimistic concurrency control protocol [2].
3 Correctness Criteria for Secure Schedulers
Covert channel analysis and removal is one of the most important issues in multilevel secure concurrency
control. The notion non-interference has been proposed [8] as a simple and intuitively
satisfying definition of what it means for a system to be secure. The property of non-interference
states that the output as seen by a subject must be unaffected by the inputs of another subject
at a higher access class. This means that a subject at a lower access class should not be able to
distinguish between the outputs from the system in response to an input sequence including actions
from a higher level subject and an input sequence in which all inputs at a higher access class have
been removed [13].
An extensive analysis of the possible covert channels in a secure concurrency control mechanism
and the necessary and sufficient conditions for a secure, interference-free scheduler are given in [13].
Three of these properties are of relevance to the secure two phase locking protocol discussed in this
paper. Each property represents one way to prevent a covert channel in a secure system. Clearly,
all three properties need to be enforced to completely eliminate the possibility of covert channels
with secure two-phase concurrency control protocols.
For the following definitions, given a schedule s and an access level l, purge(s, l) is the schedule with all actions at a level higher than l removed from s.
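For instance, under the simplifying assumption that each action is tagged with its subject's level and that levels are comparable, purge can be sketched in a few lines of Python (the tuple representation of actions is an assumption of the sketch):

# Sketch of purge(s, l): drop every action whose level is strictly higher than l.
def purge(schedule, level):
    return [(action, lvl) for (action, lvl) in schedule if lvl <= level]

# Example: purging a schedule at level 0 removes the level-1 (higher) actions.
s = [("r1[x]", 1), ("w2[x]", 0), ("c2", 0), ("c1", 1)]
assert purge(s, 0) == [("w2[x]", 0), ("c2", 0)]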
Value Security: A scheduler satisfies this property if values read by a subject are not
affected by actions with higher subject classification levels. Stated formally, for an input
schedule p, the output schedule s is said to be value secure if purge(s; l) is view equivalent 1
to the output schedule produced for purge(p; l).
Delay Security: This property ensures that the delay experienced by an action is not
affected by the actions of a subject at a higher classification level. Here, the delay is measured
as the time between the arrival of the request for the execution of an action at the system
to the time the action is completed. For an input schedule p and an output schedule s, a
scheduler is delay secure if for all levels l in p, each of the actions a 1 in purge(p; l) is delayed
in the output schedule produced for purge(p; l) if and only if it is delayed in purge(s; l).
Recovery Security: Due to conflicting actions, transactions in a real-time database system
may be involved in a deadlock. Recovery of the system from this state involves aborting
one or more actions leading to the deadlock. The recovery security property ensures that
the occurrence of a deadlock appears the same to a low-level subject, independent of whether
higher level actions are in the schedule or not. The actions taken to recover from deadlock are
also not affected by the presence of higher level transactions. When a deadlock occurs, other
channels are available for signaling in addition to those protected by value security and delay
security. The following condition takes care of these channels [13]: A scheduler is recovery
secure for all schedules p if, on the arrival of an action AX for scheduling:
1) If a deadlock occurs, resulting in a set of actions D being rolled back, then for all subject
classification levels l in p, which dominate one of those in D, a deadlock also occurs in
response to the schedule purge(p; l) on the arrival of the action AX , with the actions
purge(D; l) being rolled back. In other words, the presence of the higher level actions
did not interfere with the occurrence of deadlocks among lower level actions.
Two schedules are view equivalent if each read operation reads the same value (reads-from relationship) and the
final values of each data object is the same in both schedules [4].
2) If no deadlock occurs on the arrival of AX , then for all subject classification levels l in
it does not occur on the arrival of AX in the input schedule purge(p; l). In other
words, if there were no deadlocks among actions at a lower level in the presence of
higher level actions, then there would be none in their absence. This again emphasizes
the non-interference of high-level actions with low-level actions.
4 Performance Penalty of Enforcing Security
In order to enforce security in database systems, we need to enforce the property of non-interference
of high-level transactions with low-level transactions. For example, in a secure environment, a
transaction at a higher level:
ffl cannot cause a transaction at a lower access class to abort. If it is allowed to do so, it is
possible that it can control the number of times a lower level transaction is aborted, thereby
opening a covert channel.
ffl cannot conflict with a transaction at a lower access class. If such a conflict does occur, the
higher level transaction has to be blocked or aborted, not the low level transaction.
ffl cannot be granted greater priority of execution over a transaction at a lower access class.
However, such enforcement has the unfortunate effect of degrading performance for high-level
transactions in a real-time system. For example, a typical real-time data base assigns priorities
to transactions based on how close they are to missing their deadlines [1, 22, 24]. A high-level
transaction with a closer deadline is assigned a higher-priority than a possibly conflicting low-level
transaction with farther deadlines. However, this may be interpreted as interference of the high-level
transaction with the low-level transaction in a secure environment. In other words, if
we were to enforce security and hence the non-interference properties described above, we need to
assign higher priority to the low-level transaction and a lower priority to the high-level transaction.
This may, however, result in missed deadlines for the high-level transaction. In other words,
the performance of high-level transactions is being penalized to enforce security.
To illustrate the penalty on high-level transactions due to security enforcement, let us consider
the following example. A sequence of four transactions T_1, T_2, T_3, T_4 is input to a scheduler (the transactions arrived in the order T_1, T_2, T_3, T_4), where T_1 is a SECRET transaction and T_2, T_3, and T_4 are UNCLASSIFIED transactions, with T_2 and T_3 conflicting with T_1.
Assume that T_1, T_2, T_3 and T_4 have priorities 5, 7, 10 and 12 respectively, and that the priority assignment scheme is such that if priority(T_i) > priority(T_1), then T_i is more critical and has to be scheduled ahead of T_1. In the above example, T_2 and T_3 are initially blocked by T_1 when they arrive. When T_1 completes execution, T_3 is scheduled ahead of T_2, since it has a greater priority than T_2, and the transaction execution order would be T_1 T_3 T_2 T_4. However, if the transaction T_1
is removed, the execution order would be T 2 T 3 T 4 because T 2 would have been scheduled as soon
as it had arrived. The presence of the SECRET transaction T 1 thus changes the value read by
the UNCLASSIFIED transaction T 4 , which is a violation of value security. Delay security is also
violated, since the presence of T 1 delays both T 2 and T 3 .
Therefore, to satisfy the correctness properties discussed in Section 3 (i.e., to close all covert
channels), we see that a very high performance penalty would be paid. In our approach to improving
performance, we shall discuss a method to trade-off mutual information transfer allowed by a covert
channel with performance (measured in terms of deadline miss percentage).
5 Secure Two Phase Locking
Before a discussion and analysis of covert channels, let us study two concurrency control approaches
at different ends of the spectrum-Secure 2PL, a fully secure protocol which does not consider
transaction priorities while scheduling and 2PL-HP, which has some deadline cognizance built into
it, but is not free from covert channels.
5.1 Secure 2PL
Basic two-phase locking does not work for secure databases because a transaction at a lower access
class (say T l ) cannot be blocked due to a conflicting lock held by a transaction at a higher access class
(T h ). If T l were somehow allowed to continue with its execution in spite of the conflict, then non-interference
would be satisfied. The basic principle behind the secure two-phase locking protocol is
to try to simulate execution of Basic 2PL without blocking the lower access class transactions by
higher access class transactions.
Consider the two transactions in the following example (Example 1): T_1 = r_1[x] c_1, executing at a higher access class, and T_2 = w_2[x] c_2, executing at a lower access class.
Basic two phase locking would fail because w_2[x] would be blocked waiting for T_1 to commit and release the read lock on x (i.e., ru_1[x]). In our modification to the two phase locking protocol, T_2 is allowed to set a virtual lock vwl_2[x], write onto a version of x local to T_2, and continue with the execution of its next operation, i.e., c_2. When T_1 commits and releases the lock on x, T_2's virtual write lock is upgraded to a real lock and w_2[x] is performed. Until w_2[x] is performed, no conflicting action is allowed to set a lock on x. The sequence of operations performed is therefore rl_1[x] r_1[x] vwl_2[x] c_2 c_1 ru_1[x] wl_2[x] w_2[x].
This modification alone is not enough, as illustrated in the following example (Example 2): T_1 = r_1[x] r_1[y] c_1, at a higher access class, and T_2 = w_2[x] w_2[y] c_2, at a lower access class, with T_1's read of x arriving first.
The sequence of operations that would be performed is rl_1[x] r_1[x] vwl_2[x] vw_2[x] wl_2[y] w_2[y]
c 2 . After these operations, deadlock would occur because r 1 [y] waits for w 2 [y] to release its virtual
lock and vw 2 [x] waits for r 1 [x] to release its lock. This deadlock would not have occurred in basic
two phase locking. Note that our aim of trying to simulate execution of basic two phase locking
is not being achieved. On closer inspection, it is obvious that this problem arises because w 2 [y] is
allowed to proceed with its execution even though w 2 [x] could only write onto a local version of x
due to the read lock rl 1 [x] set by T 1 . To avoid this problem, for each transaction T i , two lists are
maintained - before(T i ) which is the list of active transactions that precede T i in the serialization
order and after(T i ) which is the list of active transactions that follow T i in the serialization order.
This idea is adapted from [24], where before cnt and after cnt are used to dynamically adjust the
serialization order of transactions. The following additions are made to the basic two phase locking
protocol:
When an action p_i[x] sets a virtual lock on x because of a real lock ql_j[x] held by T_j, then the transactions in after(T_i) are added to after(T_j), and T_j and all transactions in before(T_j) are added to before(T_i).
When an action w i [x] arrives and finds that a previous action w i [y] (for some data item y)
has already set a virtual write lock vwl i [y], then a dependent lock dvwl i [x] is set with respect
to vwl i [y].
When an action p i [x] arrives and finds that a conflicting virtual or dependent lock vql j [x] or
dvql j [x] has been set by a transaction T j which is in after(T i ), then p i [x] is allowed to set a
lock on x and perform p i [x] in spite of the conflicting lock.
A dependent virtual lock dvpl_i[x], dependent on some action q_i[y], is upgraded to a virtual lock when vql_i[y] is upgraded to a real lock.
The maintenance of a serialization order and the presence of dependent locks are necessary to
prevent uncontrolled acquisition of virtual locks by transactions at lower access classes.
For Example 2, the sequence of operations that would now be performed is rl_1[x] r_1[x] vwl_2[x] vw_2[x] dvwl_2[y] dvw_2[y] c_2 rl_1[y] r_1[y] c_1, after which T_2's virtual and dependent locks are upgraded and w_2[x] and w_2[y] are performed on the database.
A formal description of the Secure 2PL algorithm and its correctness proofs are given in [6].
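The following Python fragment is a loose sketch of only the conflict-handling decision implied by the rules above; it is not the full Secure 2PL algorithm (lock upgrades, commit processing, and deadlock handling are omitted), and the data-structure layout and attribute names are assumptions of the sketch.

class Txn:
    def __init__(self, name, level):
        self.name = name
        self.level = level            # security level (higher number = higher class)
        self.before = set()           # active transactions preceding this one
        self.after = set()            # active transactions following this one
        self.holds_virtual_lock = False

def on_conflict(requester, holder, holder_lock_is_virtual):
    """Return the action taken when a lock request conflicts with a held lock."""
    # A conflicting virtual/dependent lock held by a transaction that already
    # follows the requester in the serialization order does not block it.
    if holder_lock_is_virtual and holder in requester.after:
        return "proceed"

    if holder.level > requester.level:
        # Blocking the lower-level requester would let the higher-level holder
        # interfere with it, so the requester continues on a local version.
        action = "dependent-virtual-lock" if requester.holds_virtual_lock else "virtual-lock"
        # Record the induced serialization order: the requester follows the holder.
        holder.after |= {requester} | requester.after
        requester.before |= {holder} | holder.before
        return action

    # Otherwise behave like basic 2PL: wait for the holder to finish.
    return "block"

# Example 1 revisited: lower-level T2 requesting against higher-level T1's real lock.
t1, t2 = Txn("T1", level=1), Txn("T2", level=0)
assert on_conflict(t2, t1, holder_lock_is_virtual=False) == "virtual-lock"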
5.2 2PL - High Priority
In 2PL-HP [1], all data conflicts are resolved in favor of the transaction with higher priority. When
a transaction requests a lock on an object held by other transactions in a conflicting mode, if the
requester's priority is higher than that of all lock holders, the holders are restarted and the requester
is granted the lock; if the requester's priority is lower, it waits for the lock holders to release the
lock. In addition, a new read lock requester can join a group of read lock holders only if its priority
is higher than that of all waiting write lock operations.
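A minimal Python sketch of this resolution rule follows; the lock-table bookkeeping is omitted and the transaction attributes are assumptions of the sketch.

# 2PL-HP conflict resolution: the higher-priority transaction always wins.
def resolve_conflict_2pl_hp(requester, holders):
    """requester: transaction asking for the lock; holders: conflicting lock holders."""
    if all(requester.priority > h.priority for h in holders):
        # Restart every holder and grant the lock to the requester.
        return ("restart-holders", list(holders))
    # Otherwise the requester waits for the holders to release the lock.
    return ("wait", [])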
A real-time secure concurrency control must possess two characteristics - high performance and
minimal deadline miss percentage. The secure two phase locking protocol [6] was shown to yield best
average case performance among all the secure concurrency control approaches whose performance
was evaluated in [26]. We therefore use it as a basis for our approach to the problem of real-time
secure concurrency control. From our discussion earlier in this paper, it is clear that priority
based transaction scheduling is not feasible for a fully secure database system. Therefore, for
minimizing deadline miss percentage, we take the approach that partial security violations under
certain conditions are permissible, if it results in substantial gain in time cognizance.
6 Covert Channel Analysis
6.1 Covert Channels and Mutual Information
The systematic study of covert channels began with [14]. As an example of a simple covert channel
consider two processes running on a system that schedules them alternately for exactly one or two
time quanta each, the choice being up to the process [16]. One process (the sender) may send
information covertly to the other (receiver) by encoding successive symbols (0's and 1's in this
paper) in the amount of time taken for its execution. If the receiver had to wait for one quantum
before its execution, then it assumes a "0" was sent; if it waits for two quanta, it assumes a "1"
was sent. In the absence of any other processes, the maximum rate at which information can
be transmitted through this channel is one bit per quantum (assuming only 0's are transmitted).
Assuming that 0's and 1's are transmitted with equal frequency, the information rate is 1/((0.5)(1) + (0.5)(2)), or 2/3 bits per quantum. The presence of other processes in the system interferes with
the transmission and can be viewed as "noise". The presence of noise decreases the information
rate.
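The 2/3 figure can be reproduced with a few lines of Python; this is a small illustrative computation (entropy per symbol divided by expected transmission time), not part of the original analysis.

from math import log2

def rate_bits_per_quantum(p_one):
    """p_one is the frequency of '1' symbols (two quanta); '0' takes one quantum."""
    p_zero = 1.0 - p_one
    entropy = 0.0
    for p in (p_zero, p_one):
        if p > 0:
            entropy -= p * log2(p)
    expected_time = p_zero * 1 + p_one * 2
    return entropy / expected_time

print(rate_bits_per_quantum(0.5))   # 1 / 1.5 = 0.666... bits per quantum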
Covert channel analysis is just a subset of information theory, which is concerned with sending
signals from a transmitter to a receiver, with the possibility of noise degrading the signal fidelity.
Shannon's pioneering work [23] gives an upper limit on the rate at which messages can be passed
through the communication channel based solely on how noise affects the transmission of signals.
In popular usage, the term "information" is elusive to define. However, information has a precise
meaning to a communication theorist, expressed solely in terms of probabilities of source messages
and actions of the channel. A precise measurement of information is based on various entropy
(or uncertainty) measures associated with the communication process and information exchange is
defined by reduction in entropy.
Consider a discrete scalar random variable X, which can be regarded as an output of a discrete
message source. Suppose the variable X can assume one of K possible outcomes, labeled x_1, x_2, ..., x_K, occurring with probabilities specified by P_i = P(X = x_i). The entropy of the random variable X is

H(X) = − Σ_{i=1}^{K} P_i log P_i .
The entropy measures the "information" or "surprise" of the different values of X. For a particular
value x_i the surprise is log(1/P_i); if x_i happens with certainty then its surprise is zero, and if x_i never occurs its surprise is maximal at infinity. Note that the base two logarithm is used, so that the unit of information is the bit.
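For instance (an illustrative computation with example probabilities), a binary source with P_1 = P_2 = 0.5 has H(X) = −(0.5 log 0.5 + 0.5 log 0.5) = 1 bit, while a heavily biased source with P_1 = 0.9 and P_2 = 0.1 has H(X) = −(0.9 log 0.9 + 0.1 log 0.1) ≈ 0.47 bits; the more predictable the source, the less information each symbol carries.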
Information theory is concerned with how the input or transmission entropy changes while it
travels through the channel. If the channel is noiseless then the amount of information in a transmission
should be unchanged. If there is noise in the channel, then the fidelity of the signal is
degraded and the information sent is diminished. If the channel noise is so great and all encom-
passing, then there is no more surprise in seeing any symbol over another. This is mathematically
modeled by the equivocation or conditional entropy H(X|Y), where X is the random variable representing
the channel input and Y is the random variable representing the channel output. The
uncertainty associated with X, given that Y = y_j is observed, is H(X|Y = y_j) = - \sum_i P(x_i|y_j) \log_2 P(x_i|y_j).
Conditional entropy can therefore be defined as the average of this quantity over the channel outputs:

H(X|Y) = \sum_j P(y_j) H(X|Y = y_j) = - \sum_i \sum_j P(x_i, y_j) \log_2 P(x_i|y_j) .
Shannon defined information as follows: the (average) mutual information shared between random
variables X and Y is

I(X;Y) = H(X) - H(X|Y),

i.e., the information Y reveals about X is the prior uncertainty in X, less the posterior uncertainty
about X after Y is specified. From this definition, we have

I(X;Y) = \sum_i \sum_j P(x_i, y_j) \log_2 [ P(x_i|y_j) / P(x_i) ].

Using the definition of conditional probability, P(x_i, y_j) = P(x_i) P(y_j|x_i), so that

I(X;Y) = \sum_i \sum_j P(x_i) P(y_j|x_i) \log_2 [ P(y_j|x_i) / P(y_j) ].     (1)

This expresses I in terms of the input distribution P(x_i) and the channel transition probabilities P(y_j|x_i).
When transmitting, the transmitter can do nothing about the noise, and the receiver is passive
and waits for symbols to be passed over the channel. However, the transmitter can send different
symbols with different frequencies; thus there are different distributions for X. By changing the
frequency of the symbols sent, the transmitter can affect the amount of information sent to the
receiver.
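For concreteness, these quantities can be computed directly from an input distribution and a channel transition matrix. The following sketch is illustrative only and is not part of the original analysis:

import math

def mutual_information(p_x, p_y_given_x):
    # p_x[i] = P(x_i); p_y_given_x[i][j] = P(y_j | x_i).
    # Returns (H(X), H(X|Y), I(X;Y)) in bits.
    K, J = len(p_x), len(p_y_given_x[0])
    p_xy = [[p_x[i] * p_y_given_x[i][j] for j in range(J)] for i in range(K)]
    p_y = [sum(p_xy[i][j] for i in range(K)) for j in range(J)]
    h_x = -sum(p * math.log2(p) for p in p_x if p > 0)
    h_x_given_y = -sum(p_xy[i][j] * math.log2(p_xy[i][j] / p_y[j])
                       for i in range(K) for j in range(J) if p_xy[i][j] > 0)
    return h_x, h_x_given_y, h_x - h_x_given_y

# Example: a noiseless binary channel with equiprobable inputs gives I = 1 bit.
print(mutual_information([0.5, 0.5], [[1.0, 0.0], [0.0, 1.0]]))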
There is a critical difference between covert channels and communication channels, though. The
goal of a communication channel designer is to maximize mutual information and minimize the
influence of noise. When covert channels exist, the goal of the system designer is exactly the
opposite: to try to minimize the mutual information, usually by increasing noise.
6.2 A Noisy Covert Channel
In any system where a locking mechanism is used for synchronization of concurrently executing
transactions, whenever a transaction T1 requests a lock on a data item x on which another transaction
T2 holds a conflicting lock, there are two possible options:
- T1 could be blocked until T2 releases the lock.
- T2 could be aborted and the lock granted to T1.
The latter option is a "non-secure" option that is taken by 2PL-HP when T 1 has a higher priority
than T 2 . The former option, along with the additional conditions and actions described in Section
5.1, would be the "secure" option if T 1 were at a higher security level than T 2 . However, this option
does not take into account the priorities of T 1 and T 2 . In our approach, we try to strike a balance
between these two options. Consider a Bernoulli random variable X with parameter q (i.e., X takes
the value 1 with probability q and the value 0 with probability 1 - q). Whenever a conflict arises between
a lock-holding transaction (T2) and a lock-requesting transaction (T1), such that priority(T1) > priority(T2),
T2 is aborted if X = 1, i.e., T2 is aborted with a probability q (the
"non-secure" option is taken). If X = 0, then the "secure" option is taken. Note that q can be
used to control the extent to which security is satisfied. The smaller the value of q, the greater the extent to
which security is satisfied and, therefore, the greater the miss percentage.
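A minimal sketch of this probabilistic conflict-resolution rule follows; the function and field names are hypothetical, and the "secure option" stands for the blocking behavior of Secure 2PL described in Section 5.1:

import random
from collections import namedtuple

Transaction = namedtuple("Transaction", ["tid", "priority"])

def resolve_conflict(holder, requester, q):
    # With probability q take the non-secure, priority-cognizant option
    # (abort the lock holder, as in 2PL-HP); otherwise fall back to the
    # secure option (handle the requester as Secure 2PL would).
    if requester.priority > holder.priority and random.random() < q:
        return "abort_holder"      # non-secure option
    return "secure_option"         # secure option (e.g., block the requester)

# Example: q = 0 never violates security; q = 1 always favors timeliness.
t_low  = Transaction("T2", priority=1)
t_high = Transaction("T1", priority=2)
print(resolve_conflict(t_low, t_high, q=0.5))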
Unfortunately, this approach is not free from covert channels. Consider two collaborating trans-
actions, one at security level LOW and the other at security level HIGH, each consisting of just
one operation. Assume that at the start of a time interval of duration t (henceforth referred to as
a tick), the LOW transaction submits a write on a data item x and shortly thereafter (within the
tick), the HIGH transaction submits a read on x. Also assume that the transactions collaborate
to ensure that the HIGH transaction has an earlier deadline than the LOW transaction. Now, in
the absence of other transactions and if q were 1, then the LOW transaction would certainly be
aborted due to the HIGH transaction. If the HIGH transaction were not submitted, then the LOW
transaction would commit. Therefore it takes just one tick for the HIGH transaction to transmit
either a "1" (by submitting its operation) or a "0" (by not submitting its operation). In this case,
the mutual information of the channel is 1 bit/tick. There are, however, two factors which introduce
noise into this channel: firstly, the presence of other transactions and, secondly, the probability q
of the lock holding transaction being aborted.
The first factor is modeled by a set of parameters: r (Table 1) and p1 through p6 (Table 2).
r is the probability that a transaction Ti (other than T1 and T2) with an earlier deadline than
the LOW transaction submits a read or a write on x before the end of execution of the LOW
transaction, i.e., the aborting of the LOW transaction may be caused by either the HIGH transaction
or Ti. The probabilities p1 through p6 correspond to events (shown as Si) concerning Ti's lock
request and release times with respect to the LOW and HIGH transactions. These are summarized
in Table 2. For example, p1 is the probability that Ti does not arrive within τL units of the arrival
of the LOW transaction (TL). In other words, if the HIGH transaction (TH) has not been submitted,
then with probability p1, TL would be committed and a "0" conveyed to the LOW user. Similarly,
p2 is the probability with which Ti arrives after TL's arrival but within the lock-holding time (τL)
of TL. Since a transaction is often delayed due to other operating system overheads such as interrupt
handling, we have introduced a time-out factor for the transactions. For example, if the LOW user
does not get a response (abort or commit of TL) within θL units of the initiation of TL, then TL is
automatically aborted by an explicit operation from the LOW user. Such instances are considered
as ERROR by that user. In the next subsection, we shall derive an equation for mutual information
in terms of these factors.
An important assumption has to be stated at this point regarding the extent of knowledge that a
HIGH user has. We assume that a HIGH user has information only about the transactions that it
and its collaborators submit, i.e., all system-maintained information such as current arrival rate of
transactions, the deadlines of other transactions in the system, locks held by other transactions, etc.,
is at a SUPER-HIGH level and inaccessible to HIGH users. This assumption is not unfair because
the concurrency control manager is trusted and therefore, should not leak out information that
could be used by a malicious user. This assumption is important because if a malicious HIGH user
has access to system information, it has control over q. If it knows which transactions could possibly
interfere with its transmission of a "1" to the LOW user, it can then get rid of those transactions
as follows: At the start of a tick, the HIGH user first finds a set of active transactions that have
an earlier deadline than the collaborating LOW level transaction and a data item on which each
transaction holds a lock. This can be represented as a set of (transaction, data item) tuples.
It then submits transactions with earlier deadlines that access each of these data items, thereby
causing the abortion of all the transactions in the set. This does not eliminate the effect of q on
the channel, but it reduces the noise due to other transactions.
6.3 Analysis of Mutual Information
To derive an expression for mutual information of the covert channel, we make the following
assumptions:
- The LOW user submits transaction TL periodically. TL has a period of τ, a computation time
requirement of τL, and a priority of PL.
- TL requires a write-lock on a data object x at the beginning of its execution. The lock is released
only at the end of its execution.
- The HIGH user also has a periodic behavior (with periodicity τ). Whenever it intends to
send a "1" via the covert channel to the LOW user, it submits transaction TH; it does not
submit TH in a period when it intends to send a "0". In other words, the time interval between
successive arrivals of TH is an integral multiple of τ.
- The arrivals of TL and TH are out of phase (or phase-shifted) in the sense that the arrival of
TH always takes place at exactly Δ units from the last arrival of TL, with Δ < τL. Since the information is
conveyed through the abort/commit of TL, TL should hold the lock
on x long enough that it is aborted when the high-priority TH requests a read-lock on x.
Parameter   Description
TL          Low-level (LOW) transaction
TH          High-priority and high-level (HIGH) transaction
Ti          Other transactions (besides TL and TH) that may conflict with TL on x
ti          Request time of Ti
Δ           Phase-out time between the arrivals of TL and TH
τL          Lock holding (or execution) time for TL
θL          Time-out period for the LOW user
PL          Priority of TL
PH          Priority of TH
q           Prob. of aborting a low-priority transaction when a conflicting
            data lock request is made by a high-priority transaction
r           Prob. that Ti submits a conflicting read or write on x during TL's execution
γ           Prob. that "0" is sent by HIGH user

Table 1: Modeling Parameters for Covert Channel Analysis
Table 2: Probabilities (p1 through p6) related to Ti's lock request and release times, and the
relationships among these probabilities.
As discussed above, Ti represents other transactions (besides TL and TH) that may have conflicting
data access requirements with TL on x. For simplicity of analysis, we assume that at most
one such Ti exists to interfere with the covert operations of TH and TL through the data accesses on
object x.
Let us assume that the LOW user has submitted TL at time t. More precisely, the instance of TL
under consideration has arrived and requested a write-lock on x at time t. The final outcome
of TL will depend on the behavior of TH as well as Ti. We present the analysis in terms of two
cases: the HIGH user sends "0" and the HIGH user sends "1".
Case 1: HIGH user sends "0": In this case, since the phase-out time between TL and TH is Δ,
the HIGH user does not submit TH at t + Δ. Accordingly, this instance of TL has no interference
from TH. However, it may be affected by Ti. The following subcases arise:

x is unlocked at t: TL gets the write-lock at t. However, whether it commits or aborts (prior to
t + τL) depends on Ti. We have the following cases.
(a11) Ti does not arrive prior to t + τL. So TL commits and releases x at t + τL.
(a12) Ti arrives prior to t + τL but does not preempt TL. Accordingly, TL commits and releases
x at t + τL.
(a13) Ti arrives prior to t + τL and, as per the concurrency control mechanism, TL is aborted
with probability q; with probability 1 - q, TL continues and commits at t + τL.

x is locked by Ti at t: We have the following cases:
(a21) Ti has a higher priority than TL. TL waits until either the read-lock is released or the LOW
user aborts it intentionally. If Ti releases the lock early enough, TL gets the lock and commits
prior to t + θL. Otherwise, the LOW user aborts TL, considering it as an error bit.
(a22) Ti has a lower priority than TL. With probability q, TL aborts Ti, gets the lock, and commits at t + τL.
However, with probability 1 - q, Ti retains its lock. If Ti releases the lock
early enough, TL gets the lock and commits prior to t + θL. Otherwise, the
LOW user aborts TL, considering it as an error bit.
Case 2: HIGH user sends "1": In this case, the HIGH user submits TH at t + Δ. Hence, TL's
outcome may depend on both TH and Ti. The following cases arise:

x is unlocked at t: Whether TL commits or not depends on both TH and Ti. We have the
following cases.
(a31) Ti does not arrive prior to t + τL. But TH arrives at t + Δ. So TL is aborted by TH
with probability q, and it commits at t + τL with probability 1 - q.
(a32) Ti arrives prior to t + τL but cannot influence TL. TL may be aborted by TH at t + Δ with
probability q, or it commits at t + τL with probability 1 - q.
(a33) Ti arrives prior to t + Δ. Hence, with probability q, TL is
aborted by Ti. With probability 1 - q, TL is still active at t + Δ, at which time TH arrives.
Now, two cases are possible: (i) TH aborts TL with probability q, or (ii) TL continues
and commits at t + τL with probability 1 - q.
(a34) Ti arrives between t + Δ and t + τL. As before, TL may be aborted
by TH at t + Δ with probability q, or TL is still active when Ti arrives with probability
1 - q. In the latter case, once again, TL may be aborted by Ti with probability q, or it
continues and commits at t + τL with probability 1 - q.

x is locked by Ti at t: We have the following cases.
(a41) Ti has a lower priority than TL and holds a lock on x at t. Hence, one of the following subcases arises.
(a411) Ti is aborted by TL at t with probability q. Further, TL is aborted by TH with
probability q at t + Δ, or it commits at t + τL with probability 1 - q.
(a412) With probability 1 - q, TL waits and Ti continues at time t. Further, the following
cases arise.
(a4121) If Ti releases the lock early enough (hence prior to t + Δ), TL gets the lock. TL is then
either aborted by TH with probability q, or it commits prior to t + θL with probability 1 - q.
(a4122) Otherwise, the LOW user aborts TL,
considering it as an error bit.
(a42) Ti has a higher priority than TL and holds a lock on x at t. Hence, one of the following subcases arises.
(a421) If Ti releases the lock early enough (hence prior to t + Δ), TL is either
aborted by TH with probability q or it commits prior to t + θL with probability 1 - q.
(a422) Otherwise, the LOW user aborts TL,
considering it as an error bit.
All the subcases, the outcomes, and the corresponding probabilities are summarized in Table 3.
From these probabilities, we now derive the following factors for the covert channel analysis. Here, x_i
refers to the event when the input X = i; since the input is binary, i is either 0 or 1.
Table 3: Abort/Commit probabilities of TL for each subcase (a11 through a422), grouped according
to whether or not TH is submitted.
Similarly, y_j refers to the event when the output Y = j; since the output as conceived by the LOW user can be 0, 1,
or ERROR (denoted by e), j can take any of these values. The ERROR value essentially represents the
event when the LOW user intentionally aborts TL because its execution is not complete within the
time-out period (θL).
Further, if we assume that the HIGH user sends "0" with probability γ,
then the conditional probabilities P(y_j | x_i) can be derived by summing, over the subcases in Table 3,
the probabilities of the outcomes that produce output y_j for input x_i.
Substituting these terms in Equation 1, we get the expression for the mutual information, I.
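The substitution into Equation 1 can also be carried out numerically. The sketch below is hypothetical: the channel matrix entries are placeholders, not the expressions derived from Table 3.

import math

def channel_mutual_information(gamma, p_y_given_x):
    # gamma = P(HIGH sends "0"); p_y_given_x[i][j] = P(output j | input i),
    # where i ranges over {0, 1} and j over {0, 1, ERROR}.
    p_x = [gamma, 1.0 - gamma]
    p_y = [sum(p_x[i] * p_y_given_x[i][j] for i in range(2)) for j in range(3)]
    return sum(p_x[i] * p_y_given_x[i][j]
               * math.log2(p_y_given_x[i][j] / p_y[j])
               for i in range(2) for j in range(3)
               if p_x[i] > 0 and p_y_given_x[i][j] > 0)

# Placeholder channel matrix: rows = input ("0", "1"), columns = output ("0", "1", e).
example = [[0.90, 0.05, 0.05],
           [0.10, 0.85, 0.05]]
print(channel_mutual_information(0.5, example))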
A plot of I vs. r for different values of γ, with q = 1.0 (i.e., the LOW transaction is always aborted when HIGH
arrives), is shown in Figure 1. In addition, the other parameters are chosen such that it is possible
to exchange maximum mutual information through the covert channel. Accordingly, the remaining
parameters (the p_i's) are chosen so as to maximize I. It may be observed that the mutual information is the
highest when there is least intervention from the other transactions, or r = 1.0. Similarly, the
mutual information reaches the upper bound of 1.0 when the probabilities of sending "0" and "1" are
equal (γ = 0.5).
The impact of the arrival of other transactions on I is further illustrated in Figure 2 where
different values of p i 's are chosen. Once again, to maximize I, it is assumed that an arriving high
priority transaction always aborts a low priority holder.
Figure 1: Mutual Information (I) vs. r when the low-priority transaction is always aborted for the high-priority one (q = 1.0).

Figure 2: Mutual Information (I) vs. r when the low-priority transaction is always aborted for the high-priority one (q = 1.0), for different values of the p_i's.

Figure 3: Mutual Information (I) vs. r when the low-priority transaction is aborted half the time by the high-priority one.
Further, the impact of "HIGH aborting LOW" is illustrated in Figure 3, where q = 0.5, i.e., a low-priority
transaction is aborted by a high-priority transaction only with probability 0.5. Clearly, the mutual
information is smaller in this case.
Assuming that there is no interference from other transactions (i.e., r=1.0), the effect of q on I
is illustrated in Figure 4. Since there is no interference, the p i s are irrelevant. It may be observed
that the mutual information increases with the value of q.
Finally, a plot of I versus both q and r, with chosen values of the p_i's, is displayed in
Figure 5. The results support our intuitive understanding of the effect of the system parameters
on the mutual information transferred through the covert channel.
7 A Secure Real-Time Concurrency Control Mechanism
From the above discussion, it is clear that the mutual information of a covert channel is determined
by the parameters p 1 through p 6 , q, and r. Clearly, q is a parameter that is completely under the
control of our system. The parameter r, however, depends on the characteristic of other transactions
in the system. Obviously, r varies with system load and the relative priority of other transactions
with respect to the LOW transaction. Larger values of r imply higher interference for the LOW
and HIGH users, and hence lower transfer of mutual information through the covert channel. Thus,
to reduce the mutual information, r can be arbitrarily increased by introducing "fake" transactions
which do not change the state of the database, but which access data items randomly. This is
Figure 4: Mutual Information (I) vs. q with no interference from other transactions (r = 1.0).

Figure 5: Mutual Information (I) vs. q and r.
not a desirable option, since these transactions compete for resources and data items that would
otherwise be allocated to normal transactions, thereby degrading the performance.
The parameters p1 through p6 are influenced by the start and finish times of other transactions
that are active during TL's execution. In addition, they are influenced by τL, θL, and Δ of the LOW user's
transaction. In particular, smaller time-out periods (θL) imply a higher value of p4 and
hence a higher probability for the LOW user to receive the error symbol "e". On the other hand,
higher values of the time-out imply a larger value of p3 and a smaller value of p4, resulting in a larger value
of the probability that TL is committed even when TH is submitted.
Therefore, we shall assume that r is a parameter that cannot be controlled; however, the average
value of r can be estimated by the scheduler for each level of transactions, periodically. The
parameters p1 through p6 are even more difficult to estimate since they depend on the parameters
dictated by the LOW and HIGH user, and hence are not known to the scheduler (or any other
trusted component of the system). For this reason, it is best to make a conservative estimate of
these parameters resulting in maximal mutual information for the covert channel. Finally, it is q
that is under the control of the scheduler and can be tuned according to the allowable I. Given r
and the allowable value for I, the system can adjust the value of q.
The two transactions involved in the covert channel can collaborate to reduce the duration of a
tick, thereby reducing r. However, there is a certain lower bound, below which the duration cannot
be reduced. This is because there are three steps involved in the transmission of a symbol ("0" or "1"):
- at the start of a tick, the LOW transaction submits its write operation.
- if the HIGH user wishes to transmit a "1", it submits its read operation.
- the system has to send a "TRANSACTION ABORTED" message to the LOW user.
Or alternately,
- at the start of a tick, the LOW transaction submits its write operation.
- if the HIGH user wishes to transmit a "0", no operation is submitted; otherwise it submits a
read operation.
- the system sends either a "TRANSACTION COMMITTED" or a "TRANSACTION ABORTED"
message depending on the interference and its decision to abort/not-abort a low-priority trans-action
for a high-priority transaction.
For the covert channel to be effective, the duration of a tick cannot be lower than the overhead
involved in performing these three operations in the worst case.
There are two requirements on a secure real-time concurrency control mechanism: a security
requirement, expressed as an upper bound on mutual information, and a real-time requirement,
expressed as an upper bound on miss percentage. Given I and values for r and p 1 through p 6 (recall
that the value of r is estimated and p 1 through p 6 are computed based on conservative assumptions)
q can be calculated from the equation derived for mutual information in the previous section. It is
very difficult to derive a closed form solution for q in terms of r, I, and p 1 through p 6 . But a simple
iterative solution for q can be obtained easily using the Newton-Raphson method. While there is
no direct mathematical relationship between the deadline miss percentage and these parameters,
simulation studies [26] indicate that with increasing arrival rate (and therefore increasing r), the
deadline miss percentage increases slowly but steadily up to a certain point, after which the system
becomes unstable. Similarly, with increasing q (from 0 to 1), the deadline miss percentage first
increases (up to a value of q in the range 0.3 to 0.4) and then decreases continuously until q reaches 1.
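Since a closed-form solution for q is hard to obtain, a simple one-dimensional root finder can be used in place of Newton-Raphson. The sketch below is hypothetical; it uses bisection for robustness and assumes a function mutual_information(q) implementing the expression of Section 6.3 with the current estimates of r and p1 through p6 captured inside it:

def solve_q(i_max, mutual_information, tol=1e-6):
    # Find the largest q in [0, 1] with mutual_information(q) <= i_max,
    # assuming mutual_information is non-decreasing in q.
    if mutual_information(1.0) <= i_max:
        return 1.0
    lo, hi = 0.0, 1.0
    while hi - lo > tol:
        mid = (lo + hi) / 2.0
        if mutual_information(mid) <= i_max:
            lo = mid      # mid still respects the bound; move up
        else:
            hi = mid      # too much information would leak; move down
    return lo

The current r and p_i estimates would typically be bound into mutual_information via a closure or functools.partial before calling solve_q.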
Our approach to a real-time secure concurrency control mechanism uses a feedback control
mechanism to ensure that the mutual information at any given time does not exceed the upper
bound specified. The approach is described by the following pseudo-code:
1. Obtain the desired deadline miss percentage (DDMP) and the mutual information bound (I);
2. Calculate q, given I and the current r (with computed conservative estimates of p1 through p6);
3. Monitor the resulting deadline miss percentage (DMP);
4. If (DMP > DDMP)
      report back to the database administrator (DBA);
      DBA readjusts DDMP and/or I;
      go to step 2;
5. Else If ((DDMP - DMP) > THRESHOLD)
      decrease q to 0; /* reduce mutual information to 0 */
      go to step 3;
6. Else
      go to step 3;
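A runnable rendering of this loop might look as follows. This is a sketch under assumptions: estimate_r, compute_q, monitor_dmp, report_to_dba, and apply_q are hypothetical helpers, and threshold plays the role of THRESHOLD above.

def feedback_controller(ddmp, i_max, threshold, estimate_r, compute_q,
                        monitor_dmp, report_to_dba, apply_q):
    # Keep the leaked mutual information below i_max while watching the
    # deadline miss percentage (DMP) against the desired DDMP.
    q = compute_q(i_max, estimate_r())          # step 2
    while True:
        apply_q(q)                              # hand q to the lock manager
        dmp = monitor_dmp()                     # step 3
        if dmp > ddmp:                          # step 4: real-time goal missed
            ddmp, i_max = report_to_dba(dmp)    # DBA readjusts DDMP and/or I
            q = compute_q(i_max, estimate_r())  # recompute q (back to step 2)
        elif ddmp - dmp > threshold:            # step 5: ample slack in DMP
            q = 0.0                             # close the covert channel
        # otherwise (step 6) keep the current q and continue monitoring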
This approach provides guarantees only on the mutual information allowed by the resulting
channel, not on the deadline miss percentage. If the miss percentage increases above the desired
miss percentage specified, there is nothing that the system can do. The only thing that can be
done is to report to DBA as in step 4. The DBA can then either increase the upper bound on I,
thereby increasing q and in turn decreasing the miss percentage, or can relax the miss percentage
requirement and increase the value of the desired miss percentage.
If the deadline miss percentage requirement is being comfortably met by the system, then a drop
in miss percentage can be afforded. This is what is done in step 5, where the covert channel is
effectively closed by setting q to 0. When the miss percentage again increases and approaches the
desired miss percentage value, normal operation is resumed and the value of q is calculated from I
and the current value of r and p 1 through p 6 .
The amount of mutual information transferred through a covert channel varies inversely with the
degree of randomness in the system. In the scheme that we have discussed, there is not much
randomness, since we strive to maintain the mutual information at a specified value. One can
therefore argue that since the amount of mutual information allowed is maintained more or less
constant, a malicious subject can utilize this channel, albeit at a much lower fidelity, to transmit
information. A certain degree of randomness can be introduced by the following procedure: the
value of q is calculated from the desired value of I and the current value of r. Instead of using
the value of q thus calculated, the value of q is sampled (for example) from a uniform distribution
between [q - δ, q + δ]. The greater the value of δ, the greater the uncertainty in the resulting value
of I. This might mean that sometimes the mutual information might increase beyond the upper
bound specified, but due to the uncertainty it is very difficult for a user to exploit this channel.
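For instance, a small sketch of this perturbation (delta corresponds to the δ above; the result is clamped so that it remains a valid probability):

import random

def randomized_q(q, delta):
    # Perturb the computed q uniformly within [q - delta, q + delta],
    # clamped to the probability range [0, 1].
    return min(1.0, max(0.0, random.uniform(q - delta, q + delta)))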
All the derivations and methods to control I explained in this paper have been for the type of the
covert channel discussed in Section 6.2. Are there other covert channels that malicious users can
exploit and whose allowed mutual information would not be controlled by the feedback monitoring
method explained in the previous section? Let us investigate this issue further. From
the correctness criteria for secure schedulers, covert channels can be broadly classified into three
categories: those that communicate information through a violation of delay security, those that
violate recovery security, and those that violate value security. In [6], it is proved that Secure 2PL
satisfies delay security. Our real-time secure concurrency control mechanism explained in Section
7 is based on the Secure 2PL protocol. The approach differs from Secure 2PL only when there is a
conflict between a lock-holding transaction T1 and a lock-requesting transaction T2 with priority(T2) > priority(T1).
In this case, T1 is aborted and T2 is granted the lock, i.e., no transaction is
blocked. Therefore, delay security is not violated at any point. The covert channel studied in
Section 6.2 is a canonical example of a channel that exploits a violation of recovery security. There
might be other, more complicated, channels that could involve more than two transactions, but
the parameters on which the mutual information that they could exchange would be dependent on
a superset of q and r. A covert channel involving four collaborating transactions - one at HIGH
and the rest at LOW - that exploits a violation in value security can work as follows:
- At the start of a tick, a LOW transaction T1 submits a write on a data item x (w1[x]).
- A second LOW transaction T2 then submits a write on x (w2[x]).
- If the HIGH transaction T3 wants to transmit a "1", it submits a read on x, such that
priority(T3) > priority(T2). As a result, T2 is aborted.
- The "receiving" LOW transaction T4 then submits a read on x. If it reads the value written
by T1, then a "1" is received, and if it reads the value written by T2, a "0" is received.
This covert channel, too, is dependent on two factors: the probability that a transaction T4
would cause the aborting of T2 before T3 arrives, and the probability q that T2 would actually be
aborted when T 3 submits its operation. In addition, there is also the possibility that T 1 could
be aborted before T 4 submits its read, introducing an additional "noise" factor. As a result, the
mutual information allowed by this channel would actually be less than that of the simple channel
studied in Section 6.2.
Summarizing, we find that the simpler the covert channel, the smaller the number of factors that its
mutual information depends on, and therefore the greater its I. The covert channel
studied in Section 6.2 is the simplest possible channel that can be exploited, given the correctness
properties that are violated and therefore bounding its I is enough to bound the mutual information
of more complicated covert channels that could be exploited.
9 Performance Evaluation
In this section, we present the results of our performance study of the feedback control mechanism
for a range of transaction arrival rates. The goal of the analysis is to show the variation in miss
percentage for varying amounts of mutual information transferred through the covert channel.
9.1 Simulation Model
Central to the simulation model is a single-site disk resident database system operating on shared-memory
multiprocessors [15]. The system consists of a disk-based database and a main memory
cache. The unit of database granularity is the page. When a transaction needs to perform an
operation on a data item it accesses a page. If the page is not found in the cache, it is read from
disk. CPU or disk access is through an M/M/k queueing system, consisting of a single queue with
k servers (where k is the number of disks or CPUs). The amounts of CPU and disk I/O times are
specified as model parameters in Table 4. Since we are concerned only with providing security at
the concurrency control level, the issue of providing security at the operating system or resource
scheduling layer is not considered in this paper. That is the reason why we do not consider a
secure CPU/disk scheduling approach. Our assumption is that the lower layers provide the higher
concurrency control layer with a fair resource scheduling policy.
The feedback approach is implemented as a layer over Secure 2PL. In the model, the execution of
a transaction consists of multiple instances of alternating data access requests and data operation
steps, until all the data operations in it complete or it is aborted. When a transaction makes a
data request, i.e., lock request on a data object, the request must go through concurrency control
to obtain a lock on the data object. If the transaction's priority is greater than all of the lock
holders, and its lock request conflicts with that of the holders, then with probability q the holders are
aborted and the transaction is granted the lock; otherwise, the steps taken by the Secure 2PL protocol
are followed. If the transaction's priority is lower, it waits for the lock holders to release the lock
[1]. The probability q depends on the factors I and r. I is available directly, but r is calculated
based on the arrival rate of transactions, the probability of contention, and their deadlines. The
analysis is based on preemptive priority queueing policy with restart. The details of the analysis
can be found in [6].
If the request for a lock is granted, the transaction proceeds to perform the data operation,
which consists of a possible disk access (if the data item is not present in the cache) followed by
CPU computation. However, if only a virtual or dependent lock is granted, the transaction only
does CPU computation, since the operation should only be performed on a local version. If the
request for the lock is denied (the transaction is blocked), the transaction is placed into the data
queue. When the waiting transaction is granted a lock, only then can it perform its data operation.
Also, when a virtual lock for an operation is upgraded to a real lock, the data operation requires
disk access and CPU computation. At any stage, if a deadlock is detected, the transaction to be
aborted to break the deadlock is determined, aborted and restarted. When all the operations in a
transaction are completed, the transaction commits. Even if a transaction misses its deadline, it is
allowed to execute until all its actions are completed.
9.2 Parameters and Performance Metrics
Table
4 gives the names and meanings of the parameters that control system resources. The
parameters, CPUTime and DiskTime capture the CPU and disk processing times per data page.
Our simulation system does not explicitly account for the time needed for data operation scheduling.
We assume that these costs are included in CPUTime on a per data object basis. The use of a
database cache is simulated using probability. When a transaction attempts to read a data page,
the system determines whether the page is in cache or disk using the probability BufProb. If the
page is determined to be in cache, the transaction can continue processing without disk access.
Otherwise disk access is needed.
Table
5 summarizes the key parameters that characterize system workload and transactions.
Parameter Meaning Base Value
DBSize Number of data pages in database 350
NumCPUs Number of processors 2
NumDisks Number of disks 4
CPUTime CPU time for processing an action 15 msec
DiskTime Disk service time for an action 25 msec
BufProb Prob. of a page in memory buffer 0.5
NumSecLevels Num. of security levels supported 6
Table
4: System Resource Parameters
Parameter Meaning Base Value
ArriRate Mean transaction arrival rate -
TransSize Average transaction size 6
RestartDelay Mean overhead in restarting 1 msec
MinSlack Minimum slack factor 2
MaxSlack Maximum slack factor 8
Table
5: Workload Parameters
Transactions arrive in a Poisson stream, i.e., their inter-arrival times are exponentially distributed.
The ArriRate parameter specifies the mean rate of transaction arrivals. The number of data objects
accessed by a transaction is determined by a normal distribution with mean TranSize, and the actual
data objects to be accessed are determined uniformly from the database.
The assignment of deadlines to transactions is controlled by the parameters MinSlack and MaxS-
lack, which set a lower and upper bound, respectively, on a transaction's slack time. We use the
formula DT = AT + SF * ET for deadline assignment to a transaction, where the slack factor SF is
chosen uniformly from the range [MinSlack, MaxSlack].
AT and ET denote the arrival time and execution time, respectively. The execution time of a
transaction used in this formula is not an actual execution time, but a time estimated using the
values of parameters TranSize, CPUTime and DiskTime. The priorities of transactions are decided
by the Earliest Deadline First policy.
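A sketch of this assignment (a hypothetical helper; it assumes the slack factor is drawn uniformly from [MinSlack, MaxSlack], consistent with the bounds above):

import random

def assign_deadline(arrival_time, est_execution_time, min_slack=2.0, max_slack=8.0):
    # Deadline = arrival time + slack factor * estimated execution time.
    slack_factor = random.uniform(min_slack, max_slack)
    return arrival_time + slack_factor * est_execution_time

# Priorities are then assigned by Earliest Deadline First: the smaller the
# deadline, the higher the priority.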
The performance metric used is miss percentage, which is the ratio of the number of transactions
that do not meet their deadline to the total number of transactions committed.
9.3 Experimental Results
An event-based simulation framework was written in 'C'. For each experiment, we ran the simulation
with the same parameters for 6 different random number seeds. Each simulation run was continued
until 200 transactions at each access class were committed. For each run, the statistics gathered
during the first few seconds were discarded in order to let the system stabilize after an initial
transient condition. For each experiment the required performance metric was measured over a
wide range of workload. All the data reported in this paper have 90% confidence intervals, whose
endpoints are within 10% of the point estimate.
In the experiment, the miss percentages for the feedback approach are measured for two different
arrival rates. The resulting graph is shown in Figure 6. Since we are considering a real-time database
system, we restrict attention to the portion of the graph where miss percentages are less than 10%.
The performance after the saturation point is not an issue. We also do not consider the section of
the graph for I less than 0.1. At such low values of I, the value of q is also very low, which means
that the behavior of the system is near identical to the Secure 2PL. Only for higher values of I is a
certain degree of deadline cognizance introduced and that is the portion of the graph that we need
to concentrate on. At low arrival rates, the dependence of miss percentage on I is minimal. This
is because very few transactions miss their deadline even at low values of I and further increase in
I does not appreciably decrease it either. At high arrival rates, however, the miss percentage
is quite sensitive to changes in I. As is to be expected, for lower values of I, the miss percentage is
the highest. This is obviously because of a low value of q, which signifies that very few transactions
are being aborted to give greater priority to transactions with an earlier deadline. As the value of
I increases, the value of q increases and the behavior of the system approaches that of 2PL-HP,
resulting in decreased miss-percentage.
In this paper, we have explored a possible direction for research in scheduling transactions to meet
their timing constraints in a secure database. A possible way in which security could be partially
compromised for improved miss percentage was explained and an expression for the mutual information
(I) of the resultant covert channel derived. A feedback control system was then developed,
which ensured that the mutual information transferred through the covert channel did not exceed a
desired upper bound. Although no guarantees can be provided by the system on the deadline miss
percentage, a facility is provided for renegotiating on the desired deadline miss percentage and the
desired amount of mutual information when the desired miss percentage is exceeded.
The importance of real-time database systems is growing in an increasing number of applications, such as
those used in the military or in national infrastructure such as electric power and
telecommunications. These applications obviously need to support both security and
Figure 6: Miss percentage vs. Mutual Information (I); the curve marked "x" corresponds to an arrival rate of 40 transactions/sec.
real-time requirements. For example, when an accident or a failure is detected and considered
severe, or physical or electronic attack is under way, the system must switch into crisis mode so
that critical transactions can be executed by the deadline and essential data can be maintained.
In such situations, it would be much more desirable to allow minor security violations to satisfy
critical timing constraints.
There are a number of issues for future work. In the derivation of the mutual information of
the covert channel, we have concentrated mainly on the dependence of I on parameter q. The
dependence of I on the presence of other transactions in the system was conveniently abstracted
away into a single parameter r. Although an approximate method for the estimation of r was
used in the performance analysis, a precise calculation of r has not been considered. A formal
queueing model of the system, based on the arrival rate of transactions, a calculation of lock
conflict probabilities, blocking time, etc., is important not only for determining r, but could also
help in establishing a probabilistic relationship between miss percentage and q and r. This could
eliminate the need for raising an ERROR condition when the desired miss percentage is exceeded,
since the correct setting of q can be obtained mathematically from I and the desired miss percentage.
Secondly, in [18] the use of I as a measure of security is questioned. Examples of zero mutual
information channels are provided, where short messages can be sent through without any errors
(or loss in fidelity). A small message criterion (SMC) is introduced, which is an indication of what
will be tolerated by the system in terms of covertly leaking a short covert message of length n
("0"s and "1"s) in time t and with fidelity of transmission r%. Further work is needed to design a
formal criterion that captures all these factors and has the same mathematical elegance as mutual
information.
Acknowledgements
This work was supported in part by NASA LaRC, ONR, and VCIT.
--R
"Scheduling Real-Time Transactions: A Performance Evaluation,"
"Maintaining Security in Firm Real-Time Database Systems,"
"Secure Computer Systems: Unified Exposition and Multics Interpretation,"
Concurrency Control and Recovery in Database Systems
"Toward a Multilevel-Secure, Best-Effort, Real-Time Scheduler,"
"A Secure Two Phase Locking Protocol,"
"Secure Transaction Processing in Firm Real-Time Database Sys- tems,"
"Security Policy and Security Models,"
On Introducing Noise into the Bus-Contention Channel
The Secure Alpha Study - Final Summary Report
"Reducing Timing Channels with Fuzzy Time,"
"Alternative Correctness Criteria for Concurrent Execution of Transactions in Multilevel Secure Databases,"
"Multilevel Secure Database Concurrency Control,"
"A Note on the Confinement Problem,"
"Concurrency Control Algorithms for Real-Time Database Systems,"
"Finite-State Noiseless Covert Channels,"
"The Channel Capacity of a Certain Noisy Timing Channel,"
"Covert Channels - Here to Stay?,"
"An Analysis of Timed Z-Channel,"
"A Secure Concurrency Control Protocol for Real-Time Databases,"
"Priority-driven Secure Multiversion Locking Protocol for Real-Time Secure Database Systems,"
"Priority Inheritance Protocol: An Approach to Real-time Synchronization,"
The Mathematical Theory of Communication
"Hybrid Protocols Using Dynamic Adjustment of Serialization Order for Real-Time Concurrency Control,"
"Towards a Multilevel Secure Database Management System for Real-Time Applications,"
"Design and Analysis of a Secure Two-Phase Locking Protocol,"
"Design and Analysis of an Adaptive Policy for Secure Real-Time Locking Protocol"
"Partial Security Policies to Support Timeliness in Secure Real-Time Databases,"
"A Security Model for Dynamic Adaptive Traffic Masking,"
"An Analysis of Covert Timing Channels,"
"User-Centered Security,"
--TR
--CTR
Sang H. Son , Ravi Mukkamala , Rasikan David, Correction to 'Integrating Security and Real-Time Requirements Using Covert Channel Capacity', IEEE Transactions on Knowledge and Data Engineering, v.13 n.5, p.862, September 2001
Quazi N. Ahmed , Susan V. Vrbsky, Maintaining security and timeliness in real-time database system, Journal of Systems and Software, v.61 n.1, p.15-29, March 2002
Kyoung-Don Kang , Sang H. Son, Towards security and QoS optimization in real-time embedded systems, ACM SIGBED Review, v.3 n.1, p.29-34, January 2006
Kyoung-Don Kang , Sang H. Son , John A. Stankovic, Managing Deadline Miss Ratio and Sensor Data Freshness in Real-Time Databases, IEEE Transactions on Knowledge and Data Engineering, v.16 n.10, p.1200-1216, October 2004
Kyoung-Don Kang , Sang H. Son , John A. Stankovic, Differentiated Real-Time Data Services for E-Commerce Applications, Electronic Commerce Research, v.3 n.1-2, p.113-142, January-April
Tao Xie , Xiao Qin, Improving security for periodic tasks in embedded systems through scheduling, ACM Transactions on Embedded Computing Systems (TECS), v.6 n.3, p.20-es, July 2007
Chanjung Park , Seog Park , Sang H. Son, Multiversion Locking Protocol with Freezing for Secure Real-Time Database Systems, IEEE Transactions on Knowledge and Data Engineering, v.14 n.5, p.1141-1154, September 2002
Krithi Ramamritham , Sang H. Son , Lisa Cingiser Dipippo, Real-Time Databases and Data Services, Real-Time Systems, v.28 n.2-3, p.179-215, November-December 2004 | concurrency control;database systems;real-time systems;covert channel analysis;multilevel security;locking protocols |
628100 | Exploiting Spatial Indexes for Semijoin-Based Join Processing in Distributed Spatial Databases. | AbstractIn a distributed spatial database system, a user may issue a query that relates two spatial relations not stored at the same site. Because of the sheer volume and complexity of spatial data, spatial joins between two spatial relations at different sites are expensive in terms of computation and transmission cost. In this paper, we address the problems of processing spatial joins in a distributed environment. We propose a semijoin-like operator, called the spatial semijoin, to prune away objects that will not contribute to the join result. This operator also reduces both the transmission and local processing costs for a later join operation. However, the cost of the elimination process must be taken into account, and we consider approaches to minimize these overheads. We also studied and compared two families of distributed join algorithms that are based on the spatial semijoin operator. The first is based on multidimensional approximations obtained from an index such as the R-tree, and the second is based on single-dimensional approximations obtained from object mapping. We conducted experiments on real data sets and report the results in this paper. | Introduction
Queries in spatial databases frequently involve relationships between two spatial entities. These
relationships include containment, intersection, adjacency and proximity. For example, the
query "which schools are adjacent to areas zoned for industrial purposes?" requires an adjacency
relationship, while the query "which police stations are within 1 kilometer from a major road?"
involves a proximity relationship. To answer these queries, spatial joins are used to materialize
the relationships between the two spatial entities. Like the relational join operation, spatial
joins are expensive, and have received much research attention recently [5, 9, 8, 11, 12, 14, 15].
While spatial database research to date has largely focused on single-site databases, some
prospective applications now call for distributed spatial databases. A representative example
is in government agencies where the sharing of core data sets across agencies has been shown
to provide high savings [33]. Legislative and organizational requirements make it very difficult
to establish a single unified database; rather data sharing must be approached as a problem in
distributed access to many autonomous databases.
To realize the full potential of a distributed spatial database system, many issues have to
be addressed. Most of these issues are similar to those that arise in designing heterogeneous
database systems [30]. These include the integration of existing schemas and data, the processing
and optimization of distributed queries, and transaction processing issues. However, the
issue of processing distributed spatial query has been largely ignored.
In a distributed spatial database system, a user may issue a query that joins two spatial
relations stored at different sites. The sheer volume and complexity of spatial data lead to
expensive spatial join processing across sites in terms of computation and transmission cost. In
this paper, we focus on the design of distributed spatial join algorithms. Unlike conventional
distributed databases where the transmission cost is dominant [26], both the transmission cost
and the processing cost may be comparable for distributed spatial databases. Thus, it is no
longer appropriate to design algorithms that minimize transmission cost alone. Instead, we
design and study distributed spatial join algorithms that are based on the concept of spatial
semijoins. A spatial semijoin eliminates objects before transmission to reduce both transmission
and local processing costs. This elimination is performed as a spatial selection on one
database using approximations to the spatial descriptions of the other. Inherently, the elimination
process itself introduces additional costs, and hence the choice of approximation is critical
to the performance of the distributed algorithms. By varying the approximations to the spatial
descriptions, we can study the tradeoffs between transmission cost and processing cost.
In our earlier paper [4], we proposed a distributed semijoin-based strategy that employs
single-dimensional approximations obtained from object mapping as a possible join optimization
strategy in our heterogeneous spatial database system [3]. The technique allows us to
exploit a sort-merge algorithm. In this paper, we build on and extend this initial work. Besides
reporting more experimental studies on the algorithms, we also propose a new family of
join strategies that employ the spatial semijoin operator. The new algorithms are based on
multi-dimensional approximations obtained from an index. We study R-tree index and use the
bounding boxes stored in an existing R-tree as approximations. We conducted extensive experiments
on our semijoin-based algorithms on real data sets. The results show that the methods
are effective in reducing total processing cost in distributed join processing. We also report on
a comparative study on the two families of algorithms.
The remainder of this paper is organized as follows. In the next section, we shall look at the
issues involved in distributed spatial join processing. We also review uniprocessor spatial join
algorithms in this section. Section 3 introduces the concept of spatial semijoin and presents a
framework for designing distributed spatial join algorithms. In Section 4, we describe the two
families of join strategies studied. Section 5 presents and analyzes the results from experiments
with a representative database, and finally, we summarize our conclusions in Section 6.
2. Spatial Join Processing
In this section, we review uniprocessor join algorithms. In particular, we pay more attention to
join algorithms that employ R-trees [13] and locational key techniques [10] as our distributed
spatial join strategies are based on them.
2.1. Uniprocessor Join Processing: A Brief Review
Several spatial join processing algorithms have been studied. Experimental and analytical
results have shown that algorithms that employ indexing or ordering techniques are much more
efficient than simple nested-loops joins [9, 11, 12, 17]. Before looking at some of these schemes,
it is worth pointing out that several spatial join algorithms have recently been proposed for
non-indexed relations: Partition-based Merge Join [27], Spatial Hash Join [18], Size Separation
Spatial Join [15], and Scalable Sweeping-based Spatial Join [5].
2.1.1. Joins Based on Multi-Dimensional Indexing Techniques
Spatial joins can be performed efficiently by exploiting existing indexes. Rotem [28] applied the
concept of join indexes [34] on two frequently joined spatial relations indexed by grid files [22].
Lu and Han also proposed a similar mechanism using distance metrics to facilitate fast access
to spatial window queries [20]. Approximate join indexes [19] speed up join processing by
precomputing pairs of index pages that contain matching objects. In [32], a join algorithm for
relations indexed with filter trees is proposed.
Several spatial join algorithms based on R-tree-like indexes have also been studied [12, 9, 17].
The R-tree [13] and its variants (R+-tree [31], R*-tree [7]) have been widely used to speed up
access for multi-dimensional spatial objects. Like the B-tree, the R-tree is height-balanced.
Each internal R-tree node contains entries of the form (mbr, childptr) where mbr represents
the minimum bounding rectangle (MBR) enclosing all the objects described by the child node
pointed to by childptr. A leaf node contains entries of the form (mbr, oid) where mbr is the
MBR of the object that oid points to. Figure 1 illustrates an index structure of the R-tree.
Spatial join algorithms based on R-trees require that both relations be indexed. If a relation
is not indexed, then one will be created temporarily for join processing [17]. The basic idea is to
traverse both spatial indexes simultaneously, and entries of internal nodes are checked to see if
they overlap. If the condition is true, both the subtrees are recursively traversed. Results are
produced when the leaf nodes are reached. In [12], a breadth-first-search algorithm is adopted
in traversing the tree, while a depth-first-search algorithm is employed in [9, 17].
Figure 1: An R-tree index.
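A simplified sketch of this synchronized traversal is given below. It is illustrative only: node and field names are invented, and, for brevity, both trees are assumed to have the same height so that leaves are reached together.

def mbr_intersects(a, b):
    # a and b are MBRs given as (xmin, ymin, xmax, ymax).
    return a[0] <= b[2] and b[0] <= a[2] and a[1] <= b[3] and b[1] <= a[3]

def rtree_join(node1, node2, result):
    # Synchronized depth-first traversal of two R-trees. Each node has a
    # boolean `is_leaf` and a list `entries` of (mbr, child_or_oid) pairs.
    # Appends candidate (oid1, oid2) pairs whose MBRs intersect; the exact
    # geometries still have to be checked in a refinement step.
    for mbr1, e1 in node1.entries:
        for mbr2, e2 in node2.entries:
            if mbr_intersects(mbr1, mbr2):
                if node1.is_leaf:          # both leaves (equal height assumed)
                    result.append((e1, e2))
                else:
                    rtree_join(e1, e2, result)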
2.1.2. Joins Based on Linearized Single-Dimensional Object Mapping
Linearizing spatial objects has the advantage that sort-merge join methods can be exploited.
Sort-merge join methods require, in the best case, that both data sets be scanned at most once.
Various techniques on ordering multi-dimensional objects using single-dimensional values have
been proposed [10, 23] and one of the most popular ordering techniques is that based on bit
interleaving proposed by Morton [21].
In [23], a space is recursively divided into four equal-sized quadrants as in the quad-tree
[29], forming a hierarchy of quadrants. These quadrants are then linearized based on their
z-ordering (see Figure 2), and objects can be accessed quickly using one-dimensional indexes
(such as the B + -tree) on their z-values. The join between two relations is performed using a
merge-like operation on the z-values of the objects [24, 25].
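As an aside, the bit-interleaving that produces such z-values can be written compactly. The sketch below is illustrative and assumes cell coordinates have already been quantized to non-negative integers on a grid:

def z_value(x: int, y: int, bits: int = 16) -> int:
    # Interleave the bits of x and y (Morton order); sorting cells by the
    # result orders them along the z-curve.
    z = 0
    for i in range(bits):
        z |= ((x >> i) & 1) << (2 * i)       # even bit positions from x
        z |= ((y >> i) & 1) << (2 * i + 1)   # odd bit positions from y
    return z

# Example: enumerate the cells of a 4x4 grid in z-order.
cells = sorted(((x, y) for x in range(4) for y in range(4)),
               key=lambda c: z_value(*c, bits=2))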
A similar approach is adopted in [10]. However, for each quadrant, a unique key of base 5
is attached. Figure 2 illustrates an example of key assignment. As can be seen from the figure,
when these keys are traced, they follow the z-order enumeration of grid cells as in [23]. An object that
is fully contained by a subspace is assigned its key. To improve the approximation of objects
by quadrants, an object is assigned up to k locational keys (in [10], k = 4). For example, using
k = 4, the rectangular object in Figure 2 will be assigned the keys: 1100, 1233, 1300, 1410.
The spatial objects are held sorted in ascending sequence by the locational key value, with
each member of the sorted list consisting of the object identifier, its MBR and the assigned
locational key. A B + -tree is used to provide direct access to the spatial objects based on the
locational keys [1].
Consider two spatial relations R and S that are to be joined. Figure 3(a) shows the objects
Figure 2: Assignment of locational keys.
in R and S, and Figure 3(b) shows the locational key values of the objects. In this example,
we have allowed each object to have a maximum of 4 keys. Joining the two relations can be
performed by a merge-like operation along the two sorted lists of locational keys. We note
the following. First, the join is a non-equi join. For example, the locational key value 1240 of
an object in S matches two locational key values of R1 (namely 1241 and 1242). Second, the results may
contain duplicates. This is because each object has multiple keys, and different pairs of keys of
the same objects may intersect. For example, there are two resulting pairs between R1 and S4.
Finally, the results may contain false drops, i.e., results obtained from matching locational keys
but the actual objects do not match. Looking at Figure 3(a), we notice that there are only two
intersections, but we obtain four. The pairs (R1, S4) and (R2, S2) are false drops. This arises
because the locational keys are but approximations of the actual objects.
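The merge step therefore needs a non-equi comparison between keys: two quadrants intersect exactly when one key, with its trailing 0 digits (which stand for "the whole quadrant") stripped, is a prefix of the other. A small sketch of such a test follows (a hypothetical helper, not code from [10]):

def keys_overlap(k1: str, k2: str) -> bool:
    # True if the quadrants denoted by two base-5 locational keys intersect,
    # i.e., one key (with trailing zeros stripped) is a prefix of the other.
    p1, p2 = k1.rstrip('0'), k2.rstrip('0')
    return p1.startswith(p2) or p2.startswith(p1)

# Example in the spirit of Figure 3: 1240 covers the quadrant containing
# 1241 and 1242, but is disjoint from 1300.
assert keys_overlap('1240', '1241') and keys_overlap('1240', '1242')
assert not keys_overlap('1240', '1300')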
2.2. Issues in Distributed Spatial Join Processing
While much research has been done in distributed query processing (see [26, 37]), the unique
characteristics of spatial data (large volume of large data items and complex operations) pose
interesting challenges.
It has traditionally been assumed that transmission of data dominates distributed query
processing performance, and as a result, many algorithms have been designed to minimize
transmission cost [26]. This assumption was valid in the old days, when network speeds
were slow and wide area networks were assumed. While the transmission cost of exporting a
spatial data set from one site to another can be high, this assumption may no longer be valid
in distributed spatial databases. This is because of the high local processing cost for spatial
operations. The local processing cost cannot be ignored since
Figure 3: Local join processing using locational keys: (a) sample objects from the two relations to be joined; (b) sort-merge join processing.

- many spatial databases of real-world interest are very large, with sizes ranging from tens
of thousands to millions of objects;
- spatial descriptions of objects are typically extensive, ranging from a few hundred bytes
in Land Information System applications to megabytes in natural resource applications
[2]; and
- many basic spatial operations such as testing the intersection of two polygons are expensive.
As a broad indication of the relative costs of operations, transmission of a land parcel spatial
description (with an average size of 48 bytes) over an Ethernet Local Area Network requires
milliseconds. On the other hand, retrieving a polygon from a database of 50000 polygons
using an intersection qualification costs in the region of 2 milliseconds on a SUN SPARC-10
machine. The challenge then is to develop new distributed spatial join strategies that take into
account the transmission cost as well as the local processing cost.
In this paper, unless otherwise stated, R and S represent two spatial relations. R resides at
site R site and S at site S site. A spatial join between R and S is on the spatial attributes R.A
and S.B. The result of the spatial join is to be produced at site S site.
3. A Framework for Semijoin-Based Spatial Join
A straightforward approach to perform a spatial join between R and S is to transmit the whole
of R from R site to S site . The spatial join can then be performed at S site using an existing
uniprocessor join algorithm. This method, though simple, incurs high transmission cost and
high local processing cost.
In this section, we present an alternative approach that is based on the concept of semijoin.
We will first look at the semijoin operator for spatial databases called the spatial semijoin, and
then present a framework for designing semijoin-based algorithms.
3.1. Spatial Semijoin
In conventional distributed databases, the semijoin operator [6] has been proposed to reduce
transmission costs. The '-semijoin of a relation R with another relation S on the join condition
R.A ' S.B is the relation R' ⊆ R such that all the records in R' satisfy the join condition,
where ' is a scalar comparison operator. 1 The join of R and S, which are
located at sites R site and S site respectively, can be performed using a semijoin in three steps.
First, S is projected on the joining attribute at S site, and the resultant set of distinct values,
say S', is transmitted to R site. Next, the semijoin of R and S' is performed at R site to give
R', which is sent to S site. Finally, the join of R' and S is performed at S site to produce the join
result. Since R' has fewer records than R, transmitting R' is cheaper. Moreover, there is some
saving in local processing costs for evaluation of the final join at site S site . Clearly, there are
additional local processing and transmission operations to be performed and a design objective
is to ensure that there is a net saving in cost.
The semijoin concept can be readily adapted to perform joins in distributed spatial databases.
However, the conventional semijoin has to be extended for additional considerations that are
peculiar to spatial databases. These considerations are:
- The conventional semijoin uses the distinct values of the joining attribute in S to minimize
the transmission cost from S site to R site and the cost to evaluate the semijoin. However,
as most spatial descriptions are intrinsically complex data types and represent irregular,
non-overlapping partitions of space, there is no direct equivalent to a projection on a
single-value attribute where the joining attribute is a spatial description. Consider, for
example, a relation representing soil types in a region. Each record then describes a
polygonal area occupied by a certain soil type. These polygons are represented as arrays
of the coordinates of their vertices. In natural resource databases, spatial descriptions are
typically hundreds to thousands of bytes long [9]. For cadastral databases, descriptions
are much smaller, but still in the region of 100 bytes [2]. Therefore, transmitting the
spatial descriptions remains costly.
- Evaluation of spatial relationships such as containment, intersection and adjacency between
two polygons is complex and expensive compared to testing equality of two single-value
attributes. For example, our study shows that testing intersection of two polygons
each with an average of six vertices costs 250 microseconds on a SUN SPARC-10
workstation, while the equality test of two single-value attributes is of the order of 0.1
microseconds.
1 In [6], θ is set to "=".
To cut down on the transmission cost and local processing cost to evaluate the semijoin
on the spatial attributes of R and S, we propose that approximations to the objects in R
and S and a computationally less expensive, but weaker, spatial relationship be used. The
approximations and the weaker spatial relationship must satisfy the property of conservation,
i.e., for two objects that are spatially related, their approximations must also be related by the
weaker relationship. For example, as shown in Figure 4, an object e2 is possibly contained in
object e1 only if its approximation E2 is related to the approximation E1 by the weaker
relationship (here, overlap of the two approximations).
Figure 4: E1 overlaps E2, and e1 includes e2.
To some extent, the idea of spatial semijoins using weaker relationships is similar to joins
on approximations in [12]. Table 1 lists some examples of spatial relationships between objects
(denoted θ) and the corresponding weaker relationships between their approximations (denoted θ′).

Relationship θ (between objects e1, e2)      Weaker relationship θ′ (between E1, E2)
within distance d from e2                    within distance d from E2
(measured between center points)             (measured between closest points)
contained in e2                              overlaps E2
Table 1: Some examples of operations and their approximations.
Using approximations and a weaker relationship is motivated by two observations. First,
the approximation is shorter than the full descriptions of the spatial object and hence will incur
lower transmission costs. Second, geometrically-simple approximations such as the minimum
bounding rectangles allow simple evaluation of spatial relationships and so reduce the computation
cost when evaluating the semijoin. For example, referring to Figure 4 again, transmitting
rectangles with 2 vertices is definitely cheaper than transmitting the irregularly shaped ob-
jects. Moreover, checking for rectangle intersection is also cheaper than checking for polygon
containment.
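As a concrete illustration of this filter step (our own Python sketch, not code from the paper; all names are illustrative), the MBR of a polygon and the cheap rectangle-intersection test can be computed as follows:

# Illustrative sketch: MBR approximation and the weaker "intersects" test
# used as a conservative filter before the expensive polygon predicate.

def mbr(polygon):
    """Minimum bounding rectangle of a polygon given as [(x, y), ...]."""
    xs = [x for x, _ in polygon]
    ys = [y for _, y in polygon]
    return (min(xs), min(ys), max(xs), max(ys))

def mbr_intersects(a, b):
    """True if rectangles a and b overlap; each is (xmin, ymin, xmax, ymax)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

# Conservation: if polygon e2 is contained in (or intersects) e1, then
# mbr(e2) intersects mbr(e1), so the filter never rejects a true answer.
e1 = [(0, 0), (10, 0), (10, 10), (0, 10)]
e2 = [(2, 2), (4, 2), (3, 5)]
if mbr_intersects(mbr(e1), mbr(e2)):
    pass  # candidate pair: run the exact (expensive) polygon test here

By the conservation property, pairs that satisfy the exact predicate always pass this cheaper test, so the filter produces false drops but never false dismissals.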
More formally, the spatial semijoin is defined as follows.
Definition 1. Let R and S be two spatial relations, and A and B be attributes of R and S
on spatial domains, respectively. The θ spatial semijoin between R and S on attributes A and
B, denoted R ⋉_θ S, is defined by R ⋉_θ S = { t ∈ R | ∃ s ∈ S such that f_R(t.A) θ′ f_S(s.B) }. Here
θ and θ′ (= g(θ)) are spatial operators, f_R and f_S are approximation functions, and g is a
mapping function on relationships such that the following conservation property holds: for two
records t ∈ R and s ∈ S, t.A θ s.B implies f_R(t.A) θ′ f_S(s.B).
We have the following remarks on the definition of spatial semijoin.
1.) Approximation functions are used to map the complex spatial descriptions of objects to
a simpler form. The functions f R and f S may be different. For example, f R may map
each record of R to its minimum bounding rectangle, while f S may map each record of S
to its rotated minimum bounding rectangle.
2.) The semantics of θ′ is dependent on θ. In fact, θ′ is a weaker relationship than θ. While θ
and θ′ generally refer to the same relationship, they may be different. Table 1 illustrates
some of these.
3.) As a result of point (2), i.e., using a weaker relationship, the result of the spatial semijoin
contains all the records of R that will participate in the final join operation. In contrast
to the conventional semijoin, where the result of the semijoin is exactly the set of records
of R that will participate in the final join operation, the spatial semijoin using a weaker
relationship does not eliminate all records that do not contribute to the final
answer. In other words, the spatial semijoin result contains both hits and false drops.
Hits are records of R that will satisfy the join operation θ, while false drops are those
that satisfy the weaker operation θ′ but not θ.
3.2. The Framework
A distributed spatial join can be pre-processed using a spatial semijoin. The basic framework
for evaluating a spatial join of R and S at site S_site using a semijoin can be expressed as follows.
Framework 1. Distributed Spatial Join Processing Using Semijoin
Input: Two relations R and S with spatial attributes A and B, respectively.
Output: The spatial join of R and S on R.A θ S.B, produced at site S_site.
1. At S_site, project S on its spatial attribute B to obtain B′;
2. At S_site, compute the set of approximations B_f = { f_S(b) | b ∈ B′ };
3. Send distinct values of B_f from S_site to R_site;
4. At R_site, reduce R to R′ = { t ∈ R | ∃ b ∈ B_f such that f_R(t.A) θ′ b };
5. Send R′ to S_site;
6. At S_site, perform the final join of R′ and S using θ.
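A minimal single-process sketch of Framework 1 in Python is given below, assuming MBR approximations and intersection as the weaker relationship; transmission is not modelled, and the function and field names are illustrative assumptions rather than the authors' implementation:

# Illustrative sketch of Framework 1 (semijoin-based distributed spatial join).
# R and S are lists of records; rec["geom"] is a polygon, rec["id"] an identifier.

def semijoin_spatial_join(R, S, approx, weaker_rel, exact_rel):
    # Steps 1-2 (at S_site): compute approximations B_f of S's spatial attribute.
    B_f = [approx(s["geom"]) for s in S]
    # Step 3: transmit B_f to R_site (modelled here as simply passing the list).
    # Step 4 (at R_site): reduce R using the weaker relationship on approximations.
    R_reduced = [r for r in R
                 if any(weaker_rel(approx(r["geom"]), b) for b in B_f)]
    # Step 5: transmit R_reduced (hits plus false drops) to S_site.
    # Step 6 (at S_site): final join with the exact spatial predicate.
    return [(r["id"], s["id"]) for r in R_reduced for s in S
            if exact_rel(r["geom"], s["geom"])]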
The performance of the framework hinges on the approximations to the spatial descriptions
of S, i.e., step 2. We restrict our discussion here to three possible approximation functions:
ffl f_S is the identity function I, i.e., f_S(t) = t for every record t. In other words, the spatial
descriptions of the records of S are sent to the R site .
ffl f S is a 1-1 mapping such that each record of B 0 is mapped to a record in B f . However,
instead of the complex spatial description, a simpler approximation that bounds the
spatial object is used. For example, the minimum bounding rectangles (MBR) of the
records may be used and transmitted.
ffl f S is a m-1 mapping such that several records of B 0 are mapped to the same B f record.
For example, a smaller set of MBRs can be used such that each MBR bounds several
spatial objects.
The first two approaches lead to B f having as many records as B. The last approach,
on the other hand, has the potential for varying the size of B f to optimize the total cost of
the operation. The first approach results in no false drops in R′ if θ′ = θ, while the last approach
has the highest number of false drops in R 0 . However, the first approach incurs the highest
cost in transmitting B f to site R site , while the last approach requires minimum transmission
cost. Since the reduction of R achieved using the third approach might not be
significant, transmitting R 0 could still be very expensive. Such an effect will defeat the purpose
of a spatial semijoin.
The second and third approaches follow closely the usual practice in spatial database of
structuring operations as a filter operation using simplified descriptions (or approximations)
followed by full evaluation using the full descriptions of objects [23, 9, 8, 11, 36, 35]. Typi-
cally, a polygonal object is represented for the filter operation by its MBR, because MBRs are
inexpensive to store (16 bytes for single-precision) and spatial relationships between them are
inexpensive to evaluate. The filter test is formulated not to reject cases that satisfy the full
evaluation. However, it typically will not reject all cases that do not satisfy the full test. Design
then seeks a good compromise between the false drops, the costs of the filter operation and the
costs of storing and (if necessary) deriving the approximations. In place of MBRs, more accurate
single object filters such as convex hull, n-corner, g-degree rotated x-range and g-degree
rotated y-range [8, 36, 35] can be used to reduce the number of false drops. However, such
filters require both additional storage and computation overhead (as more points are needed to
represent the bounding box).
From the definition of the spatial semijoin and the description of the basic distributed join
framework, we note that they are independent of the approximations used and the spatial
relationships on the joining attributes of R and S. Thus, adopting different approximations
and spatial relationships will lead to different families and classes of semijoin-based algorithms.
Without loss of generality, we shall adopt the MBR as the basic approximation and intersection
as the spatial relationship.
4. Distributed Spatial Join Algorithms
In this section, we present several distributed intersection-join algorithms that use spatial semi-
joins. We assume both R and S are indexed. This is not unreasonable since large spatial
relations in practice usually have existing spatial indexes. The algorithms exploit existing
spatial indexes in two ways:
ffl to provide the approximations for spatial objects;
ffl to perform the semijoin and final join using the indexes at the respective site.
4.1. Semijoins Using Multi-Dimensional Approximations
For this category, we adopt R-tree as the indexing technique. When an R-tree index on S
exists, it can be exploited for distributed spatial join processing. Since the MBRs held at each
level of the R-tree form a candidate set of containing rectangles of the objects in S, the MBRs
can be considered as approximations for objects in S. In other words, the MBRs at any level of
the R-tree correspond to the result of an implicit m-1 mapping function on the objects of S. In
fact, this implicit m-1 mapping function is a reasonably good one since data objects bounded
by an MBR of an R-tree are clustered. Choosing the level of nodes in the R-tree to supply the
MBRs is equivalent to choosing the number of MBRs approximating S. While using a higher
level in the R-tree provides a smaller number of MBRs, employing a lower level provides better
approximations.
The semijoin at R site , and the final join at S site are both local (single-site) spatial join
operations. Since both the semijoin and final join are performed in a similar manner, we shall
only describe how the semijoin is performed. The main concern here is that we do not have an
index on the approximations of S, S 0 , at R site . Two possible techniques can be adopted:
ffl Create an R-tree index for S 0 at R site . 2 This allows us to exploit existing join techniques
proposed in [9] which require pre-computed R-tree indexes to exist on both data sets.
The join is then performed as described in Section 2.1.1.
ffl Employ a nested-index technique by performing a series of selections on R. This technique
essentially treats each MBR of S 0 as a query window on R.
2 This is because of our assumption that R is indexed using the same family of indexes, i.e., an R-tree in this case.
Note that this is not a restriction on our proposed semijoin-based join techniques. If R is indexed using another
indexing scheme such as the R*-tree [7], we can also create an R*-tree for S′ and perform the join accordingly.
Our preliminary experimental study indicated that creating an R-tree is a very expensive oper-
ation. Using the data sets that we have, the cost of creating R-trees for the semijoin and final
join is more than half the total processing cost using the second approach. As such, as well as
for simplicity, we employ the nested-index technique in this paper.
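The following Python sketch (ours; the node layout and function names are assumptions, not the paper's code) illustrates the nested-index technique: MBRs supplied by S's R-tree, for example those one level above its leaves, are posed as window queries against R's R-tree:

# Illustrative sketch of the nested-index semijoin.
class Node:
    def __init__(self, mbr, children=None, entries=None):
        self.mbr = mbr              # (xmin, ymin, xmax, ymax)
        self.children = children    # list of Node for internal nodes, else None
        self.entries = entries      # list of (record_id, mbr) for leaves, else None

def intersects(a, b):
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def window_query(node, window, out):
    if not intersects(node.mbr, window):
        return
    if node.children is None:                       # leaf node
        out.update(rid for rid, m in node.entries if intersects(m, window))
    else:
        for child in node.children:
            window_query(child, window, out)

def mbrs_one_level_above_leaves(node):
    """MBRs of the nodes whose children are leaves (one level above the leaf level)."""
    if node.children and node.children[0].children is None:
        return [node.mbr]
    return [m for c in (node.children or []) for m in mbrs_one_level_above_leaves(c)]

def nested_index_semijoin(r_root, s_approx_mbrs):
    hits = set()
    for w in s_approx_mbrs:        # each MBR of S' is a query window on R's tree
        window_query(r_root, w, hits)
    return hits                    # ids of R records to transmit for the final join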
4.2. Semijoins Using Linearized Single-Dimensional Object Mapping
For this category, we employ the quad-tree based key assignment to order spatial objects. For
each object, the locational keys of its MBR are used as approximations.
The distributed spatial join can be performed by transmitting the locational keys. Since
the locational keys are sorted, the semijoin at R site and the final join at S site can be performed
using a merge-like algorithm, as described in Section 2.1.2. As each object has multiple keys,
the join result may contain duplicates. Hashing is used to remove the duplicates.
We note that the semijoin is less complex than the final join. Since the join is a non-equijoin,
the scanning of the sorted lists for the final join requires "backing up". On the other hand, the
semijoin requires a full scan of R, and at most a full scan of S. This is because it suffices to
check that a record of R matches at least one record of S 0 , rather than multiple records of S 0 .
For the final join, to cut down the cost of I/O when records are backed-up, we cached those
records of R 0 (and S) that join with multiple records of S (and R 0 ).
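To make this concrete, the sketch below (ours, with illustrative names) models a locational key as a quadtree path string over the digits 0-3, so that two cells intersect exactly when one key is a prefix of the other; for simplicity it uses binary search over the sorted keys of S′ rather than a strict two-pointer merge:

# Illustrative sketch: semijoin of R against the sorted locational keys of S'.
from bisect import bisect_left

def matches(key, s_keys_sorted, s_key_set):
    # Ancestor check: some key of S' is a prefix of `key`.
    for i in range(1, len(key) + 1):
        if key[:i] in s_key_set:
            return True
    # Descendant check: some key of S' has `key` as a prefix.
    pos = bisect_left(s_keys_sorted, key)
    return pos < len(s_keys_sorted) and s_keys_sorted[pos].startswith(key)

def locational_key_semijoin(r_keys_by_record, s_keys):
    """r_keys_by_record: {record_id: [keys]}; returns ids of R records that survive."""
    s_sorted = sorted(set(s_keys))
    s_set = set(s_sorted)
    survivors = set()
    for rid, keys in r_keys_by_record.items():
        if any(matches(k, s_sorted, s_set) for k in keys):
            survivors.add(rid)     # one matching key suffices for the semijoin
    return survivors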
5. Performance Study
A performance study on the distributed join algorithms was performed to answer the following
questions:
ffl How effective are the semijoin-based algorithms compared to the naive methods of evaluating
a join on distributed spatial databases?
ffl What approximations will lead to the best performance?
ffl What is the relative performance between the algorithms that are based on approximations
on cluster of objects and those that are based on single-dimensional locational keys?
In this section, we report the experiments conducted and their results.
5.1. The Experimental Setup
A data set of the land parcels for the whole State of South Australia was used for the experi-
ments. This database has 762000 records with polygonal spatial descriptions of an average of 6
vertices. Three pairs of test sets were extracted: sets 10R and 10S with 10000 parcels each, 50R
and 50S with 50000 parcels, and 100R and 100S with 100000 parcels each. The generation of
pairs of sets sought to ensure a controlled number of intersecting parcels. The database was initially
divided into three parts corresponding to three geographic regions of South Australia, so
that the upper part contained 280000 parcels, the middle 245000 parcels and the lower 237000
parcels. The set 10R was generated by randomly selecting two thirds of the records (i.e., 6666
parcels) from the upper part and one third (i.e., 3334 parcels) from the middle. The set 10S
was generated by selecting one third of the records from the middle part and two thirds from
the lower. The objects of S are translated by 100m northward and eastward. In this way, the
objects in the database that appear in both R and S become "different". The other pairs of test
sets were similarly generated. Tests showed that there were 135 intersections between objects
in the pair 10R and 10S, 3253 in the pair 50R and 50S, and 13096 in the pair 100R and 100S.
The performance of the various algorithms were compared on the total time for the distributed
spatial join processing. We omit the cost of producing the final result since it is the
same for all strategies. The total cost to perform a distributed spatial join is given as
C_total = C_transmit + C_cpu + C_io,
where C_transmit, C_cpu and C_io respectively represent the transmission, CPU and I/O cost required
for the join.
The transmission cost incurred includes the cost of transmitting the approximations of S,
0 , from S site to R site , and the cost of transmitting records of R that will participate in the
final join, R 0 , from R site to S site .
The CPU cost comprises several components. These include the cost to initiate I/Os, to
initiate sending and receiving of objects/approximations, to extract the approximations, to
perform the semijoin at R site and to perform the final join at S site .
The I/O cost comes from fetching records for generating S 0 at S site , storing S 0 at R site ,
fetching S 0 for and performing the semijoin, fetching R 0 , storing R 0 at S site , refetching R 0 for
and performing the final join.
The various algorithms were implemented on a SUN SPARC-10 machine. The evaluation of
the algorithms was conducted by performing the semijoin and the final join using the data sets.
These provided information on the size of the approximations used to evaluate the semijoin
and the size of the semijoin result. We also monitored the CPU usage and the number of page
accesses. While the value of the CPU cost monitored reflects the CPU usage, the transmission
and I/O cost are computed respectively as follows:
C_transmit = (number of bytes transmitted) / ω_bandwidth,
C_io = (number of page accesses) × ω_io,
where ω_bandwidth represents the bandwidth of the communication network and ω_io represents the
cost per page access.
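For concreteness, the cost model can be evaluated as in the short Python sketch below; the parameter names and the example values are ours:

# Illustrative evaluation of the cost model C_total = C_transmit + C_cpu + C_io.
def total_cost(bytes_transmitted, page_accesses, cpu_seconds,
               bandwidth_bytes_per_sec=100 * 1024, io_sec_per_page=0.025):
    c_transmit = bytes_transmitted / bandwidth_bytes_per_sec
    c_io = page_accesses * io_sec_per_page
    return c_transmit + cpu_seconds + c_io

# e.g., shipping 1 MByte of approximations and reading 500 pages:
cost = total_cost(bytes_transmitted=1 << 20, page_accesses=500, cpu_seconds=2.0)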
In our study, unless otherwise stated, ω_bandwidth is 100 KBytes/sec, the I/O cost ω_io for an 8
KBytes block is 0.025 sec/block, and the system has a buffer size of 10 MBytes. Both the R-trees
and the B+-trees of locational keys were implemented in C.
Two series of experiments were conducted. The first evaluates the algorithms that make
use of approximations obtained from the R-tree, and the second examines the performance
of the locational key algorithms. We also conducted an experimental study on the relative
performance of the R-tree based and locational key based algorithms.
5.2. Results for Algorithms Using Multi-Dimensional Approximations
The R-tree index node is 2KBytes. Each node has at most 56 entries and at least 28 entries.
The characteristics of the trees are summarized in Table 2.
                      Relation R              Relation S
Cardinality           10K    50K    100K      10K    50K    100K
Height of the tree    3      3      4         3      3      4
Number of nodes       285    1415   2829      281    1397   2838
Table 2: Properties of the R-trees.
Following the distributed join strategies described in Section 4.1, we studied the following
three strategies.
ffl Algorithm RT-N. This is the naive algorithm that transmits R to S site , and evaluates
the join directly as a sequence of selections on S.
ffl Algorithm RT-L. The MBR of each object of S is used as its approximation, and these
MBRs are taken from the leaf nodes of the R-tree for S. The MBRs are transmitted to
R site to reduce R before R is sent to S site for the final join.
ffl Algorithm RT-I. This algorithm is similar to algorithm RT-L except the approximations
of S are taken from the internal nodes at the level immediately above the leaf level
of the R-tree for S.
Figure 5 shows the result of the experiment when both relations R and S are of the same
size. In the figure, the lower, middle and upper bars represent the CPU, transmission and I/O
cost respectively. The corresponding information on the number of objects transmitted for the
semijoin-based algorithms are tabulated in Table 3. In the Table, S 0 denotes the number of
approximations (i.e., MBRs) of S transmitted to R site for the semijoin, and R 0 is the number
of R objects satisfying the semijoin and to be transmitted to S site for the final join.
A number of observations can be made, bearing in mind they are valid only within the
context of the characteristics of the data sets used (particularly the short spatial descriptions
of the Land Information Systems data) and of the relatively high transmission rate used.

                 RT-L                  RT-I
Cardinality      S′        R′          S′        R′
100K/100K        100000    17112       2759      33319
Table 3: Number of objects transmitted for R-tree-based algorithms.

First, for all the algorithms, the local processing cost (C_cpu + C_io) is significantly higher than the
communication cost. This confirms that local processing cost cannot be ignored during query
processing for distributed spatial joins. We note that several recent join algorithms [5, 15, 18, 27]
can be employed for local join processing. While these techniques may reduce the CPU and
I/O cost, we expect the relative performance between the various schemes to remain.
Figure 5: Comparisons of R-tree based algorithms (CPU, transmission and I/O components of total time, in seconds, for RT-N, RT-L and RT-I on 10R ⋈ 10S, 50R ⋈ 50S and 100R ⋈ 100S).
Second, the result shows the effect of choosing the number of MBRs to approximate S.
There is a tradeoff between the number of MBRs used to approximate S and the number of
false drops. As shown in Table 3, a large number of MBRs results in a small number of false
drops, and a small number of MBRs leads to a large number of false drops. From Figure 5, we
see that algorithm RT-L, which uses a large number of MBRs to approximate S, performs worse
than algorithm RT-N. RT-L managed to reduce the communication cost through the semijoin
but its processing cost is much larger than those of RT-N. This is because RT-L requires as
many R-tree probes to perform the semijoin as RT-N takes to perform the final join. The
additional overhead incurred for the final join is more than enough to offset the benefits of
transmitting the reduced R. On the other hand, algorithm RT-I outperforms algorithm RT-N.
Transmitting a smaller number of MBRs leads to lower communication and processing cost for
the semijoin. However, a larger number of false drops results in higher communication and
processing cost for the final join. It turns out that the total cost is reduced. The particular
significance of this observation is that the containing rectangles of the internal nodes of an R-tree
one level above the leaf nodes is a good source of approximating rectangles to the objects
of the indexed relation. This means that a semijoin algorithm can re-use the spatial index
normally recommended for large spatial relations.
To study the effect that the size of the joining relations has on the algorithms, we conducted
experiments using relations of different sizes. The results are summarized in Figure 6. Several
interesting observations can be made. First, when R is large as compared to S (see the cases
for 100R ⋈ 10S and 50R ⋈ 10S), RT-L performs best. When joining a large R with a small S,
the effect of false drops becomes significant. For algorithms RT-N and RT-I, the large number
of false drops leads to high communication cost and high processing cost for the final join.
Second, algorithm RT-I outperforms RT-N in all cases. This shows that semijoin algorithms
are effective for distributed spatial databases. Before leaving this section, we would like to
point out that we have not exploited the obvious optimization of treating R as the larger of the
relations. While this may help in some cases, it is not expected to be so in our context as the
result site is fixed at S site (which means an additional phase of shipping the result from R site to
site would be necessary if the optimization is adopted for small R and large S relation pairs).
Figure 6: R-tree based joins of different relation sizes (total time, in seconds, for RT-N, RT-L and RT-I).
To understand more fully the effects of transmission rates on the algorithms, we also performed
sensitivity studies on the transmission rates. Figures 7(a)-7(c) show the results of these
tests when the communication bandwidth varies from 20 KBytes/sec to 100 KBytes/sec, and
Figure 7(d) shows the result for 100R ⋈ 100S when the communication bandwidth varies from
100 KBytes/sec to 1 MBytes/sec. This effectively models the range of network speeds from
WAN to LAN. In all the experiments, the length of spatial description is fixed at 48 bytes.
The result shows that when the communication bandwidth is low, both the
semijoin based algorithms are more efficient than the naive approach. At low communication
bandwidth, transmission cost may become a dominant component in the total processing cost.
The ability to prune away objects that do not satisfy the join conditions makes the semijoin
based algorithms effective and attractive. On the other hand, the transmission cost for the
naive strategy to transmit the entirety of R from R site to S site is very high, resulting in its
poor performance. However, as the bandwidth increases, the naive strategy slowly catches up
with the semijoin based techniques. Beyond a certain bandwidth, RT-N outperforms
RT-L, and at bandwidth greater than 300 KBytes/sec, it performs best. This is because, at
higher communication bandwidth, the communication cost decreases and the processing cost
dominates performance. It turns out that the semijoin-based algorithms incur higher processing
cost for small spatial descriptions. This is because the semijoin-based algorithms need to write
out the approximations at R site and S site , and the result of the semijoin at S site . Moreover,
most of these I/Os are incurred twice - one to write, and the other to reread. We see a tradeoff
between the total I/O cost incurred and the I/O cost saved by the semijoin-based algorithms.
Note that the I/O cost saved comes from two components: number of false drops reduced by
the algorithm (the naive method did not reduce any false drops), and the size of the spatial
descriptions. When the length of spatial descriptions is small, the I/O cost saved is relatively
small.
The length of spatial descriptions affects both the I/O and communication cost. Figures
8(a)-8(c) show the results as we vary the length of the spatial descriptions from 50 bytes
to 2 KBytes. The lower end of the range (towards 50 bytes) models LIS applications with their
small spatial descriptions while the higher end (towards 2KBytes) represents GIS applications
with long descriptions. In this study, the communication bandwidth of 100 KBytes/sec was
used. The result shows that RT-I always outperforms RT-N, regardless of the size of the spatial
descriptions. RT-L outperforms RT-N for large spatial descriptions. This is due to the significantly
higher transmission cost and I/O cost for RT-N. Comparing RT-I and RT-L, we note
that RT-L outperforms RT-I for large spatial descriptions. As the size of spatial descriptions in-
creases, false drops will result in more data being transmitted leading to higher communication
and I/O cost and hence RT-L becomes comparable to RT-I.
From the experiments, there is no doubt about the effectiveness of using semijoins since
RT-N is always worse than either RT-I or RT-L (except for very high communication band-
width). However, the choice of the approximations is critical to the effectiveness of a semijoin.
For applications with small spatial descriptions, RT-I is the best choice. On the other hand,
for applications with large spatial descriptions, RT-L proves to be effective. The results are
expected because when the spatial description is small, unless the reduced R is very small,
the saving will not be significant. In this case, the overhead of transmitting the (reducing)
approximations of S should not be too high for semijoin-based algorithms to be effective. As
such, RT-I which has a smaller number of approximations is the better scheme. When the
spatial description is large, the saving from transmitting spatial objects that are of no interest
to the answer is much higher; more accurate approximations are then more effective, and the
cost of transmitting these approximations is much lower than the cost of transmitting the real
objects.

Figure 7: R-tree based joins with varying communication bandwidth (total time vs. bandwidth for RT-N, RT-L and RT-I; (a) 10R ⋈ 10S, (b) 50R ⋈ 50S, (c) 100R ⋈ 100S over 20-100 KBytes/sec, and (d) 100R ⋈ 100S over 100 KBytes/sec to 1 MBytes/sec).
Before leaving this section, we would like to add that the data set that we used for land
parcels is densely populated. For sparsely populated data sets, it may be possible for RT-L to
outperform RT-I, since the coarser MBRs used by RT-I are likely to generate a large number of false drops.
5.3. Results for Algorithms Using Single Dimensional Object Mapping
In this section, we report on the experiments and results for the algorithms based on single
dimensional object mapping. We note that most of the results in this section have also been
reported in [4]. From the data sets, we generated the locational key values of the objects.

Figure 8: R-tree based joins with varying record width (total time vs. record width for RT-N, RT-L and RT-I).

Three
strategies based on locational keys were evaluated. They are:
ffl Algorithm QT-N. This is the naive algorithm which transmits R and the associated
locational keys to S site , and evaluates the join directly. 3
ffl Algorithm QT-4. This algorithm uses a semijoin. The locational keys of all objects
in S are used as the approximations. These locational keys are readily available from the
leaf nodes of the tree for S. In our study, each object has at most four locational
keys. 4 Duplicate keys are removed before transmission.
3 We have conducted preliminary tests on a nested-loops algorithm that transmits R to S site , and performs the
join as a series of spatial selections of R records on S. The results turned out to be bad. It is always worse
than algorithm QT-N, and performed as much as 4 times worse than algorithm QT-N. As such, we omitted the
nested-loops algorithm in our experiments.
4 We can derive a family of algorithms with different numbers of locational keys. Our preliminary study, however,
shows that 4 locational keys perform well.
ffl Algorithm QT-C. In algorithm QT-4, there may be redundancy in the approximations
of S. First, a region represented by a locational key may be contained in a region
represented by another locational key. Second, 4 regions represented by 4 distinct locational
keys may be merged into a bigger region represented by a single locational key. To
remove such redundancy, the approximations of S are compacted before transmitting to
R site . We denote the new algorithm QT-C. Note that compaction does not result in any
loss of information. While compaction minimizes the size of the approximations to be
transmitted and the cost of the semijoin, it also incurs additional CPU cost to compact
the approximations.
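A possible compaction procedure is sketched below (our own Python illustration, not the authors' code): keys covered by an ancestor key already in the set are dropped, and any four sibling keys that are all present are merged into their parent; both steps preserve the covered region exactly, so compaction loses no information:

# Illustrative sketch of locational-key compaction (QT-C style).
# Keys are quadtree paths over the digits '0'..'3'.
def compact(keys):
    keys = set(keys)
    changed = True
    while changed:
        changed = False
        # Drop any key that is covered by an ancestor already in the set.
        for k in list(keys):
            if any(k[:i] in keys for i in range(1, len(k))):
                keys.discard(k)
                changed = True
        # Merge four complete siblings into their parent key.
        for k in list(keys):
            parent = k[:-1]
            if parent and all(parent + d in keys for d in "0123"):
                for d in "0123":
                    keys.discard(parent + d)
                keys.add(parent)
                changed = True
    return sorted(keys)

# compact(["120", "121", "122", "123", "13", "1300"]) -> ["12", "13"]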
With the data sets that we have, each object has an average of 2.3 locational keys; for example,
the 10K relations each have about 23K locational keys. Table 4 summarizes the key counts for the
various relations used.
                        Relation R                    Relation S
Cardinality             10K      50K      100K        10K      50K      100K
Total number of keys    23189    115484   231426      23175    115904   232169
Table 4: Total number of locational keys.
We repeated the same sets of experiments on the multi-dimensional approximation based
algorithms using the locational key algorithms. The results are summarized in Figures 9-11.
In Figure 9, both relations R and S are of the same size. The corresponding information on the
data transmitted is shown in Table 5. Here, S′ corresponds to the number of locational keys
of S transmitted to R site , R 0 is the cardinality of the semijoin result (i.e., number of objects of
R that satisfy the semijoin), and R 00 is the corresponding number of locational keys transmitted
for the semijoin result.
First, we observe that, unlike the multi-dimensional approximation based algorithms, the
communication cost for locational key based algorithms is more significant. Two reasons account
for this - the lower local processing (especially CPU) cost and the larger amount of data to be
transmitted. By using a sort-merge join method, the two lists of locational keys can be scanned
simultaneously, resulting in low CPU cost for the semijoin and final join. On the other hand,
the size of the approximations could be large, and transmitting the approximations becomes
costly.
Second, we note that the algorithms based on semijoins outperform the naive approach. For
algorithm QT-N, the communication cost dominates performance. Algorithm QT-4 performs
better since it is able to cut down the size of the data to be transmitted. Algorithm QT-C
outperforms QT-4 slightly because it reduces the size of the approximations to be transmitted.
From the result, we see that QT-C is more effective for large S. This is because for large S,
more approximations can be compacted.
Figure 10 shows the results of the experiments when R and S have different relation sizes.
Table 5: Number of objects transmitted for locational key based algorithms (S′, R′ and R′′ for QT-4 and QT-C).

Figure 9: Comparisons of locational key based algorithms (CPU, transmission and I/O components of total time, in seconds, for QT-N, QT-4 and QT-C on 10R ⋈ 10S, 50R ⋈ 50S and 100R ⋈ 100S).
Both algorithms QT-4 and QT-C are inferior to QT-N when S is larger than R. This is
expected, since for large S, transmitting the large number of approximations of S to R site is
not less expensive than transmitting the entirety of the smaller R to S site . Thus, the savings
gained in reducing R cannot outweigh the overhead.
Figures 11(a)-11(c) show the results for spatial joins of different relation sizes when the
communication bandwidth varies from 20 KBytes/sec to 100 KBytes/sec. The result as communication
bandwidth varies from 100 KBytes/sec to 1 MBytes/sec is shown in Figure 11(d).
As shown in the figures, the semijoin-based algorithms are more effective at lower communication
bandwidth (! 300 KBytes/sec). This is expected since communication cost is critical in
locational key based algorithms as more data are transmitted.
Figure 10: Locational keys based joins of different relation sizes (total time, in seconds, for QT-N, QT-4 and QT-C).

In Figures 12(a)-12(c), we note that the semijoin-based algorithms also perform best for
varying length of spatial descriptions. The longer the spatial descriptions, the higher the communication
cost (and I/O cost). This leads to the poor performance of the naive methods.
Note that the difference between the two semijoin algorithms is not significant. In fact, the
difference decreases as the length increases. The compaction of locational keys only benefits
the transmission of the approximations which do not change with the length of the spatial
descriptions, i.e., the total cost is increasing while the saving due to compaction stays constant.
5.4. Comparison of Multi-Dimensional and Single-Dimensional Approximation Algorithms
A fair comparison of multi-dimensional and single-dimensional approximation algorithms should
include a comprehensive study on (1) a set of operations (e.g. selection, intersection-join,
adjacency-join, etc.), (2) a large set of data, and (3) different applications (e.g. GIS, LIS).
We would thus note that our comparative study between the R-tree and locational key algorithms
reported in this section is valid only with respect to our data sets. Furthermore, we
have restricted our evaluation to only the intersection-join operation.
Based on the results of the experiments on the R-tree based and locational keys based
algorithms, we compare RT-I with QT-C. For simplicity, we denote these two algorithms as RT
and QT respectively. Figures 13-16 present some representative sets of results.
From the results, we note that algorithm QT outperforms RT in almost all cases. As seen in
Figure
13, algorithm RT is very costly in terms of local processing, especially the CPU cost in
probing the R-tree. On the other hand, QT requires only one scan of sorted lists of locational
which makes it very efficient. In the largest test data sets used (i.e., 100R s
./ 100S), a
cache size of 20 pages (4KBytes each) is enough to contain all records of R 0 that join with
multiple records of S. In other words, both relations need to be read only once.
Figure 11: Locational keys based joins with varying communication bandwidth (total time vs. bandwidth for QT-N, QT-4 and QT-C; (a)-(c) over 20-100 KBytes/sec, (d) 100R ⋈ 100S over 100 KBytes/sec to 1 MBytes/sec).

Because of the high CPU cost of RT, we decided to investigate the effect of a faster CPU
with fixed I/O and communication costs. Figure 14 shows the result for 100R ⋈ 100S using
the default setting. In the figure, the x-axis denotes the CPU speed factor, e.g., a value of 2
refers to a processor speed twice of that used in our study. As shown in the figure, with a faster
processor, the RT can outperform QT. However, this is achieved only with very fast processors
(factor ? 6).
Figure 15 shows the results as the communication bandwidth varies. In Figure 15(a), the
spatial description is short (50 bytes). We note that while QT is superior for a wide range of
bandwidth, the gain in performance over RT decreases as the bandwidth decreases. Moreover,
when the spatial description is large (1000 bytes), RT performs equally well as QT at low
bandwidth (see Figure 15(b)). In both cases, the reason is that QT transmits more data - both
the locational keys and the data. Recall that the locational keys of an object are obtained
from its MBR. This means that QT is less effective in reducing R during the semijoin, i.e., for
QT, the number of false drops is larger. In our study, we find that the false drops produced by
QT are twice as many as those produced by RT. As a result, more data is transmitted and more
I/Os are incurred.

Figure 12: Locational key based joins with varying record width (total time vs. record width for QT-N, QT-4 and QT-C).
Figure 16 compares the two algorithms for varying length of spatial descriptions. Again,
the study was conducted on both low and high communication bandwidth (20 and 100 KB/sec
respectively). The results confirm our earlier observation that QT is superior for high communication
bandwidth and small spatial descriptions. On the other hand, RT performs better at
low communication bandwidth and large spatial descriptions.
To summarize, the results show that RT and QT perform equally well for GIS applications.
For LIS applications, QT is best. However, with a faster CPU, RT can be
promising.
Figure 13: Comparisons of R-tree and locational keys based algorithms (CPU, transmission and I/O components of total time, in seconds, for RT and QT on 10R ⋈ 10S, 50R ⋈ 50S and 100R ⋈ 100S).
6. Conclusions
In this paper, we have presented the spatial semijoin operator as a basis for improved algorithms
for evaluation of joins on distributed spatial databases. Its formulation and application draw
on the concepts of the semijoin of conventional databases and of the filter test of spatial query
processing. The application of a spatial semijoin in two families of algorithms, one based
on the use of multi-dimensional approximations and the other based on the use of single-dimensional
locational keys, has been described. The multi-dimensional approximation based
semijoin algorithms use the minimum bounding rectangles derived from the nodes at a certain
level of the R-tree as approximations. On the other hand, the single-dimensional locational key
based algorithms exploit sort-merge techniques by representing each object with at most four
single-dimensional locational keys.
Our experimental study showed that semijoin algorithms provide useful reductions in the
cost of evaluating a join in most cases. For the algorithms that are based on the approximations
in R-tree, the choice of the number of approximations is critical for different domains
of application. In particular, for Geographic Information System (GIS) applications, a larger
number of approximations is preferred. However, in applications like Land Information System
(LIS) applications, a smaller number of approximations results in better performance. For the
locational key algorithms, while compacting large relations could reduce the cost, the gain is
not significant, especially for applications with large spatial descriptions. The comparison between
these two families of algorithms showed that both locational key and multi-dimensional
approximation based semijoin algorithms are suited for GIS applications, while the locational
key based semijoin algorithms are more effective for LIS applications. However, with a faster
CPU, we can expect the R-tree based methods to be promising.

Figure 14: Effect of CPU speed (total time for 100R ⋈ 100S as the CPU speed factor increases, for RT and QT).
We plan to extend this work in several directions. First, we would like to cut down the
high CPU cost of the R-tree based algorithms. Second, we have not adequately addressed the
issue of buffering. Recent results on the effect of buffering, and buffering schemes will provide
a good starting point [16]. Finally, for the locational key based techniques, besides compaction
which is non-loss, we plan to look at how approximations may impact the performance of the
algorithm.
Acknowledgement
We would like to thank Robert Power (of CSIRO) and Jeffrey X. Yu (of Dept of Computer
Science, Australian National University) for their contributions in discussion. They have also
read an earlier draft of this paper and provided helpful comments. Robert Power also helped set
up the experiments for this work. The anonymous referees also provided insightful comments
that help improve the paper.
--R
Some evolutionary paths for spatial database.
The virtual database: A tool for migration from legacy lis.
Spatial join strategies in distributed spatial dbms.
Scalable sweeping-based spatial join
Using semi-joins to solve relational queries
Efficient processing of spatial joins using r-trees
Processing joins with user-defined functions
Efficient computation of spatial joins.
A dynamic index structure for spatial searching.
Spatial joins using r-trees: Breadth-first traversal with global optimizations
Size separation spatial joins.
The effect of buffering on the performance of r-trees
Spatial joins using seeded trees.
Spatial hash-joins
Spatial joins by precomputation of approximation.
A computer oriented geodetic data base and a new technique in file sequencing.
The grid file: An adaptable
Spatial query processing in an object-oriented database system
A comparison of spatial query processing techniques for naive and parameter spaces.
An algorithm for computing the overlay of k-dimensional spaces
Principles of distributed database systems.
Spatial join indices.
The Design and Analysis of Spatial Data Structures.
Filter trees for managing spatial data over a range of size granularities.
GIS Planning - Land Status and Assets Management
Join indices.
Optimization of n-way spatial joins using filters
Query optimization for gis using filters.
Distributed query processing.
--TR
--CTR
Orlando Karam , Fred Petry, Optimizing distributed spatial joins using R-Trees, Proceedings of the 43rd annual southeast regional conference, March 18-20, 2005, Kennesaw, Georgia
Edwin H. Jacox , Hanan Samet, Spatial join techniques, ACM Transactions on Database Systems (TODS), v.32 n.1, p.7-es, March 2007 | locational keys;spatial indexes;r-tree;spatial semijoin;distributed spatial database systems;query processing |
628111 | Finding Interesting Associations without Support Pruning. | AbstractAssociation-rule mining has heretofore relied on the condition of high support to do its work efficiently. In particular, the well-known a priori algorithm is only effective when the only rules of interest are relationships that occur very frequently. However, there are a number of applications, such as data mining, identification of similar web documents, clustering, and collaborative filtering, where the rules of interest have comparatively few instances in the data. In these cases, we must look for highly correlated items, or possibly even causal relationships between infrequent items. We develop a family of algorithms for solving this problem, employing a combination of random sampling and hashing techniques. We provide analysis of the algorithms developed and conduct experiments on real and synthetic data to obtain a comparative performance analysis. | Introduction
A prevalent problem in large-scale data mining is that of association-rule mining, first introduced
by Agrawal, Imielinski, and Swami [1]. This challenge is sometimes referred to as the market-basket
problem due to its origins in the study of consumer purchasing patterns in retail stores,
although the applications extend far beyond this specific setting. Suppose we have a relation R
containing n tuples over a set of boolean attributes A_1, A_2, ..., A_m. Let I = {A_{i_1}, ..., A_{i_k}}
and J = {A_{j_1}, ..., A_{j_l}} be two sets of attributes. We say that I ⇒ J is an association rule if the
following two conditions are satisfied: support - the set I ∪ J appears in at least an s-fraction of
the tuples; and, confidence - amongst the tuples in which I appears, at least a c-fraction also have
J appearing in them. The goal is to identify all valid association rules for a given relation.
To some extent, the relative popularity of this problem can be attributed to its paradigmatic
nature, the simplicity of the problem statement, and its wide applicability in identifying hidden patterns
in data from more general applications than the original market-basket motivation. Arguably
though, this success has as much to do with the availability of a surprisingly efficient algorithm,
the lack of which has stymied other models of pattern-discovery in data mining. The algorithmic
efficiency derives from an idea due to Agrawal et al [1, 2], called a-priori, which exploits the support
requirement for association rules. The key observation is that if a set of attributes S appears in a
fraction s of the tuples, then any subset of S also appears in a fraction s of the tuples.
This principle enables the following approach based on pruning: to determine a list L k of all
k-sets of attributes with high support, first compute a list L_{k-1} of all (k-1)-sets of attributes
of high support, and consider as candidates for L_k only those k-sets that have all their
subsets in L_{k-1}. Variants and enhancements of this approach underlie essentially all known efficient
algorithms for computing association rules or their variants. Note that in the worst case, the
problem of computing association rules requires time exponential in m, but the a-priori algorithm
avoids this pathology on real data sets. Observe also that the confidence requirement plays no role
in the algorithm, and indeed is completely ignored until the end-game, when the high-support sets
are screened for high confidence.
Our work is motivated by the long-standing open question of devising an efficient algorithm
for finding rules that have extremely high confidence, but for which there is no (or extremely low)
support. For example, in market-basket data the standard association-rule algorithms may
be useful for commonly-purchased (i.e., high-support) items such as "beer and diapers," but are
essentially useless for discovering rules such as that "Beluga caviar and Ketel vodka" are always
bought together, because there are only a few people who purchase either of the two items. We
develop a body of techniques which rely on the confidence requirement alone to obtain efficient
algorithms.
There are two possible objections to removing the support requirement. First, this may increase
the number of rules that are produced and make it difficult for a user to pin-point the rules
of interest. But note that in most of the applications described next, the output rules are not
intended for human analysis but rather for an automated analysis. In any case, our intent is to
seek low-support rules with confidence extremely close to 100%; the latter will substantially reduce
the output size and yet leave in the rules that are of interest. Second, it may be argued that
rules of low support are inherently uninteresting. While this may be true in the classical market-basket
applications, there are many applications where it is essential to discover rules of extremely
high confidence without regard for support. We discuss these applications briefly and give some
supporting experimental evidence before turning to a detailed description of our results.
Names               Phrases                   Misc              Relations
(Dalai, Lama)       (pneumocystis, carinii)   (avant, garde)    (encyclopedia, Britannica)
(Meryl, Streep)     (meseo, oceania)          (mache, papier)   (Salman, Satanic)
(Bertolt, Brecht)   (fibrosis, cystic)        (cosa, nostra)    (Mardi, Gras)
(Buenos, Aires)     (hors, oeuvres)                             (emperor, Hirohito)
(Darth, Vader)      (presse, agence)
Figure 1: Examples of different types of similar pairs found in the news articles.

One motivation for seeking such associations, of high confidence but without any support
requirement, is that most rules with high support are obvious and well-known, and it is the rules
of low-support that provide interesting new insights. Not only are the support-free associations a
natural class of patterns for data mining in their own right, they also arise in a variety of applications
such as: copy detection - identifying identical or similar documents and web pages [4, 13];
clustering - identifying similar vectors in high-dimensional spaces for the purposes of clustering
data [6, 9]; and, collaborative filtering - tracking user behavior and making recommendations to
individuals based on similarity of their preferences to those of other users [8, 16]. Note that each
of these applications can be formulated in terms of a table whose columns tend to be sparse, and
the goal is to identify column pairs that appear to be similar, without any support requirement.
There are also other forms of data mining, e.g., detecting causality [15], where it is important to
discover associated columns, but there is no natural notion of support.
We describe some experimental results for one such application: mining for pairs of words that
occur together in news articles obtained from Reuters. The goal was to check whether low-support
and high-confidence pairs provided any interesting information. Indeed the similar pairs proved
to be extremely interesting as illustrated by the representative samples provided in Figure 1. A
large majority of the output pairs were names of famous international personalities, cities, terms
from medicine and other fields, phrases from foreign languages and other miscellaneous items like
author-book pairs and organization names. We also obtained clusters of words, i.e., groups of words
in which most pairs have high similarity. An example is the cluster (chess, Timman, Karpov,
Soviet, Ivanchuk, Polger) which represents a chess event. It should be noted that the pairs
discovered have very low support and would not be discovered under the standard definition of
association rules. Of course, we can run the a-priori algorithm with very low support definition,
but this would be really slow as indicated in the running time comparison provided in Section 5.
2 Summary of Results
The notion of confidence is asymmetric or uni-directional, and it will be convenient for our purpose
to work with a symmetric or bi-directional measure of interest. At a conceptual level, we view the
data as a 0/1 matrix M with n rows and m columns. Typically, the matrix is fairly sparse and
we assume that the average number of 1s per row is r and that r !! m. (For the applications we
have in mind, n could be as much as could be as large as 10 6 , and r could be as small as
as the set of rows that have a 1 in column c i ; also, define the density of column c i
as We define the similarity of two columns c i and c j as
That is, the similarity of c i and c j is the fraction of rows, amongst those containing a 1 in either
c i or c j , that contain a 1 in both c i and c j . Observe that the definition of similarity is symmetric
with respect to c i and c j ; in contrast, the confidence of the rule fc i g ) fc j g is given by
To identify all pairs of columns with similarity exceeding a prespecified threshold is easy when the
matrix M is small and fits in main memory, since a brute-force enumeration algorithm requires
O(m 2 n) time. We are more interested in the case where M is large and the data is disk-resident.
In this paper, our primary focus is on the problem of identifying all pairs of columns with
similarity exceeding a pre-specified threshold s . Restricting ourselves to this most basic version
of the problem will enable us clearly to showcase our techniques for dealing with the main issue
of achieving algorithmic efficiency in the absence of the support requirement. It is possible to
generalize our techniques to more complex settings, and we discuss this briefly before moving on
to the techniques themselves. It will be easy to verify that our basic approach generalizes to
the problem of identifying high-confidence association rules on pairs of columns, as discussed in
Section 6. We omit the analysis and experimental results from this version of the paper, as the
results are all practically the same as for high-similarity pairs. It should be noted that several recent
papers [3, 14, 15] have expressed dissatisfaction with the use of confidence as a measure of interest
for association rules and have suggested various alternate measures. Our ideas are applicable to
these new measures of interest as well. A major restriction in our work is that we only deal with
pairs of columns. However, we believe that it should be possible to apply our techniques to the
identification of more complex rules; this matter is discussed in more detail in Section 6.
All our algorithms for identifying pairs of similar columns follow a very natural three-phase
approach: compute signatures, generate candidates, and prune candidates. In the first phase, we
make a pass over the table generating a small hash-signature for each column. Our goal is to deal
with large-scale tables sitting in secondary memory, and this phase produces a "summary" of the
table that will fit into main memory. In the second phase, we operate in main memory, generating
candidate pairs from the column signatures. Finally, in the third phase, we make another pass
over the original table, determining for each candidate pair whether it indeed has high similarity.
The last phase is identical in all our algorithms: while scanning the table data, maintain for each
candidate column-pair the counts of the number of rows having a 1 in at least one of the
two columns and also the number of rows having a 1 in both columns. Consequently, we limit the
ensuing discussion to the proper implementation of only the first two phases.
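A sketch of that common third phase is shown below (our own Python illustration; rows are streamed as sets of the column indices containing a 1):

# Illustrative third phase: verify candidate column pairs in one pass over the rows.
def prune_candidates(rows, candidates, threshold):
    """rows: iterable of sets of column ids with a 1; candidates: set of (ci, cj)."""
    both = {p: 0 for p in candidates}     # rows with a 1 in both columns
    either = {p: 0 for p in candidates}   # rows with a 1 in at least one column
    for row in rows:
        for (ci, cj) in candidates:
            i, j = ci in row, cj in row
            if i or j:
                either[(ci, cj)] += 1
                if i and j:
                    both[(ci, cj)] += 1
    return [p for p in candidates
            if either[p] > 0 and both[p] / either[p] >= threshold]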
The key ingredient, of course, is the hashing scheme for computing signatures. On the one
hand, it needs to be extremely fast, produce small signatures, and be able to do so in a single
pass over the data. Competing with this goal is the requirement that there are not too many
false-positives, i.e., candidate pairs that are not really highly-similar, since the time required for
the third phase depends on the number of candidates to be screened. A related requirement is that
there are extremely few (ideally, none) false-negatives, i.e., highly-similar pairs that do not make it
to the list of candidates.
In Section 3 we present a family of schemes based on a technique called Min-Hashing (MH) which
is inspired by an idea used by Cohen [5] to estimate the size of transitive closure and reachability
sets (see also Broder [4]). The idea is to implicitly define a random order on the rows, selecting
for each column a signature that consists of the first row index (under the ordering) in which the
column has a 1. We will show that the probability that two columns have the same signature is
proportional to their similarity. To reduce the probability of false-positives and false-negatives, we
can collect k signatures by independently repeating the basic process or by picking the first k rows
in which the column has 1's. The main feature of the Min-Hashing scheme is that, for a suitably
large choice of k, the number of false-positives is fairly small and the number of false-negatives is
essentially zero. A disadvantage is that as k rises, the space and time required for the second phase
(candidate generation) increases.
Our second family of schemes, called Locality-Sensitive Hashing (LSH), is presented in Section 4
and is inspired by the ideas used by Gionis, Indyk, and Motwani [7] for high-dimensional nearest
neighbors (see also Indyk and Motwani [11]). The basic idea here is to implicitly partition the set of
rows, computing a signature based on the pattern of 1's of a column in each subtable; for example,
we could just compute a bit for each column in a subtable, denoting whether the number of 1's
in the column is greater than zero or not. This family of schemes suffers from the disadvantage
that reducing the number of false-positives increases the number of false-negatives, and vice versa,
unlike in the previous scheme. While it tends to produce more false-positives or false-negatives, it
has the advantage of having much lower space and time requirements than Min-Hashing.
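One toy instance of this idea, sketched below with illustrative names (and not necessarily the exact variant developed later in the paper), assigns rows to k subtables round-robin and records one bit per column per subtable; a real implementation would bucket columns by their signatures rather than scan all pairs:

# Toy illustration of a locality-sensitive signature: partition the rows into k
# subtables and store, per column, one bit per subtable (any 1 present or not).
from itertools import combinations

def lsh_bit_signatures(rows, m, k):
    """rows: list of sets of column ids with a 1; returns a k x m bit matrix."""
    sig = [[0] * m for _ in range(k)]
    for r, row in enumerate(rows):
        band = r % k                      # implicit partition of rows into k subtables
        for c in row:
            sig[band][c] = 1
    return sig

def lsh_candidate_pairs(sig, m, agree_frac):
    k = len(sig)
    cands = set()
    for ci, cj in combinations(range(m), 2):
        agree = sum(1 for b in range(k) if sig[b][ci] == sig[b][cj])
        if agree >= agree_frac * k:
            cands.add((ci, cj))
    return cands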
We have conducted extensive experiments on both real and synthetic data, and the results
are presented in Section 5. As expected, the experiments indicate that our schemes outperform
the a-priori algorithm by orders of magnitude. They also illustrate the point made above about
the trade-off between accuracy and speed in our two algorithms. If it is important to avoid any
false-negatives, than we recommend the use of the Min-Hashing schemes which tend to be slower.
However, if speed is more important than complete accuracy in generating rules, than the Locality-
Sensitive Hashing schemes are to be preferred. We conclude in Section 6 by discussing the extensions
of our work alluded to earlier, and by providing some interesting directions for future work.
3 Min-Hashing Schemes
The Min-Hashing scheme used an idea due to Cohen [5], in the context of estimating transitive
closure and reachability sets. The basic idea in the Min-Hashing scheme is to randomly permute
the rows and for each column c i compute its hash value h(c i ) as the index of the first row under
the permutation that has a 1 in that column. For reasons of efficiency, we do not wish to explicitly
permute the rows, and indeed would like to compute the hash value for each column in a single
pass over the table. To this end, while scanning the rows, we will simply associate with each row
a hash value that is a number chosen independently and uniformly at random from a range R.
Assuming that the number of rows is small relative to the square root of the hash range, it
will suffice to choose the hash value as a random 32-bit integer, avoiding the "birthday paradox" [12]
of having two rows get identical hash values. Furthermore, while scanning the table and assigning random hash values to the rows, for
each column c i we keep track of the minimum hash value of the rows which contain a 1 in that
column. Thus, we obtain the Min-Hash value h(c i ) for each column c i in a single pass over the
table, using O(m) memory.
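The single-pass computation just described can be sketched as follows. This is a minimal illustration only; the function name, the streamed row format (each row given as the list of column indexes containing a 1), and the use of random 32-bit integers are our own choices rather than anything prescribed by the text.

    import random

    def minhash_signature(rows, m):
        """One-pass Min-Hash: 'rows' yields, for each row of the table, the list
        of column indexes (0..m-1) that contain a 1; returns one Min-Hash value
        per column using O(m) memory."""
        INF = float("inf")
        h = [INF] * m                   # current minimum hash value per column
        for cols_with_one in rows:
            v = random.getrandbits(32)  # random hash value assigned to this row
            for c in cols_with_one:
                if v < h[c]:
                    h[c] = v            # keep the smallest row hash seen so far
        return h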
Proposition 1 For any column pair $(c_i, c_j)$, $\Pr[h(c_i) = h(c_j)] = S(c_i, c_j)$.
This is easy to see since two columns will have the same Min-Hash value if and only if, in the
random permutation of rows defined by their hash values, the first row with a 1 in column c i is
also the first row with a 1 in column $c_j$. In other words, $h(c_i) = h(c_j)$ if and only if, in the restriction
of the permutation to the rows in $C_i \cup C_j$, the first row belongs to $C_i \cap C_j$.
In order to be able to determine the degree of similarity between column-pairs, it will be
necessary to determine multiple (say k) independent Min-Hash values for each column. To this
end, in a single pass over the input table we select (in parallel) k independent hash values for each
row, defining k distinct permutations over the rows. Using O(mk) memory, during the single pass
we can also determine the corresponding k Min-Hash values, say $h^1(c_j), \ldots, h^k(c_j)$ for each column
$c_j$, one under each of the k row permutations. In effect, we obtain a matrix $\hat{M}$ with k rows and m columns,
whose entries in a column are the Min-Hash values for that column. The matrix $\hat{M}$
can be viewed as a compact representation of the matrix M. We will show in Theorem 1 below
that the similarity of column-pairs in M is captured by their similarity in $\hat{M}$.
Let $\hat{S}(c_i, c_j)$ be the fraction of Min-Hash values that are identical for $c_i$ and $c_j$, i.e.,
$$\hat{S}(c_i, c_j) = \frac{|\{\,l : h^l(c_i) = h^l(c_j)\,\}|}{k}.$$
In other words, $\hat{S}(c_i, c_j)$ is the fraction of rows of $\hat{M}$ in which the Min-Hash entries for columns
$c_i$ and $c_j$ are identical. We now show that $\hat{S}(c_i, c_j)$ is a good estimator of $S(c_i, c_j)$. Recall that we
set a threshold $s^*$ such that two columns are said to be highly-similar if $S(c_i, c_j) \geq s^*$. Assume
that $s^*$ is lower bounded by some constant c. The following theorem shows that we are unlikely to
get too many false-positives and false-negatives by using $\hat{S}$ to determine similarity of column-pairs
in the original matrix M.
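Before stating the theorem, here is a minimal sketch of the first phase and of the estimator $\hat{S}$ just defined; the streamed row format and the function names are our own illustrative choices, not the authors' implementation.

    import random

    def minhash_matrix(rows, m, k):
        """Build the k x m matrix M-hat of Min-Hash values in a single pass;
        each row of the table gets k independent random hash values."""
        INF = float("inf")
        M_hat = [[INF] * m for _ in range(k)]
        for cols_with_one in rows:
            vs = [random.getrandbits(32) for _ in range(k)]
            for l in range(k):
                row_l = M_hat[l]
                for c in cols_with_one:
                    if vs[l] < row_l[c]:
                        row_l[c] = vs[l]
        return M_hat

    def s_hat(M_hat, i, j):
        """Fraction of the k rows of M-hat on which columns i and j agree."""
        return sum(1 for row in M_hat if row[i] == row[j]) / len(M_hat)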
Theorem 1 Let $k \geq \frac{2}{c\,\delta^2}\ln\frac{1}{\epsilon}$. Then, for any pair of columns $c_i$ and $c_j$,
we have the following two properties.
a) If $S(c_i, c_j) \geq s^*$, then $\hat{S}(c_i, c_j) \geq (1-\delta)s^*$ with probability at least $1 - \epsilon$.
b) If $S(c_i, c_j) \leq (1-2\delta)s^*$, then $\hat{S}(c_i, c_j) < (1-\delta)s^*$ with probability at least $1 - \epsilon$.
We sketch the proof of the first part of the theorem; the proof of the second part is quite similar
and is omitted. Fix any two columns $c_i$ and $c_j$ having similarity $S(c_i, c_j) \geq s^*$. Let $X_l$ be a random
variable that takes on value 1 if $h^l(c_i) = h^l(c_j)$ and 0 otherwise, and let $X = \sum_{l=1}^{k} X_l$.
By Proposition 1, $E[X_l] = S(c_i, c_j) \geq s^*$, so that $E[X] \geq ks^*$. Applying the Chernoff bound [12]
to the random variable X, we obtain that
$$\Pr\left[\,X < (1-\delta)ks^*\,\right] \;\leq\; e^{-ks^*\delta^2/2} \;\leq\; \epsilon$$
for the stated choice of k. To establish the first part of the theorem, simply notice that $\hat{S}(c_i, c_j) = X/k$.
Theorem 1 establishes that for sufficiently large k, if two columns have high similarity (at
least $s^*$) in M then they agree on a correspondingly large fraction of the Min-Hash values in
$\hat{M}$; conversely, if their similarity is low (at most c) in M then they agree on a correspondingly
small fraction of the Min-Hash values in $\hat{M}$. Since $\hat{M}$ can be computed in a single pass over
the data using O(km) space, we obtain the desired implementation of the first phase (signature
computation). We now turn to the task of devising a suitable implementation of the second phase
(candidate generation).
3.1 Candidate Generation from Min-Hash Values
Having computed the signatures in the first phase as discussed in the previous section, we now
wish to generate the candidate column-pairs in the second phase. At this point, we have a $k \times m$
matrix $\hat{M}$ containing k Min-Hash values for each column. Since $k \ll n$, we assume that $\hat{M}$ is much
smaller than the original data and fits in main memory. The goal is to identify all column-pairs
which agree in a large enough fraction (at least $(1-\delta)s^*$) of their Min-Hash values in $\hat{M}$. A
brute-force enumeration will require O(k) time for each column-pair, for a total of $O(km^2)$. We
present two techniques that avoid the quadratic dependence on m and are considerably faster when
(as is typically the case) the average similarity S over all column-pairs is small.
Row-Sorting: For this algorithm, view the rows of $\hat{M}$ as lists of tuples containing a Min-Hash
value and the corresponding column number. We sort each row on the basis of the Min-Hash
values. This groups identical Min-Hash values together into a sequence of "runs." We maintain
for each column an index into the position of its Min-Hash value in each sorted row. To estimate
the similarity of column $c_i$ with all other columns, we use the following algorithm: use m counters
for column $c_i$, where the jth counter stores the number of rows in which the Min-Hash values
of columns $c_i$ and $c_j$ are identical; for each row, index into the run containing the Min-Hash
value for $c_i$, and for each other column represented in this run, increment the corresponding
counter. To avoid $O(m^2)$ counter initializations, we re-use the same O(m) counters when processing
different columns, and remember and re-initialize only counters that were incremented at least once.
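A sketch of the row-sorting idea follows. For brevity it groups the identical Min-Hash values of each row with a dictionary instead of explicit sorting and per-column indexes, which has the same counter-increment cost; it assumes every column has at least one 1, and the names are ours.

    from collections import defaultdict

    def row_sorting_candidates(M_hat, min_agree):
        """For every column pair, count the rows of M-hat on which the pair agrees
        by grouping identical Min-Hash values ('runs'); return candidate pairs."""
        counts = defaultdict(int)            # (i, j), i < j -> number of agreements
        for row in M_hat:
            runs = defaultdict(list)         # Min-Hash value -> columns holding it
            for col, v in enumerate(row):
                runs[v].append(col)
            for cols in runs.values():       # all columns in a run agree on this row
                for a in range(len(cols)):
                    for b in range(a + 1, len(cols)):
                        counts[(cols[a], cols[b])] += 1
        return [pair for pair, c in counts.items() if c >= min_agree]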
We estimate the running time of this algorithm as follows. Sorting the rows requires total time
O(km log m); thereafter, indexes on the columns can be built in time O(km). The remaining time
amounts to the total number of counter increments. When processing a row with column $c_i$, the
number of counter increments is in fact the length of a run. The expected length of a run equals
the sum of similarities $\sum_{j \neq i} S(c_i, c_j)$. Hence, the expected counter-increment cost when processing
$c_i$ is $O(k \sum_{j \neq i} S(c_i, c_j))$, and the expected combined increment cost over all columns is
$O(k \sum_i \sum_{j \neq i} S(c_i, c_j)) = O(kSm^2)$, where S denotes the average pairwise similarity.
Thus, the expected total time required for this algorithm is $O(km \log m + kSm^2)$.
Note that the average similarity S is typically a small fraction, and so the latter term in the running
time is not really quadratic in m as it appears to be.
Hash-Count: The next section introduces the K-Min-Hashing algorithm, where the signature
for each column $c_i$ is a set $Sig_i$ of at most, but not necessarily exactly, k Min-Hash values. The similarity
of a column-pair $(c_i, c_j)$ is estimated by computing the size of $Sig_i \cap Sig_j$; clearly, it suffices
to consider ordered pairs with $j < i$. This task can be accomplished via the following
hash-count algorithm. We associate a bucket with each Min-Hash value. Buckets are indexed using
a hash function defined over the Min-Hash values, and store column-indexes for all columns $c_i$ with
some element of $Sig_i$ hashing into that bucket. We consider the columns $c_1, \ldots, c_m$ in order,
and for column $c_i$ we use $i-1$ counters, of which the jth counter stores $|Sig_i \cap Sig_j|$. For each
Min-Hash value $v \in Sig_i$, we access its hash-bucket and find the indexes of all columns $c_j$ ($j < i$)
which have $v \in Sig_j$. For each such column $c_j$ in the bucket, we increment the corresponding counter.
Finally, we add $c_i$ itself to the bucket.
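The hash-count idea can be sketched as follows for K-Min-Hash signatures (described in the next subsection); the function name is ours, and the bucket table is kept as an ordinary dictionary rather than an explicit hash function over buckets.

    from collections import defaultdict

    def hash_count(signatures):
        """signatures[i] is the set Sig_i of Min-Hash values for column i; returns
        |Sig_i & Sig_j| for all pairs j < i, using one bucket per Min-Hash value."""
        buckets = defaultdict(list)          # Min-Hash value -> columns seen so far
        counts = defaultdict(int)            # (j, i), j < i -> intersection size
        for i, sig in enumerate(signatures):
            for v in sig:
                for j in buckets[v]:         # every earlier column sharing value v
                    counts[(j, i)] += 1
            for v in sig:
                buckets[v].append(i)         # add column i only after counting
        return counts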
Hash-Count can be easily adapted for use with the original Min-Hash scheme, where we instead
want to compute for each pair of columns the number of rows of $\hat{M}$ in which the two columns agree.
To this end, we use a different hash table (and set of buckets) for each row of the matrix $\hat{M}$, and
execute the same process as for K-Min-Hash. The argument used for the row-sorting algorithm
shows that hash-count for Min-Hashing takes $O(kSm^2)$ time. The running time of Hash-Count for
K-Min-Hash amounts to the number of counter increments. The number of increments made to a
counter for the pair $(c_i, c_j)$ is exactly the size of $Sig_i \cap Sig_j$. A simple argument (see Lemma 1) shows that the
expected size $E[|Sig_i \cap Sig_j|]$ is $O(k\,S(c_i, c_j))$.
Thus, the expected total running time of the hash-table scheme is $O(kSm^2)$ in both cases.
3.2 The K-Min-Hashing Algorithm
One disadvantage of the Min-Hashing scheme outlined above is that choosing k independent Min-
Hash values for each column entailed choosing k independent hash values for each row. This has
a negative effect on the efficiency of the signature-computation phase. On the other hand, using
k Min-Hash values per column is essential for reducing the number of false-positives and false-
negatives. We now present a modification called K-Min-Hashing (K-MH) in which we use only a
single hash value for each row, setting the k Min-Hash values for each column to be the hash values
of the first k rows (under the induced row permutation) containing a 1 in that column. (This
approach was also mentioned in [5] but without an analysis.) In other words, for each column we
pick the k smallest hash values for the rows containing a one in that column. If a column c i has
fewer 1s than k, we assign as Min-Hash values all hash values corresponding to rows with 1s in
that column. The resulting set of (at most) k Min-Hash values forms the signature of the column
c i and is denoted by Sig i .
Proposition 2 In the K-Min-Hashing scheme, for any column $c_i$, the signature $Sig_i$ consists of
the hash values for a uniform random sample of $\min(k, |C_i|)$ distinct rows from $C_i$.
We remark that if the number of 1s in each column is significantly larger than k, then the hash
values may be considered independent and the analysis from Min-Hashing applies. The situation
is slightly more complex when the columns are sparse, which is the case of interest to us.
Let $Sig_{i\cup j}$ denote the k smallest elements among the hash values of the rows in $C_i \cup C_j$. We can
view $Sig_{i\cup j}$ as the signature of the "column" that would correspond to $C_i \cup C_j$. Observe that $Sig_{i\cup j}$
can be obtained (in O(k) time) from $Sig_i$ and $Sig_j$, since it is in fact the set of the smallest k elements
from $Sig_i \cup Sig_j$. Since $Sig_{i\cup j}$ corresponds to a set of rows selected uniformly at random from all
elements of $C_i \cup C_j$, the expected number of elements of $Sig_{i\cup j}$ that belong to the subset $C_i \cap C_j$ is
exactly $|Sig_{i\cup j}| \times |C_i \cap C_j| / |C_i \cup C_j|$; moreover, the elements of $Sig_{i\cup j}$ that belong to
$C_i \cap C_j$ are precisely those in $Sig_{i\cup j} \cap Sig_i \cap Sig_j$, since the signatures are just the smallest k elements.
Hence, we obtain the following theorem.
Theorem 2 An unbiased estimator of the similarity $S(c_i, c_j)$ is given by the expression
$$\frac{|Sig_{i\cup j} \cap Sig_i \cap Sig_j|}{|Sig_{i\cup j}|}.$$
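A direct, hedged transcription of this estimator, assuming the signatures are stored as Python sets of hash values (the function name is ours):

    def unbiased_similarity(sig_i, sig_j, k):
        """Theorem 2 estimator: fraction of the k smallest values of Sig_i | Sig_j
        (i.e., Sig_{i u j}) that also belong to Sig_i & Sig_j."""
        sig_union = sorted(sig_i | sig_j)[:k]   # Sig_{i u j}
        if not sig_union:
            return 0.0
        inter = sig_i & sig_j
        return sum(1 for v in sig_union if v in inter) / len(sig_union)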
Consider the computational cost of this algorithm. While scanning the data, we generate one
hash value per row, and for each column we maintain the minimum k hash values from those
corresponding to rows that contain 1 in that column. We maintain the k minimum hash values for
each column in a simple data structure that allows us to insert a new value (smaller than the current
maximum) and delete the current maximum in $O(\log k)$ time. The data structure also makes the
maximum element amongst the k current Min-Hash values of each column readily available. Hence,
the computation for each row is constant time for each 1 entry and additional log k time for each
column with 1 entry where the hash value of the row was amongst the k smallest seen so far. A
simple probabilistic argument shows that the expected number of rows on which the k-Min-Hash
list of a column $c_i$ gets updated is $O(k \log |C_i|) = O(k \log n)$. It follows that the total computation
cost is a single scan of the data plus $O(|M| + mk \log n \log k)$ processing, where $|M|$ is the number of 1s in the
matrix M.
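The signature-computation pass just described can be sketched with a bounded heap per column; this is an illustrative sketch under the same streamed-input assumption as earlier, not the authors' exact implementation.

    import heapq
    import random

    def k_minhash_signatures(rows, m, k):
        """One hash value per row; each column keeps the k smallest values seen
        among its 1-rows, via a max-heap (negated values) of size at most k."""
        heaps = [[] for _ in range(m)]
        for cols_with_one in rows:
            v = random.getrandbits(32)
            for c in cols_with_one:
                h = heaps[c]
                if len(h) < k:
                    heapq.heappush(h, -v)
                elif v < -h[0]:              # smaller than the current maximum
                    heapq.heapreplace(h, -v)
        return [set(-x for x in h) for h in heaps]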
In the second phase, while generating candidates, we need to compute the sets $Sig_{i\cup j}$ for each
column-pair using a merge join (O(k) operations), and while we are merging we can also find the
elements that belong to $Sig_i \cap Sig_j$. Hence, the total time for this phase is $O(km^2)$. The quadratic
dependence on the number of columns is prohibitive and is caused by the need to compute $Sig_{i\cup j}$ for
each column-pair. Instead, we first apply a considerably more efficient biased approximate estimator
for the similarity. The biased estimator is computed for all pairs of columns using Hash-Count in
$O(kSm^2)$ time. Next we perform a main-memory candidate pruning phase, where the unbiased
estimator of Theorem 2 is explicitly computed for all pairs of columns where the approximate
biased estimator exceeds a threshold.
The choice of threshold for the biased estimator is guided by Lemma 1; alternatively, the biased
estimator and the choice of threshold can be derived from the following analysis. Let $C_{ij} = C_i \cap C_j$. As before,
for each column $c_i$, we choose a set $Sig_i$ of k Min-Hash values. Let $Sig_{ij}$ denote the elements of
$Sig_i$ that are hash values of rows in $C_{ij}$, and define $Sig_{ji}$ analogously.
Then, the expected sizes of $Sig_{ij}$ and $Sig_{ji}$ are given by $k|C_{ij}|/|C_i|$ and
$k|C_{ij}|/|C_j|$, respectively. Hence we can compute the expected value of $|Sig_i \cap Sig_j|$, which equals the size of
the smaller of $Sig_{ij}$ and $Sig_{ji}$. We assume that $\Pr[|Sig_{ij}| > |Sig_{ji}|]$ is close to 0 or 1,
i.e., which of the two is smaller is essentially determined by the column densities.
Then, the above expectation becomes
$$E[|Sig_i \cap Sig_j|] \;\approx\; \frac{k\,|C_{ij}|}{\max(|C_i|, |C_j|)}.$$
Thus, we obtain the estimator $|Sig_i \cap Sig_j| \cdot \max(|C_i|, |C_j|)/k$ for $|C_{ij}|$. We use this estimate
of $|C_{ij}|$ to estimate the similarity, since we know $|C_i|$ and $|C_j|$. We compute $|Sig_i \cap Sig_j|$
using the hash table technique that we have described earlier in Section 3.1. The time required to
compute the hash values is $O(|M| + mk \log n \log k)$ as described earlier, and the time for computing
the biased estimator for all pairs is $O(kSm^2)$.
4 Locality-Sensitive Hashing Schemes
In this section we show how to obtain a significant improvement in the running time with respect
to the previous algorithms by resorting to the Locality-Sensitive Hashing (LSH) technique introduced
by Indyk and Motwani [11] in designing main-memory algorithms for nearest neighbor search in
high-dimensional Euclidean spaces; it has been subsequently improved and tested in [7]. We apply
the LSH framework to the Min-Hash functions described in the previous section, obtaining an algorithm
for similar column-pairs. This problem differs from nearest neighbor search in that the data is
known in advance. We exploit this property by showing how to optimize the running time of the
algorithm given constraints on the quality of the output. Our optimization is input-sensitive, i.e.,
takes into account the characteristics of the input data set.
The key idea in LSH is to hash columns so as to ensure that for each hash function, the
probability of collision is much higher for similar columns than for dissimilar ones. Subsequently,
the hash table is scanned and column-pairs hashed to the same bucket are reported as similar.
Since the process is probabilistic, both false positives and false negatives can occur. In order to
reduce the former, LSH amplifies the difference in collision probabilities for similar and dissimilar
pairs. In order to reduce false negatives, the process is repeated a few times, and the union of pairs
found during all iterations are reported. The fraction of false positives and false negatives can be
analytically controlled using the parameters of the algorithm.
Although not the main focus of this paper, we mention that the LSH algorithm can be adapted
to the on-line framework of [10]. In particular, it follows from our analysis that each iteration of our
algorithm reduces the number of false negatives by a fixed factor; it can also add new false positives,
but they can be removed at a small additional cost. Thus, the user can monitor the progress of
the algorithm and interrupt the process at any time if satisfied with the results produced so far.
Moreover, the higher the similarity, the earlier the pair is likely to be discovered. Therefore, the
user can terminate the process when the output produced appears to be less and less interesting.
4.1 The Min-LSH Scheme
We now present the Min-LSH (M-LSH) scheme for finding similar column-pairs from the matrix $\hat{M}$
of Min-Hash values. The M-LSH algorithm splits the matrix $\hat{M}$ into l sub-matrices of dimension
$r \times m$. Recall that $\hat{M}$ has dimension $k \times m$, and here we assume that $k = r \cdot l$. For each of
the l sub-matrices, we repeat the following. Each column, represented by the r Min-Hash values
the l sub-matrices, we repeat the following. Each column, represented by the r Min-Hash values
in the current sub-matrix, is hashed into a table using as hashing key the concatenation of all r
values. If two columns are similar, there is a high probability that they agree in all r Min-Hash
values and so they hash into the same bucket. At the end of the phase we scan the hash table and
produce pairs of columns that have been hashed to the same bucket. To amplify the probability
that similar columns will hash to the same bucket, we repeat the process l times. Let $P_{r,l}(c_i, c_j)$ denote
the probability that columns $c_i$ and $c_j$ will hash to the same bucket at least once; since the value
of $P_{r,l}(c_i, c_j)$ depends only upon $s = S(c_i, c_j)$, we simplify notation by writing $P(s)$.
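The banding step can be sketched as follows, assuming $k = r \cdot l$ and that $\hat{M}$ is held in memory as a list of k rows; the function name and data layout are our own illustrative choices.

    from collections import defaultdict
    from itertools import combinations

    def mlsh_candidates(M_hat, r, l):
        """Split the k x m matrix of Min-Hash values into l bands of r rows each;
        columns whose r values coincide in some band become candidate pairs."""
        m = len(M_hat[0])
        candidates = set()
        for b in range(l):
            band = M_hat[b * r:(b + 1) * r]
            buckets = defaultdict(list)              # band key -> columns in bucket
            for col in range(m):
                key = tuple(band[row][col] for row in range(r))
                buckets[key].append(col)
            for cols in buckets.values():
                candidates.update(combinations(cols, 2))
        return candidates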
Lemma Assume that columns $c_i$ and $c_j$ have similarity s, and also let $s^*$ be the similarity
threshold. For any $0 < \epsilon, \delta < 1$, we can choose the parameters r and l such that:
• for any $s \geq s^*$, $P(s) \geq 1 - \epsilon$;
• for any $s \leq (1-\delta)s^*$, $P(s) \leq \epsilon$.
Proof: By Proposition 1, the probability that columns $c_i$, $c_j$ agree on one Min-Hash value is
exactly s, and the probability that they agree in a group of r values is $s^r$. If we repeat the hashing
process l times, the probability that they will hash at least once to the same bucket is
$P(s) = 1 - (1 - s^r)^l$. The lemma follows from the properties of the function P.
The lemma states that for large values of r and l, the function P approximates the unit step
function translated to the point $s^*$, which can be used to filter out all and only the pairs
with similarity at most $s^*$. On the other hand, the time/space requirements of the algorithm are
proportional to $r \cdot l$, so increasing the values of r and l is subject to a quality-efficiency
trade-off. In practice, if we are willing to allow a number of false negatives ($n^-$) and false positives
($n^+$), we can compute optimal values for r and l that achieve this quality.
Specifically, assume that we are given (an estimate of) the similarity distribution of the data,
defined by letting $d(s_i)$ be the number of pairs having similarity $s_i$. This is not an unreasonable
assumption, since we can approximate this distribution by sampling a small fraction of the columns and
estimating all pairwise similarities. The expected number of false negatives would be
$\sum_{s_i \geq s^*} d(s_i)\,(1 - P(s_i))$ and the expected number of false positives would be
$\sum_{s_i < s^*} d(s_i)\,P(s_i)$. Therefore, the
problem of estimating optimal parameters turns into the following minimization problem: minimize $r \cdot l$, subject to
$$\sum_{s_i \geq s^*} d(s_i)\,(1 - P(s_i)) \leq n^- \quad\text{and}\quad \sum_{s_i < s^*} d(s_i)\,P(s_i) \leq n^+.$$
This is an easy problem, since we have only two parameters to optimize and their feasible values
are small integers. Also, the histogram $d(\cdot)$ is typically quantized into 10-20 bins. One approach
is to solve the minimization problem by iterating over small values of r, finding a lower bound on
the value of l from the first inequality, and then performing binary search until the second
inequality is satisfied. In most experiments, the optimal value of r was between 5 and 20.
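A brute-force version of this parameter search (simpler than the iterate-and-binary-search approach described above, but respecting the same constraints) might look like the following; the bounds on r and l and the function name are our own choices.

    def choose_r_l(hist, s_star, n_minus, n_plus, max_r=30, max_l=200):
        """hist: list of (similarity s_i, number of pairs d(s_i)).  Return (r, l)
        minimizing r*l subject to the expected false-negative and false-positive
        counts staying below n_minus and n_plus, or None if infeasible."""
        best = None
        for r in range(1, max_r + 1):
            for l in range(1, max_l + 1):
                P = lambda s: 1.0 - (1.0 - s ** r) ** l
                fn = sum(d * (1.0 - P(s)) for s, d in hist if s >= s_star)
                fp = sum(d * P(s) for s, d in hist if s < s_star)
                if fn <= n_minus and fp <= n_plus and (best is None or r * l < best[0]):
                    best = (r * l, r, l)
        return None if best is None else (best[1], best[2])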
4.2 The Hamming-LSH Scheme
We now propose another scheme, Hamming-LSH (H-LSH), for finding highly-similar column-pairs.
The idea is to reduce the problem to searching for column-pairs having small Hamming distance.
In order to solve the latter problem we employ techniques similar to those used in [7] to solve
the nearest neighbor problem. We start by establishing the correspondence between the similarity
and the Hamming distance (the proof is easy): $d_H(c_i, c_j) = |C_i| + |C_j| - 2|C_i \cap C_j|$, and hence
$S(c_i, c_j) = \frac{|C_i| + |C_j| - d_H(c_i, c_j)}{|C_i| + |C_j| + d_H(c_i, c_j)}$.
It follows that when we consider pairs such that the sum $|C_i| + |C_j|$ is roughly fixed,
a high value of $S(c_i, c_j)$ corresponds to small values of $d_H(c_i, c_j)$ and vice versa. Hence, we
partition columns into groups of similar density and for each group we find pairs of columns that
have small Hamming distance. First, we briefly describe how to search for pairs of columns with
small Hamming distance. This scheme is similar to the technique from [7] and can be analyzed
using the tools developed there. This scheme finds highly-similar columns assuming that the
density of all columns is roughly the same. This is done by partitioning the rows of the database into p
subsets. For each partition, we process as in the previous algorithm. We declare a pair of columns
a candidate if they agree on any subset. Thus this scheme is exactly analogous to the earlier scheme,
except that we are dealing with the actual data instead of Min-Hash values.
However, there are two problems with this scheme: if the matrix is sparse, most of the subsets
contain just zeros, and the columns do not have similar densities as assumed.
The following algorithm (which we call H-LSH) improves on the above basic algorithm.
The basic idea is as follows. We perform the computation on a sequence of matrices with increasing
densities; we denote them by $M_0 = M, M_1, M_2, \ldots$. The matrix $M_{i+1}$ is obtained from the matrix $M_i$
by randomly pairing all rows of $M_i$, and placing in $M_{i+1}$ the "OR" of each pair. 1 One can see
that for each i, $M_{i+1}$ contains half the rows of $M_i$ (for illustration purposes we assume that the
initial number of rows is a power of 2). The algorithm is applied to all matrices in the set. A pair
of columns can become a candidate only on a matrix $M_i$ in which they are both sufficiently dense
and both their densities belong to a certain range. False negatives are controlled by repeating each
1 Notice that the "OR" operation gives results similar to hashing each column into a sequence of increasingly smaller
hash tables; this provides an alternative view of our algorithm.
Figure 2: The first figure shows the similarity distribution of the Sun data. The second shows again
the same distribution, but focuses on the region of similarities that we are interested in.
sample l times, and taking the union of the candidate sets across all l runs. Hence, kr rows are
extracted from each compressed matrix. Note that this operation may increase false positives.
We now present the algorithm that was implemented. Experiments show that this scheme is
better than the Min-Hashing algorithms in terms of running time, but the number of false positives
is much larger. Moreover, the number of false positives increases rapidly if we try to reduce the
number of false negatives. In the case of Min-Hashing algorithms, if we decreased the number of
false negatives by increasing k, the number of false positives would also decrease.
The Algorithm:
1. Construct the sequence of matrices $M_0 = M, M_1, M_2, \ldots$ as described above.
2. For each $i \geq 0$, select k sets of r sample rows from $M_i$.
3. A column pair is a candidate if there exists an i such that (i) the column pair has density
in $(1/t, (t-1)/t)$ in $M_i$, and (ii) it has identical hash values (essentially, identical r-bit
representations) in at least one of the k runs.
Note that t is a parameter that indicates the range of density for candidate pairs; its value is kept
fixed in our experiments.
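The density-doubling step used in step 1 can be sketched as follows; this assumes $M_i$ is held in memory as a list of 0/1 rows with an even number of rows, and the function name is ours.

    import random

    def or_fold(M):
        """Build M_{i+1} from M_i: randomly pair the rows of M_i and place the
        component-wise OR of each pair in the new matrix."""
        idx = list(range(len(M)))
        random.shuffle(idx)
        return [[x | y for x, y in zip(M[a], M[b])]
                for a, b in zip(idx[::2], idx[1::2])]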
5 Experiments
We have conducted experiments to evaluate the performance of the different algorithms. In this
section we report the results of the different experiments. We use two data sets, namely synthetic
data and real data.
Synthetic Data: The data contains $10^4$ columns, and the number of rows varies upward from $10^4$.
The column densities vary from 1% to 5%, and for every 100 columns we have a pair of similar
columns. We have 20 pairs of similar columns whose similarities fall in the ranges (85, 95), (75, 85),
(65, 75), (55, 65), and (45, 55).
Real Data: The real data set consists of the log of HTTP requests made over a period of 9
days to the Sun Microsystems Web server (www.sun.com). The columns in this case are the URLs
Support     Number of columns        A-priori   MH      K-MH    H-LSH   M-LSH
threshold   after support pruning    (sec)      (sec)   (sec)   (sec)   (sec)
0.1%        15559                    -          71.4    87.6    15.6    10.7
0.15%       11568                    96.05      44.8    52.0     6.7     9.7
0.2%         9518                    79.94      25.8    36.0     6.0     5.1

Figure 3: Running times for the news articles data set
and the rows represent distinct client IP addresses that have recently accessed the server. An entry
is set to 1 if there has been at least one hit for that URL from that particular client IP. The data
set has about thirteen thousand columns and more than 0.2 million rows. Most of the columns are
sparse and have density less than 0.01%. The histogram in Figure 2 shows the number of column
pairs for different values of similarity. Typical examples of similar columns that we extracted from
this data were URLs corresponding to gif images or Java applets which are loaded automatically
when a client IP accesses a parent URL.
To compare our algorithms with existing techniques, we implemented and executed the a-priori
algorithm [1, 2]. Of course, a-priori is not designed for this setting of low support, but it is the
only existing technique and gives us a benchmark against which we can compare the improvements
afforded by our algorithms. The comparison was done for the news articles data that we have
mentioned in Section 1.
We conducted experiments on the news article data and our results are summarized in Figure 3.
The a-priori algorithm cannot be run on the original data since it runs out of memory. Therefore
we performed support pruning to remove columns that have very few ones in them. It is evident
that our techniques give nearly an order of magnitude improvement in running time; for support
threshold below 0.1%, a-priori runs out of memory on our systems and does a lot of thrashing.
Note that although our algorithms are probabilistic they report the same set of pairs as reported
by a-priori.
5.1 Results
We implemented the four algorithms described in the previous section, namely MH, K-MH, H-LSH,
and M-LSH. All algorithms were compared in terms of the running time and the quality of the
output. Due to lack of space we report experiments and give graphs only for the Sun data, which
is in any case more interesting; we have also performed tests on the synthetic data, and all
algorithms behave similarly.
The quality of the output is measured in terms of false positives and false negatives generated
by each algorithm. To do that, we plot a curve that shows the ratio of the number of pairs found
by the algorithm over the real number of pairs (computed once off-line) for a given similarity range
(e.g., Figure 7). The result is typically an "S"-shaped curve that gives a good visual picture of
the false positives and negatives of the algorithm. Intuitively, the area below the curve and left
to a given similarity cutoff corresponds to the number of false positives, while the area above the
curve and right to the cutoff corresponds to the number of false negatives.
We now describe the behavior of each algorithm as its parameters are varied.
The MH and K-MH algorithms have two parameters: $s^*$, the user-specified similarity cutoff, and
k, the number of Min-Hash values extracted to represent the signature of each column.
Figure 4: Quality of output and total running time for the MH algorithm as k and $s^*$ are varied
Figures 4(a) and 5(a) plot "S"-curves for different values of k for the MH and K-MH algorithms. As
the value of k increases, the curve gets sharper, indicating better quality. In Figures 4(c) and 5(c) we
fixed k = 500 and vary the value $s^*$ of the similarity cutoff. As expected, the curves shift to the
right as the cutoff value increases. Figures 4(d) and 5(d) show that, for a given value of k, the total
running time decreases marginally as the cutoff increases, since we generate fewer candidates. Figure 4(b) shows that the
total running time for the MH algorithm increases linearly with k. However, this is not the case for
the K-MH algorithm, as depicted by Figure 5(b). The sub-linear increase of the running time is due to
the sparsity of the data. More specifically, the number of hash values extracted from each column
is upper bounded by the number of ones in that column, and therefore the number of hash values extracted
does not increase linearly with k.
We do a similar exploration of the parameter space for the M-LSH and H-LSH algorithms. The
parameters of these algorithms are r and l. Figures 7(a) and 6(a) illustrate the fact that, as r increases,
the probability that two columns are mapped to the same bucket decreases, and therefore the number of
false positives decreases; but, as a trade-off, the number of false negatives increases. On
the other hand, Figures 7(c) and 6(c) show that an increase in l corresponds to an increase in the
collision probability, and therefore the number of false negatives decreases but the number of false
positives increases. Figures 7(d) and 6(d) show that the total running time increases with l, since
Figure 5: Quality of output and total running time for the K-MH algorithm as k and $s^*$ are varied
we hash each column more times, and this also results in an increase in the number of candidates. In
our implementation of M-LSH, the extraction of Min-Hash values dominates the total computation
time, which increases linearly with the value of r. This is shown in Figure 7(b). On the other hand,
in the implementation of H-LSH, checking for candidates dominates the running time, and as a
result the total running time decreases as r increases, since fewer candidates are produced. This is
shown in Figure 6(b).
We now compare the different algorithms that we have implemented. When comparing the
time requirements of the algorithms, we compare the CPU time for each algorithm, since the time
spent on I/O is the same for all the algorithms. It is important to note that, for all the algorithms,
the number of false negatives is the critical quantity that needs to be kept under control. As long
as the number of false positives is not too large (i.e., all candidates can fit in main memory), we
can always eliminate them in the pruning phase. To compare the algorithms we
fix the percentage of false negatives that can be tolerated. For each algorithm we pick the set of
parameters for which the number of false negatives is within this threshold and the total running
time is minimum. We then plot the total running time and the number of false positives against
the false negative threshold.
Consider Figures 8(a) and 8(c). These figures show the total running time against the false
Figure 6: Quality of output and total running time for the H-LSH algorithm as r and l are varied
negative threshold. We can see that the H-LSH algorithm requires a lot of time if the false-negative
threshold is low, while it does better if the limit is high. In general, the M-LSH and H-LSH algorithms
do better than the MH and K-MH algorithms. However, it should be noted that the H-LSH algorithm
cannot be used if we are interested in similarity cutoffs that are low. The graphs show that the
best performance is achieved by the M-LSH algorithm.
Figure 8 gives the number of false positives generated by the algorithms against the tolerance
limit. The false positives are plotted on a logarithmic scale. In the case of the H-LSH and M-LSH algorithms,
the number of false positives decreases if we are ready to tolerate more false negatives, since in that
case we hash every column fewer times. However, the false-positive graph for K-MH and MH is
not monotonic. There exists a tradeoff between the time spent in the candidate generation stage and
the pruning stage. To maintain the number of false negatives below the given threshold, we
could either increase k and spend more time in the candidate generation stage, or else decrease
the similarity cutoff $s^*$ and spend more time in the pruning stage, as we get more false positives.
Hence the points on the graph correspond to different values of the similarity cutoff $s^*$ with which the
algorithms are run to get candidates with similarity above a certain threshold. As a result we do
not observe a monotonic behavior in the case of these algorithms.
Figure 7: Quality of output and total running time for the M-LSH algorithm as r and l are varied
We would like to note that the results provided should be analyzed with caution. The
reader should note that whenever we refer to time we refer only to the CPU time, and we expect
the I/O time to dominate in the signature generation phase and the pruning phase. If we are aware of
the nature of the data, we can be smart in our choice of algorithm. For instance, the K-MH
algorithm should be used instead of MH for sparse data sets, since it takes advantage of sparsity.
6 Extensions and Further Work
We briefly discuss some extensions of the results presented here as well as directions for future work.
First, note that all the results presented here were for the discovery of bi-directional similarity
measures. However, the Min-Hash technique can be extended to the discovery of column-pairs
which form a high-confidence association rule of the type $c_i \Rightarrow c_j$, but without any
support requirements. The basic idea is to generate a set of Min-Hash values for each column, and
to determine whether the fraction of these values that are identical for $c_i$ and $c_j$ is proportional to
the ratio of their densities, $d_i / d_j$. The analytical and the experimental results are qualitatively the
same as for similar column-pairs.
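As a rough illustration of this idea (our own sketch, not the exact procedure used in the text): given the length-k Min-Hash vectors of two columns and their densities, one can compare the observed agreement fraction with the density ratio expected when $C_i$ is (nearly) contained in $C_j$.

    def implication_evidence(sig_i, sig_j, d_i, d_j):
        """sig_i, sig_j: length-k lists of Min-Hash values (one per permutation).
        Returns the observed agreement fraction and the ratio d_i/d_j that it
        should approach when c_i implies c_j (C_i nearly a subset of C_j)."""
        agree = sum(1 for a, b in zip(sig_i, sig_j) if a == b) / len(sig_i)
        return agree, d_i / d_j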
Figure 8: Comparison of the different algorithms in terms of total running time and number of false
positives for different false-negative thresholds.
We can also use our Min-Hashing scheme to determine more complex relationships, e.g., that $c_i$ is
highly-similar to $c_j \vee c_{j'}$, since the hash values for the induced column $c_j \vee c_{j'}$ can be easily computed
by taking the component-wise minimum of the hash value signatures for $c_j$ and $c_{j'}$. Extending to
$c_j \wedge c_{j'}$ is more difficult. It works as follows. First, observe that "$c_i$ implies $c_j \wedge c_{j'}$" means that "$c_i$
implies $c_j$" and "$c_i$ implies $c_{j'}$". The latter two implications can be generated as above. Now, we can
conclude that "$c_i$ is highly-similar to $c_j \wedge c_{j'}$" if (and only if) the cardinality of $c_i$ is roughly that of $c_j \wedge c_{j'}$. This
presents problems when the cardinality of c i is really small, but is not so difficult otherwise. The
case of small c i may not be very interesting anyway, since it is difficult to associate any statistical
significance to the similarity in that case. It is also possible to define "anti-correlation," or mutual
exclusion between a pair of columns. However, for statistical validity, this would require imposing
a support requirement, since extremely sparse columns are likely to be mutually exclusive by sheer
chance. It is interesting to note that our hashing techniques can be extended to deal with this
situation, unlike a-priori which will not be effective even with support requirements. Extensions
to more than three columns and complex boolean expressions are possible but will suffer from an
exponential overhead in the number of columns.
References
Mining Association Rules Between Sets of Items in Large Databases.
Fast Algorithms for Mining Association Rules.
Dynamic itemset counting and implication rules for market basket data.
On the resemblance and containment of documents.
Pattern Classification and Scene Analysis.
Similarity Search in High Dimensions via Hashing.
Using collaborative filtering to weave an information tapestry.
Online Aggregation.
Approximate Nearest Neighbor: Towards Removing the Curse of Dimensionality.
Randomized Algorithms.
Building a Scalable and Accurate Copy Detection Mechanism.
Beyond Market Baskets: Generalizing Association Rules to Dependence Rules.
Scalable Techniques for Mining Causal Structures.
CACM Special Issue on Recommender Systems.
628118 | Simple Strategies to Encode Tree Automata in Sigmoid Recursive Neural Networks. | AbstractRecently, a number of authors have explored the use of recursive neural nets (RNN) for the adaptive processing of trees or tree-like structures. One of the most important language-theoretical formalizations of the processing of tree-structured data is that of deterministic finite-state tree automata (DFSTA). may easily be realized as RNN using discrete-state units, such as the threshold linear unit. A recent result by Sima (Neural Network World7 (1997), pp. 679686) shows that any threshold linear unit operating on binary inputs can be implemented in an analog unit using a continuous activation function and bounded real inputs. The constructive proof finds a scaling factor for the weights and reestimates the bias accordingly. In this paper, we explore the application of this result to simulate DFSTA in sigmoid RNN (that is, analog RNN using monotonically growing activation functions) and also present an alternative scheme for one-hot encoding of the input that yields smaller weight values and, therefore, works at a lower saturation level. | Introduction
During the last decade, a number of authors have explored the use of analog
recursive neural nets (RNN) for the adaptive processing of data laid out as
trees or tree-like structures such as directed acyclic graphs. In this arena,
Frasconi, Gori and Sperduti [5] have recently established a rather general
formulation of the adaptive processing of structured data, which focuses on
directed ordered acyclic graphs (which includes trees); Sperduti and Starita
[18] have studied the classication of structures (directed ordered graphs,
including cyclic graphs) and Sperduti [18] has studied the computational
power of recursive neural nets as structure processors.
One of the most important language-theoretical formalizations of the processing
of tree-structured data is that of deterministic nite-state tree automata
(DFSTA), also called deterministic frontier-to-root or ascending tree
automata[19, 7]. may easily be realized as RNN using discrete-state
units such as the threshold linear unit (TLU). Sperduti, in fact, [17] has recently
shown that Elman-style [3] RNN using TLU may simulate DFSTA,
and provides an intuitive explanation (similar to that expressed by Kremer
[10] for the special case of deterministic nite automata) why this should
also work for sigmoid networks: incrementing the gain of the sigmoid function
should lead to an arbitrarily precise simulation of a step function. We
are, however, unaware of any attempt to establish a nite value of this gain
such that exact simulation of a may indeed be performed by an
analog RNN.
A recent result by
Sma [16] shows that any TLU operating on binary
inputs can be simulated by an analog unit using a continuous activation
function and bounded real inputs. A TLU is a neuron that computes its
output by applying a threshold or step activation function
to a biased linear combination of its binary inputs.
The corresponding analog neuron works with any activation function g(x)
having two dierent nite limits a and b when x ! 1 and for given input
and output tolerances. The constructive proof nds a scaling factor for the
weights |basically, a value for the gain of the analog activation function|
and uses the same scaling factor on a shifted value of the bias. In this
paper, we dene three possible ways of encoding DFSTA in discrete-state
RNN using TLU and then explore the application of Sma's result to turn
the discrete-state RNN into a sigmoid RNN simulating the original
(with sigmoid meaning an analog activation function that is monotonically
growing). In addition, we present an alternative scheme for analog simulation
that yields smaller weight values than
Sma's scheme for both discrete-state
cases and therefore works at a lower saturation level; our goal is to nd
the smallest possible scaling factor guaranteeing correct behavior. This last
approach, which assumes a one-hot encoding of inputs, is a generalization
of the approach used in [2] for the stable encoding of a family of nite-
state machines (FSM) in a variety of sigmoid discrete-time recurrent neural
networks and similar in spirit to previous work by Omlin and Giles [13, 14] for
deterministic nite automata (a class of FSM) and a particular discrete-time
recurrent neural network (DTRNN) architecture (the second-order DTRNN
used by Giles et al. [6]).
In the following section, tree automata and recursive networks are intro-
duced. Section 3 describes three dierent schemes to encode recursive neural
networks in discrete-state RNN using TLU. The main result by
Sma[16] is
presented in Section 4 together with a similar construction for the case of
exclusive (also called one-hot) encoding of the input. Section 5 describes the
conversion of discrete-state RNN into and their sigmoid counterparts and the
dierent schemes are evaluated by comparing the magnitude of the resulting
weight values. Finally, we present our conclusions in the last section.
Tree automata and recursive neural net-work
Before we explore how neural networks can simulate tree automata we need
to specify the notation for trees and describe the architecture of recursive
neural networks.
2.1 Trees and nite-state machines
We will denote with a ranked alphabet, that is, a nite set of symbols
with an associated function giving the rank
of the symbol. 1 The subset of symbols in having rank m is denoted with
. The set of -trees, T , is dened as the set of strings (made of symbols
in augmented with the parenthesis and the comma) representing ordered
labeled trees or, recursively,
1. 0 T (any symbol of rank 0 is a single-node tree in T ).
2.
having a root node with a label of f rank m and m children
which are valid trees of T belongs to T ).
1 The rank may be dened more generally as a relation r N; both formulations
are equivalent if symbols having more than one possible rank are split.
A deterministic nite-state tree automaton (DFSTA) is a ve-tuple
is the nite set of states,
is the alphabet of labels, ranked by function r, F Q is the
subset of accepting states and is a nite collection of
transition functions of the form -
the maximum rank or valence of the DFSTA.
For all trees t 2 T , the result -(t) 2 Q of the operation of DFSTA A on
a tree t 2 T is dened as
undened otherwise
(2)
In other words, the state -(t) associated to a given tree t depends on the label
of the root node (f) and also on the states that the NDFSTA associates to
its children (-(t 1
By convention, undened transitions lead to unaccepted trees. That is,
M is the maximum number of children for any node of any tree in L(A).
As usual, the language L(A) recognized by a DFSTA A is the subset of
T dened as
One may generalize this denition so that the DFSTA produces an output
label from an
at each node visited, so that it acts
like a (structure-preserving) nite-state tree transducer; two generalizations
are possible, which correspond to the classes of nite-state string transducers
known as Mealy and Moore machines[9, 15]:
Mealy tree transducers are obtained by replacing the subset of accepting
states F in the denition of a DFSTA by a collection of output functions
g, one for each possible rank,
Moore tree transducers are obtained by replacing F by a single output
function whose only argument is the new state:
Conversely, a DFSTA can be regarded as a particular case of Mealy or Moore
machine operating on trees whose output functions return only two values
2.2 Neural architectures
Here we dene two recursive neural architectures that are similar to that
used in related work as that of Frasconi, Gori and Sperduti [5], Sperduti
[17] and Sperduti and Starita [18]. We nd it convenient to talk about
Mealy and Moore neural networks to dene the way in which these networks
compute their output, using the analogy with the corresponding nite-state
tree transducers. The rst architecture is a high-order Mealy RNN and the
second one is a rst-order Moore RNN 2 .
2.2.1 A high-order Mealy recursive neural network
A high-order Mealy recursive neural network consists of two sets of single-layer
neural networks, the rst one to compute the next state (playing the role
of the collection of transition functions in a nite-state tree transducer)
and the second one to compute the output (playing the role of the collection
of output functions in a Mealy nite-state tree transducer).
The next-state function is realized as a collection of M
single-layer networks, one for each possible rank having nX
neurons and m+1 input ports: m for the input of subtree state vectors, each
2 The remaining two combinations are high-order Moore RNN (which may easily be
shown to have the same computational power as their Mealy counterparts) and rst-order
Mealy machines (which need an extra layer to compute arbitrary output functions, see
of dimensionality nX , and one for the input of node labels, represented by a
vector of dimensionality n U .
The node label input port takes input vectors equal in dimensionality to
the number of input symbols, that is n In particular, if is a node
in the tree with label l() and u[] is the input vector associated with this
node, the component u k [] is equal to 1 if the input symbol at node is k
and 0 for all other input symbols (one-hot or exclusive encoding).
For a node with label the next state
x[] is computed by the corresponding m+ 1-th order single-layer neural net
as follows:
i represents the bias for the network of rank m and
If is a leaf, i.e., l() 2 0 the expression above for the component x i []
reduces to
ik
that is, there is a set of jj weights of type w 0
nxk ) which play
the role of the initial state in recurrent networks [4].
The output function is realized as a collection of M
networks having n Y units and the same input structure. The output
function for a node of rank m is evaluated as
2.2.2 A rst-order Moore recursive neural network
The rst-order Moore recursive neural network has a collection of M
next-state functions (one for each rank m) of the form
ik
having the same structure of input ports as its high-order counterpart, and
a single output function of the form
taking nX inputs and producing n Y outputs.
3 Encoding tree automata in discrete-state
recursive neural networks
We will present three dierent ways to encode nite-state recursive transducers
in discrete-state RNN using TLU as activation functions. The rst
two use the discrete-state version of the high-order Mealy RNN described
in Section 2.2.1 and the third one uses the discrete-state version of rst-
order Moore RNN described in Section 2.2.2. The rst two encodings are
straightforward; the third one is explained in more detail.
All of the encodings are based on an exclusive or one-hot encoding of the
states of the nite-state transducers (n the RNN is said to be in
state i when component x i of the nX -dimensional state vector x takes a high
value all other components take a low value In
addition, exclusive encoding of inputs (n outputs (n
used.
Each one of these discrete-state RNN encodings will be converted in Section
5 into sigmoid RNN encodings by using each of the two strategies described
in Section 4.
3.1 A high-order Mealy encoding using biases
Assume that we have a Mealy nite-state tree transducer
Then, a discrete-state high-order Mealy RNN with weights
weights
and bias V m
behaves as a stable simulator for the
nite-state tree transducer (note that uppercase letters are used to designate
weights in discrete-state neural nets). This would also be the case if biases
were set to any value in (0; 1); the value 1=2 happens to be the best
value for the conversion into a sigmoid RNN.
3.2 A high-order Mealy encoding using no biases
A second possible encoding (the tree-transducing counterpart of a string-
transducer encoding described in [12, 2]), which uses no bias, has the next-state
weights
and all biases W m
and the output weights
and all biases V m
(as in the case of the biased construction,
the encoding also works if biases are set to any value in ( 1; 1), with 0 being
the optimal value for conversion into a sigmoid RNN).
3.3 A rst-order Moore encoding
Consider a Moore nite-state tree transducer of the form
Before encoding this DFSTA in a rst-order RNN we need to split its states
rst; state-splitting has been found necessary to implement arbitrary nite-
state machines in rst-order discrete-time recurrent neural networks [8, 1, 2]
and has also been recently described by Sperduti [17] for the encoding of
in RNN.
A easily be shown to be equivalent
to A is easily constructed by using the method described by Sperduti
[17] as follows:
The new set of states Q 0 is the subset of
dened as
The next-state functions in the new set
are dened as follows:
and
where the shorthand notation
has been used.
Finally, the new output function is dened as follows:
The split encoded into a discrete-state RNN in eqs. (7)
and (8) by choosing its parameters as follows: 3
there exist q 0
jm such that - 0
and zero otherwise;
there exist q 0
and l such that
i and zero otherwise;
i and zero otherwise;
It is not di-cult to show that the operation of this discrete-state RNN is
equivalent to that of the corresponding therefore to that
of A (as was the case with the previous constructions, dierent values for
the biases are also possible but the ones shown happen to be optimal for
conversion into a sigmoid RNN).
Stable simulation of discrete-state units on
analog units
4.1 Using Sma's theorem
The following is a restatement of a theorem by Sma [16] which is included
for convenience. Only the notation has been slightly changed in order to
adapt it to the present study.
3 Remember that uppercase letters are used to denote the weights in discrete-state RNN.
A threshold linear unit (TLU) is a neuron computing its output as
where g H the threshold activation function, the W j (j are real-valued
weights and is a binary input vector.
Consider an analog neuron with weights w having an activation
function g with two dierent limits a = lim x!1 g(x) and
The magnitude - max will be called the maximum input tolerance.
Finally, let also the mapping be dened as:
undened otherwise
This mapping classies the output of an analog neuron into three categories:
low (0), high (1), or forbidden (undened). Then, let R be the
mapping r - kg. Sma's theorem states that,
for any input tolerance - such that 0 < - max and for any output tolerance
such that 0 < -, there exists an analog neuron with activation function
g and weights w R such that, for all x 2 f0; 1g n ,
According to the constructive proof of the theorem [16] a set of su-cient
conditions for the above equation to hold is
a
b a
and
b a
with
where and are such that jg(x) aj < for all x < and jg(x) bj < for
all x > and jj and jj are as small as possible. That is,
Sma's prescription
simply scales the weights of the TLU to get those of the analog network and
does the same with the bias but only after shifting it conveniently to avoid
a zero value for the argument of the activation function.
Note that inputs to the analog unit are allowed to be within - of 0 and 1
whereas outputs are allowed to be within of a and b. When constructing a
recursive network , the outputs of one analog unit are normally used as inputs
for another analog unit and therefore the most natural choice is
and -. This choice is compatible, for instance, with the use of the
logistic function g L whose limits are exactly a = 0 and
not with other activation functions such as the hyperbolic tangent
tanh(x). In particular, for the case eqs. (23) and (24) reduce to
and
and (25) becomes
In the following, we rederive simple su-cient conditions for stable simulation
of a TLU by an analog unit which are suitable for any strictly growing
activation function but restricted to exclusive encoding of the input (whereas
Sma's construction is valid for any binary input vector). The simplicity of
the prescriptions allows for an alternate straightforward worst-case analysis
that leads to weights that are, in most common situations, smaller than those
obtained by direct application of Sma's theorem.
4.2 Using a simple scheme for exclusive encoding of
the input
The conditions for stable simulation of nite-state machines (FSM) in DTRNN
have been studied, following an approach related to that of
Sma[16], by Carrasco
et al. [2] (see also [12, 11]; these conditions assume the special but usual
case of one-hot or exclusive encoding of the input and strictly growing activation
functions. These assumptions, together with a worst-case analysis,
allow one to obtain a prescription for the choice of suitable weights for stable
simulation that works at lower saturation levels than the general scheme
(eqs. 23{25). Usually, the prescription can be realized as a single-parameter
scaling of all of the weights in the TLU including the bias; this scaling is
equivalent to nding a nite value of the gain of the sigmoid function which
ensures correct behavior.
Note that, in the case of exclusive encoding (as the ones used in Section 3,
there are only n possible inputs: the binary vectors b
being the
vector whose i-th component is one and the rest are zero). Therefore, the
argument
may only take n dierent values W 0 +W i
(for the binary input however, the analog neuron
with input tolerance - may receive input vectors in r - (b
ig). This property of exclusive encoding
makes it possible to formulate a condition such that (22) holds for all possible
inputs Two cases have to be distinguished:
1. In the first case, (22) holds if the argument of g exceeds the upper threshold for
every admissible input near b_i. As g is strictly growing, this may equivalently be
written as a lower bound on the weighted sum; its minimum over the admissible inputs
gives the worst case, and bounding that worst case
is a sufficient condition for the analog neuron to simulate the corresponding
TLU with input b_i.
2. A similar argument leads to the sufficient condition for the second case.
For instance, if we choose the analog weights as a uniform scaling of the TLU weights,
eqs. (29) and (30) are fulfilled whenever the scaling factor is large enough.
In order to compare with eqs. (23)-(25), the last
two conditions may be written as a single, more restrictive pair of conditions.
The simple uniformly scaled choice of weights is not adequate in case the argument W_0 + W_i
vanishes, but this case does not appear in any of the encodings proposed in Section 3.
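As a numerical illustration of this exclusive-input analysis (our own sketch, not the literal conditions (29)-(30)), the code below checks whether a single scaling factor H makes a logistic unit reproduce a TLU, within an output tolerance, on all n one-hot inputs; only the n arguments W_0 + W_i have to be inspected. For simplicity the check is performed at the exact one-hot inputs, i.e. the input tolerance is ignored here.

import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

def simulates_on_one_hot(W0, W, H, eps):
    """True if scaling the TLU bias W0 and weights W by H yields a logistic unit
    whose output is within eps of the TLU output for every one-hot input b_i."""
    for Wi in W:
        arg = W0 + Wi
        target = 1 if arg >= 0 else 0        # TLU output on b_i
        if abs(logistic(H * arg) - target) > eps:
            return False
    return True

W0, W = -0.5, [1.0, -1.0, 2.0]               # example TLU over one-hot inputs
H = min(h for h in range(1, 100) if simulates_on_one_hot(W0, W, h, 0.05))
print("smallest integer H that suffices:", H)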
5 Encoding tree automata in sigmoid recursive
neural networks
As mentioned before, the theorem in Section 4 leads to the natural choice
a = 0, b = 1, with equal input and output tolerances, when applying it to neurons in recursive
neural networks. Due to its widespread use, we will consider and compare
in this section various possible encodings using the logistic function g_L,
although results for different activation functions having the same limits
may also be obtained. Indeed, monotonic growth of the
function along the real line is enough for the following derivation (as was
the case for eqs. (29) and (30)). In our case, we want to simulate the discrete-state
units of Section 3 with a sigmoid RNN.
Consider first the high-order Mealy RNN architecture. As the input x_j
in (22) is, in the case of the RNN described in (4), the product of m outputs
x_{j1} ... x_{jm}, each one in the range (0, 1), the product itself also lies in (0, 1);
in other words, there is a forbidden region between the largest possible product of
low factors and the smallest possible product of high factors. It is not difficult to show
the corresponding inequality, with equality holding only in a degenerate case.
Therefore, input and output tolerances adapted to the rank m suffice for our purposes.
If we want to use the same scaling factor for all weights and all possible ranks
m, we can use the most restrictive of these conditions, i.e. the one for the maximum rank.
Consider now the first-order Moore RNN architecture. In this case, there
are no products, and the unmodified conditions on the tolerances are sufficient.
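The effect of the products on the admissible tolerances can be checked with a few lines (our own illustration, with an arbitrary tolerance delta): if every factor is within delta of 1 the product stays above (1 - delta)^m, while a single factor within delta of 0 pushes the product below delta, so a forbidden region between the two survives as long as delta is small enough for the rank m at hand.

def product_range(delta, m):
    """Worst cases for a product of m factors, each within delta of 0 or of 1."""
    all_high = (1.0 - delta) ** m    # smallest product when every factor is high
    one_low = delta                  # largest product when some factor is low
    return one_low, all_high

for m in (1, 2, 3, 5):
    low, high = product_range(0.1, m)
    print(f"m={m}: low products <= {low:.3f}, high products >= {high:.3f}")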
The previous section describes two different schemes for simulating, with sigmoid neurons,
discrete-state neurons that take exclusive input vectors. This section
describes the application of these two schemes to the three recursive neural
network architectures described in Section 2.2.
5.1 Using Šíma's prescription
Šíma's construction (Section 4.1) gives, for the biased high-order Mealy RNN
in Section 3.1, weights that are a common scaling of the discrete weights by a factor H,
with a lower bound on H obtained from the condition (35)⁴, together with a condition
that ensures a positive value of H. As shown in (37),
the minimum value allowed for H depends both on n_X and on the tolerance. For a given
architecture, n_X and the maximum value of m (denoted M) are fixed, so only the tolerance can
be changed. There exists at least one value of the tolerance that allows one to choose the
minimum value of H needed for stable simulation. In this sense, minimization
of H by choosing an appropriate tolerance can be performed as in [2] and leads to
the values shown in Table 1 (minimum required H as a function of n_X and
the maximum rank M). The weights obtained grow more slowly than log(m n_X^m) but,
as can be seen, are inordinately large and lead therefore to a very saturated
analog RNN.
Applying Šíma's construction to the biasless high-order Mealy RNN (Section 3.2),
we get an analogous prescription with a corresponding lower bound on H
(together with a condition ensuring positive values of H). The weights
obtained by searching for the minimum H satisfying the conditions are shown
in Table 2; as can be seen, the weights (which show the same asymptotic behavior
as the ones in the previous construction) are smaller but still too large to
avoid saturation.
⁴ We have used the fact that equations (4) and (6) have n_U (n_X)^m additive terms but, due to
the exclusive encoding of the inputs, all but a small number of these terms are identically zero with no
uncertainty at all.
Finally, applying Šíma's construction⁵ to the first-order Moore RNN in
Section 3.3, we have, for a next-state function of rank m, weights that take a common
nonzero value only for those state and input combinations selected by the transition
function and are zero otherwise, with a lower bound on H in which (36) has been used,
together with an upper bound on the tolerance. For the output function, the weights are
obtained in the same way: nonzero only for the combinations selected by the output
function and zero otherwise, with an analogous lower bound on H (valid provided the
tolerance is small enough, where we have used that the input tolerance is not larger
than the output tolerance).
⁵ Šíma's construction can be applied provided that we consider each possible term of the form
W^m_{ik} u_k[·] in the next-state function as a different bias W_0 with u[·] ∈ {0, 1}^n and
choose the safest prescription (valid for all possible values of the bias). In the present
case, this bias always takes a value determined by the number of additive terms.
If we want a single value of H to assign weights both to all of the next-state
functions and to the output function, we have to use the most restrictive of the two bounds.
The weights obtained by searching for the
minimum H satisfying the conditions are shown in Table 3; they grow more slowly
than log(M n_X) and, as can be seen, they are equal to or smaller than the ones
for the biased high-order construction, but larger than those for the biasless
construction. However, the fact that state splitting leads to larger values of
n_X for automata having the same transition function has to be taken into
account.
5.2 Using the encoding for exclusive inputs
If we choose the weights as a uniform scaling by H, including biases, we obtain for the
biased high-order Mealy encoding in Section 3.1, by substituting in eqs. (31)
and (32), a lower bound on H which happens to be the same expression
as the one obtained in the previous section by using Šíma's construction
on the biasless encoding (Section 3.2); results are shown in Table 2. The
results for rank m = 1 are obviously identical to those reported for second-order
discrete-time recurrent neural networks using the biased construction in [2].
If we instead apply our alternate encoding to the biasless high-order Mealy
construction in Section 3.2, we get a lower bound on H which, after suitable
minimization, leads to the best possible weights of all encodings. Weights grow with m
and n_X more slowly than log(m n_X^m); some results are shown in Table 4. As in the
previous case, the results for rank m = 1 are obviously identical to those reported
for second-order discrete-time recurrent neural networks using no biases in
[2].
Finally, we apply our alternate encoding scheme to the first-order Moore
construction in Section 3.3. Now the condition
(the particular form of (33) in this case) has to be valid for all admissible combinations
of weights. As W_0 and W_i can each take any value in their respective ranges,
the minimum value of |W_0 + W_i| is, due to the exclusive values of all
state vectors, equal to 1. Therefore, a lower bound on H is obtained,
together with an upper bound on the tolerance, which, after suitable minimization, leads to weights
that grow with m and n_X more slowly than log(m n_X); values are shown in Table 5.
The values are smaller than the ones obtained with Šíma's construction for
the same first-order network but are still very large, especially if one considers
that splitting leads to very large values of n_X.
6 Conclusion
We have studied four strategies to encode deterministic finite-state tree automata
(DFSTA) on high-order sigmoid recursive neural networks (RNN)
and two strategies to encode them in first-order sigmoid RNN. These six
strategies are derived from three different strategies to encode DFSTA in
discrete-state RNN (that is, RNN using threshold linear units) by applying
two different weight mapping schemes to convert each one of them into a
sigmoid RNN. The first mapping scheme is the one described by Šíma [16].
The second one is an alternate scheme devised by us. All of the strategies
yield analog RNN with a very simple "weight alphabet" containing only
three weights, all of which are proportional to a single parameter H. The best
results (i.e., the smallest possible value of H, as would be desired in a derivative-based
learning setting) are obtained by applying the alternate scheme to a
biasless discrete-state high-order RNN (it has to be mentioned that Šíma's
mapping yields larger weights in all cases but is more general and would
also work with distributed encodings, which allow the construction of smaller
RNN). In all of the constructions, the values of H suggest that, even though
in principle RNN with finite weights are able to simulate exactly the behavior
of DFSTA, it will in practice be very difficult to learn the exact finite-state
behavior from examples because of the very small gradients present when
weights reach adequately large values.
Smaller weights are obtained at the cost of enlarging the size of the RNN
due to exclusive encoding of states and inputs (Šíma's result also works for
distributed encodings).
Acknowledgements
This work has been supported by the Spanish Comisión
Interministerial de Ciencia y Tecnología through grant TIC97-0941.
--R
Finding structure in time.
Learning the initial state of a second-order recurrent neural network during regular-language inference
A general framework for adaptive data structures processing.
Learning and extracted
Syntactical pattern recognition.
Introduction to automata theory
On the computational power of Elman-style recurrent net- works
Constructing deterministic
Stable encoding of large
Formal Languages.
On the computational power of neural networks for struc- tures
Supervised neural networks for the classi
Tree automata: An informal survey.
--TR
--CTR
Barbara Hammer , Alessio Micheli , Alessandro Sperduti , Marc Strickert, Recursive self-organizing network models, Neural Networks, v.17 n.8-9, p.1061-1085, October/November 2004
Barbara Hammer , Peter Tio, Recurrent neural networks with small weights implement definite memory machines, Neural Computation, v.15 n.8, p.1897-1929, August
Henrik Jacobsson, Rule Extraction from Recurrent Neural Networks: A Taxonomy and Review, Neural Computation, v.17 n.6, p.1223-1263, June 2005 | neural computation;recursive neural networks;analog neural networks;tree automata |
628121 | Generalization Ability of Folding Networks. | AbstractThe information theoretical learnability of folding networks, a very successful approach capable of dealing with tree structured inputs, is examined. We find bounds on the VC, pseudo-, and fat shattering dimension of folding networks with various activation functions. As a consequence, valid generalization of folding networks can be guaranteed. However, distribution independent bounds on the generalization error cannot exist in principle. We propose two approaches which take the specific distribution into account and allow us to derive explicit bounds on the deviation of the empirical error from the real error of a learning algorithm: The first approach requires the probability of large trees to be limited a priori and the second approach deals with situations where the maximum input height in a concrete learning example is restricted. | Introduction
One particular problem for connectionistic methods dealing with structured objects is to find a
way of processing data whose size is not limited a priori. Connectionistic
methods often use a distributed representation of the objects in a vector space of some fixed
dimension, whereas lists, trees, logical formulas, terms, graphs, etc. consist of an unlimited number
of simple elements which are connected in a structured way. The possibly unlimited size does not
admit a direct representation in a finite dimensional vector space. Often, structured data possesses a
recursive nature. In this case processing these structures is possible with standard neural networks
which are enlarged by recurrent connections which mimic the recursive nature of the structure
[9, 10]. Such networks are capable of dealing with trees or lists of arbitrary height and length, for
example. These dynamics have been proposed in a large number of approaches dealing with adaptive
processing of structured data, such as the RAAM, the LRAAM, and folding networks, to name just a few
[11, 24, 28]. The methods differ in how a single processing step looks and how they are trained but
they share the method of how an entire tree is processed: A simple mapping is applied recursively
to the input tree according to the tree structure. A tree is encoded recursively into a distributed
representation such that this code can be used in standard connectionistic methods. Regarding
folding networks the encoding is trained simultaneously with some classification of the trees which
is to be learned using a modification of back-propagation [11]. This approach has been used very
successfully in several areas of application [7, 11, 21, 22, 25]. The RAAM and LRAAM train the
encoding simultaneously with a dual decoding such that the composition yields the identity [24, 28].
The classification of the encoded trees is trained separately.
Here we focus on the capability of learning with these dynamics in principle. We consider
information theoretical learnability, i.e. the question as to whether a finite number of examples
contains enough information for learning: Given a finite set of data, a function with the above
dynamics can be identified which mirrors the underlying regularity - or it can be decided that the
underlying regularity, if any, cannot be modeled with such a function. This is the question as to
whether valid generalization from a finite set of examples to the underlying regularity is possible in
the function class. For standard feed-forward networks this question is answered in the affirmative:
Because a combinatorial quantity called the VC dimension is finite for a fixed architecture so-called
PAC learnability can be guaranteed, moreover, any learning algorithm with small empirical error
generalizes well. One can find explicit bounds on the accuracy of the generalization which depend
on the number of parameters and the number of training patterns but which are independent of
the concrete distribution [3].
In order to use folding networks as a learning mechanism it is necessary to establish analogous
results in the recurrent case, too. If the question whether valid generalization with such an architecture
is possible is answered negatively then none of the above approaches can learn in principle.
If the question is answered positively then any learning algorithm with small empirical error is
a good learning algorithm from an information theoretical point of view. Of course there may
exist differences in the efficiency of the algorithms - some algorithm may turn out computationally
intractable whereas another approach yields a good solution after a short time. However, this is
mainly a difference in the empirical error optimization. The number of examples necessary for valid
generalization at least in some cases depends on the function class which is used for learning and
not on the learning algorithm itself.
Unfortunately the situation turns out to be more difficult in the recursive case than for standard
feed-forward networks. There exists some work which estimates the VC dimension of recurrent and
folding networks [14, 19], the combinatorial quantity finiteness of which characterizes distribution
independent learnability. For arbitrary inputs this dimension is infinite due to the unlimited input
length. I.e. the ability of dealing with inputs of arbitrary size even leads to the ability of storing
arbitrary information with a finite number of parameters since the unlimited input space can be
used for this purpose in some way. As a consequence, distribution independent bounds on the
generalization error cannot exist in these situations. In order to take the specific distribution into
account we modify two approaches from the literature which guarantee learnability even for infinite
VC dimension but an adequate stratification of the function class instead [2, 27]. These approaches
are only formulated for binary valued function classes and consider the generalization error of an
algorithm with zero empirical error. We generalize the situation to function classes and arbitrary
error such that it applies to folding networks and standard learning algorithms as well. This allows
us to establish the information theoretical learnability of folding networks, too.
Now we first define the dynamics of folding networks formally. We mention some facts from
learning theory and add the above mentioned two formalisms which allow us to obtain concrete
bounds for the deviation of the empirical error and the real error in some concrete situations.
Estimations for the so-called VC, pseudo-, and fat shattering dimensions of folding architectures
are then derived. These quantities play a key role concerning learnability. The bounds tell us that distribution
independent learnability cannot be guaranteed in principle, but we derive concrete distribution- or
data-dependent bounds for the generalization error.
2 Folding networks
For completeness we recall the definition of a standard feed-forward network: A feed-forward
neural network consists of a finite set of neurons which are connected in an acyclic
graph. Each connection (i, j) is equipped with a weight w_ij ∈ R. The input neurons
are the neurons without predecessor. All other neurons are called computation units. A nonempty
subset of the computation units is specified, the output units. All computation units which are
not output neurons are called hidden neurons. Each computation unit i is equipped with a bias
θ_i ∈ R and an activation function f_i : R → R. A network with m inputs and n outputs computes
the function (x_1, ..., x_m) ↦ (y_{o_1}, ..., y_{o_n}), where o_1, ..., o_n
are the output units and the value y_i is defined recursively for any neuron i as
y_i = x_i if i is an input unit, and y_i = f_i(θ_i + Σ_j w_ji y_j) otherwise,
the sum running over all predecessors j of i. The term θ_i + Σ_j w_ji y_j
is called the activation of neuron i.
An architecture is a network where the weights and biases are not specified and allowed to
vary in R. In an obvious way, an architecture stands for the set of networks that result from specifying
the weights and biases. As a consequence, a network computes a mapping which is composed
of several simple functions computed by the single neurons. The activation functions f i of the
single computation units are often identical. We drop the subscript i in these cases. The following
activation functions will be considered: the identity id(x) = x, the perceptron
activation H(x) = 1 if x ≥ 0 and H(x) = 0 otherwise,
the standard sigmoidal function sgd(x) = 1/(1 + e^{-x}), and
polynomial activation functions.
Feed-forward networks can handle real vectors of a fixed dimension. More complex objects are
trees with labels in a real vector space. We will assume in the following that any tree has a fixed
fan-out k, which means that any nonempty node has exactly k successors. Consequently, a tree is
either the empty tree ? or it consists of a root which is labeled with some value a ∈ R^m and of k
subtrees t_1, ..., t_k. In the latter case we denote the tree by a(t_1, ..., t_k). The set of trees which
can be defined as above is denoted by (R^m)*_k.
One can use the recursive nature of trees to construct, from any vector valued mapping of
appropriate arity, an induced mapping which deals with trees as inputs: Assume R^l is used for the
encoding and the labels are taken from R^m. Any mapping g : R^m × R^{k·l} → R^l and initial context y ∈ R^l
induce a mapping ~g_y : (R^m)*_k → R^l, which is defined recursively as follows:
~g_y(?) = y,
~g_y(a(t_1, ..., t_k)) = g(a, ~g_y(t_1), ..., ~g_y(t_k)).
This definition can be used to formally define recurrent and folding networks:
Definition 1 A folding network consists of two feed-forward networks which compute functions
g : R^m × R^{k·l} → R^l and h : R^l → R^n, respectively, and an initial context y ∈ R^l. It computes the
mapping h ∘ ~g_y : (R^m)*_k → R^n.
A folding architecture is given by two feed-forward architectures with m + k·l inputs and l
outputs and with l inputs and n outputs, respectively. The context y is not specified, either.
The input neurons m+1, ..., m+k·l of g are called context neurons. g is referred to as the
recursive part of a network, h is the feed-forward part. The input neurons of a folding network
or architecture are the neurons 1, ..., m of g. In the following we will assume that the network
contains only one output neuron in h.
To understand how a folding network computes a function value one can think of the recursive
part as an encoding part: a tree is encoded recursively into a real vector in R^l. Starting at the
empty tree ?, which is encoded by the initial context y, a leaf a is encoded via g, using the code of ?,
as g(a, y, ..., y). Proceeding in the same way, a subtree a(t_1, ..., t_k) is encoded
via g using the already computed codes of the k subtrees t_1, ..., t_k. The feed-forward part maps
the encoded tree to the desired output value. See Fig. 1 as an example for a computation.
Figure 1: Example for the computation of a folding network: if a specific tree serves as input, the
network is unfolded according to the structure of the input tree (recurrent part with context neurons
followed by the feed-forward part) and the output value can simply be computed in this unfolded network.
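The recursive computation illustrated in Fig. 1 can be written down directly. The sketch below is a minimal illustration with one-dimensional labels and codes (m = l = 1, k = 2) and made-up weights for g and h; it encodes a tree bottom-up via ~g_y and then applies the feed-forward part to the code of the root.

import math

def logistic(x):
    return 1.0 / (1.0 + math.exp(-x))

# A tree with fan-out k = 2 is either None (the empty tree) or (label, left, right).
def encode(tree, g, y):
    """The induced mapping ~g_y: encode a tree recursively into a real code."""
    if tree is None:
        return y
    label, left, right = tree
    return g(label, encode(left, g, y), encode(right, g, y))

def g(label, c1, c2):
    """Recursive part: a single logistic unit (illustrative weights)."""
    return logistic(0.8 * label + 1.5 * c1 - 0.7 * c2 + 0.1)

def h(code):
    """Feed-forward part mapping the code of the root to the output."""
    return logistic(2.0 * code - 1.0)

tree = (0.3, (1.0, None, None), (0.5, (0.2, None, None), None))
print(h(encode(tree, g, 0.0)))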
If only lists are dealt with (i.e. fan-out k = 1), folding networks reduce to recurrent networks. Except
for the separation into a feed-forward and a recurrent part, this definition coincides with the standard
definition of partial recurrent neural networks in the literature [12].
In practice, recurrent and folding networks are trained with a gradient descent method like
back-propagation through structure or back-propagation through time, respectively [11, 30]. They
have been used successfully in several areas of application including time series prediction, control
of search heuristics, and classification of chemical data and graphical objects [7, 22, 25].
A similar mechanism proposed for the processing of structured data is the LRAAM. In analogy
to the encoding function ~g_y, it is possible to define, for any mapping G = (G_0, G_1, ..., G_k) : R^l → R^m × R^{k·l}
and any set Y ⊆ R^l, an induced decoding Ḡ_Y : R^l → (R^m)*_k, where a value t ∈ Y is decoded to the
empty tree ? and any other value t is decoded recursively: a complementary dynamics obtains the label
of the root via G_0 and codes for the k subtrees via G_1, ..., G_k.
Definition 2 The LRAAM consists of two feed-forward networks which compute g : R^{m+k·l} → R^l
and G : R^l → R^{m+k·l}, respectively, some vector y ∈ R^l, and some set Y ⊆ R^l. It computes the
mapping Ḡ_Y ∘ ~g_y.
Frequently, the LRAAM is used in the following way: One chooses fixed architectures for the
networks g and G and trains the weights such that the composition Ḡ_Y ∘ ~g_y yields the identity
on the considered trees. Afterwards, Ḡ_Y or ~g_y can be combined with standard networks in order
to approximate mappings from trees into a real vector space or vice versa. In the second step,
the feed-forward architectures are trained while the encoding or decoding of the LRAAM remains
fixed.
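The dual decoding dynamics of the LRAAM can be sketched in the same style. The membership test "t ∈ Y" is implemented here, purely for illustration, as a simple threshold on |t|, and the functions G0, G1, G2 are arbitrary stand-ins rather than trained LRAAM components; a depth guard is added so that the sketch always terminates.

def decode(t, G0, G1, G2, in_Y, max_depth=10):
    """Induced decoding: map a code t back to a tree, stopping whenever t is
    judged to belong to the set Y (here: a norm threshold)."""
    if in_Y(t) or max_depth == 0:
        return None                          # empty tree
    label = G0(t)
    left = decode(G1(t), G0, G1, G2, in_Y, max_depth - 1)
    right = decode(G2(t), G0, G1, G2, in_Y, max_depth - 1)
    return (label, left, right)

G0 = lambda t: round(t, 2)                   # label of the root
G1 = lambda t: 0.5 * t                       # code of the left subtree
G2 = lambda t: 0.3 * t                       # code of the right subtree
in_Y = lambda t: abs(t) < 0.05               # "t encodes the empty tree"

print(decode(0.8, G0, G1, G2, in_Y))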
Although the LRAAM is trained in a different way the processing dynamics is the same if it
is used for the classification of structured data: The encoding part of the LRAAM is combined
with a standard network which is trained for the specific learning problem. Considering the entire
process, we obtain the same function class as represented by folding networks if we restrict to
the learning of functions from trees into a real vector space. Hence the following argumentation
applies to the LRAAM and other mechanisms with the same processing dynamics as well. However,
the situation changes if we fix the encoding and learn only the feed-forward network which is to
be combined with the neural encoding of the LRAAM. Then the situation reduces to learning of
standard feed-forward networks because the trees are identified with fixed real input vectors.
3 Foundations from learning theory
Learning deals with the possibility of learning an abstract regularity if a finite set of data is given.
We fix an input space X (for example, the set of lists or trees) which is equipped with a σ-algebra.
We fix a set F of functions from X to [0, 1] (a network architecture, for example). An unknown
function f : X → [0, 1] is to be learned with F. For this purpose a finite set of independent,
identically distributed data points x = (x_1, ..., x_m), together with the values f(x_1), ..., f(x_m),
is drawn according to a probability distribution P on X.
A learning algorithm is a mapping
h : ⋃_m (X × [0, 1])^m → F
which selects a function in F for any pattern set such that this function hopefully nearly coincides
with the function that is to be learned. We write h_m(f, x) for h((x_1, f(x_1)), ..., (x_m, f(x_m))). That
is, the algorithm tries to minimize the real error d_P(f, h_m(f, x)), where
d_P(f, g) = ∫ |f(x) - g(x)| dP(x).
Of course, this error is unknown in general since the probability P and the function f that is to be
learned are unknown. A concrete learning algorithm often simply minimizes the empirical error
d_m(f, h_m(f, x), x), where
d_m(f, g, x) = (1/m) Σ_{i=1}^m |f(x_i) - g(x_i)|.
For example, a standard training algorithm for a network architecture fits the weights by means of
a gradient descent on the surface representing the empirical error in dependence on the weights.
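Read concretely, the empirical error is just the average deviation on the drawn sample, while the real error is the corresponding expectation under P. The snippet below (our own illustration with an arbitrary target, hypothesis, and distribution) computes the former exactly and approximates the latter by Monte Carlo sampling.

import random

def empirical_error(f, g, sample):
    """d_m(f, g, x): average deviation of g from f on the sample points."""
    return sum(abs(f(x) - g(x)) for x in sample) / len(sample)

def approx_real_error(f, g, draw, n=100000):
    """Monte Carlo estimate of d_P(f, g) for the distribution given by `draw`."""
    total = 0.0
    for _ in range(n):
        x = draw()
        total += abs(f(x) - g(x))
    return total / n

f = lambda x: 1.0 if x > 0.5 else 0.0           # unknown target function
g = lambda x: min(1.0, max(0.0, 2 * x - 0.4))   # learned hypothesis
draw = random.random                            # P = uniform distribution on [0, 1]

sample = [draw() for _ in range(50)]
print(empirical_error(f, g, sample), approx_real_error(f, g, draw))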
We first consider the distribution dependent setting, i.e. there is given a fixed probability distribution
P on X. An algorithm is called probably approximately correct or PAC if
sup_f P^m(x | d_P(f, h_m(f, x)) > ε) → 0 for m → ∞
holds for all ε > 0. This is the weakest condition such that the following holds: bounds for
the number of examples which guarantee valid generalization exist for some learning algorithm
and these bounds are independent of the unknown function that is to be learned. In practice a
somewhat stronger condition is desirable: the existence of just one maybe inefficient algorithm
is not satisfactory; we want to use any learning algorithm which is efficient and yields a small
empirical error. The property that for any learning algorithm the empirical error is representative
of the real error is captured by the property of uniform convergence of empirical distances,
or UCED for short, i.e.
P^m(x | sup_{f,g ∈ F} |d_P(f, g) - d_m(f, g, x)| > ε) → 0 for m → ∞
holds for all ε > 0. If F possesses the UCED property then any learning algorithm with small
empirical error is highly probably a good algorithm concerning the generalization. The UCED
property is desirable since it allows us to use any algorithm with small empirical error and to rank
several algorithms in dependence on their empirical errors.
The quantity ε is referred to as the accuracy. If the above probability is explicitly bounded
by some δ we refer to δ as the confidence. Then there exists an equivalent characterization of
the UCED property which allows us to test the property for concrete classes F and, furthermore,
allows us to derive explicit bounds on the number of examples such that the empirical and real
error deviate by at most ε with confidence at least δ.
For a set S with pseudometric d the covering number N(ε, S, d) denotes the smallest number
n such that n points x_1, ..., x_n in S exist such that the closed balls with respect to d with radius
ε and center x_i cover S. Now a characterization of the UCED property is possible. The UCED
property holds if and only if
lim_{m→∞} E(ln N(ε, F_{|x}, d_m)) / m = 0 for every ε > 0,
where F_{|x} denotes the restriction of F to the inputs x and d_m refers to the empirical distance. An
explicit bound on the deviation of the empirical error from the real error is given by an inequality which bounds
P^m(x | sup_{f,g ∈ F} |d_P(f, g) - d_m(f, g, x)| > ε)
in terms of the expected covering number of F_{|x} (at a scale proportional to ε) and a factor decreasing exponentially in m.
See [29] (Example 5.5, Corollary 5.6, Theorem 5.7).
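The empirical covering number appearing here can be made concrete on a finite sample: restrict each function to the sample points, use the empirical distance as the metric, and count greedily how many balls of radius ε are needed. The greedy count below (our own sketch, on made-up restrictions) only upper-bounds the true covering number, which is enough to convey the idea.

def empirical_distance(u, v):
    """Normalized L1 distance between two restrictions u, v to the sample."""
    return sum(abs(a - b) for a, b in zip(u, v)) / len(u)

def greedy_cover_size(restrictions, eps):
    """Greedy upper bound on N(eps, F_|x, empirical distance)."""
    centers = []
    for r in restrictions:
        if all(empirical_distance(r, c) > eps for c in centers):
            centers.append(r)
    return len(centers)

# Restrictions of a few functions in F to a sample x of four points:
F_on_x = [(0.0, 0.1, 0.9, 1.0), (0.0, 0.2, 0.8, 1.0),
          (1.0, 0.9, 0.1, 0.0), (0.5, 0.5, 0.5, 0.5)]
print(greedy_cover_size(F_on_x, 0.25))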
Now we need a possibility of estimating the so-called empirical covering number of F which
occurs in the above inequality. Often this is done by estimating a combinatorial quantity associated
with F which measures in some sense the capacity of the function class. We first assume that F
is a concept class, i.e. the function values are contained in the binary set {0, 1}. The Vapnik-Chervonenkis
dimension or VC dimension VC(F) of F is the largest number of points which
can be shattered by F, i.e. any binary mapping on these points can be obtained as a restriction of
a function in F. For a real valued function class F the ε-fat shattering dimension fat_ε(F) is the
largest size of a set that is ε-fat shattered by F, i.e. for the points x_1, ..., x_n there exist reference
points r_1, ..., r_n ∈ R such that for any binary mapping d on these x_i some function f ∈ F exists
with f(x_i) ≥ r_i + ε whenever d(x_i) = 1 and f(x_i) ≤ r_i - ε whenever d(x_i) = 0, for every i. For
ε = 0 the corresponding quantity is called the pseudodimension PS(F).
For finite VC or pseudodimension, a bound on the expected covering number in terms of the respective dimension holds
for an arbitrary P [29] (Theorem 4.2). Consequently, finite VC or pseudodimension, respectively,
ensure the UCED property to hold, and bounds on these terms lead to bounds on the number of
examples that guarantee valid generalization. Moreover, they ensure the distribution independent
UCED property as well, i.e. the UCED property holds even if prefixed with a sup over P. The fat
shattering dimension, which may be smaller than the pseudodimension, yields an analogous inequality;
hence even a finite fat shattering dimension guarantees the distribution
independent UCED property and leads to bounds on the generalization error. In the following we
assume that the constant function 0 is contained in F . This is usually the case if F is a function
class computed by a folding architecture. It has been shown that finiteness of the VC dimension if
F is a concept class or finiteness of the fat shattering dimension if F is a function class, respectively,
is even necessary for F to possess the distribution independent UCED property [1, 18]. In general,
only the class of so-called loss functions which is correlated to F has a finite fat shattering dimension
if F possesses the UCED property. However, if the constant function 0 is contained in F the class
of loss functions contains F itself, such that F has a finite fat shattering dimension as well.
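For small finite situations these shattering notions can be checked by brute force: a point set is shattered exactly when every one of the 2^n dichotomies is realized by some function of the class. The snippet below (illustrative only, for a class given as a finite list of 0/1-valued functions) performs this check and shows that threshold functions on the line shatter one point but not two.

from itertools import product

def shatters(functions, points):
    """True iff every dichotomy of `points` is realized by some function."""
    realized = {tuple(f(p) for p in points) for f in functions}
    return all(d in realized for d in product((0, 1), repeat=len(points)))

# Threshold functions x -> [x >= a] only realize monotone dichotomies:
thresholds = [lambda x, a=a: int(x >= a) for a in (0.0, 0.35, 0.7, 1.1)]
print(shatters(thresholds, [0.2]))        # True: a single point is shattered
print(shatters(thresholds, [0.2, 0.5]))   # False: the dichotomy (1, 0) is missing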
Because of the central role of these combinatorial quantities we will estimate the VC and fat
shattering dimension of a folding architecture in the next paragraph. It will turn out that they are
infinite in general. In order to ensure the UCED property the above argumentation needs to be
refined. Fortunately, the input space X can be divided as X = ⋃_t X_t, where the X_t are the trees of
height at most t in the case of a folding architecture, and the VC dimension of the architecture is
finite if restricted to inputs in X_t. This allows us to derive bounds if the probability of high trees
is restricted a priori. For this purpose we will use the following theorem.
Theorem 3 Assume F is a function class with inputs in X and outputs in [0, 1]. Assume
X = ⋃_t X_t, where X_t is measurable for any t ∈ N and X_t ⊆ X_{t+1}. Assume P is a probability measure
on X, ε, δ ∈ [0, 1], and t is chosen such that P(X \ X_t) is sufficiently small compared to ε. Then
P^m(x | sup_{f,g ∈ F} |d_P(f, g) - d_m(f, g, x)| > ε) ≤ δ
for a number of examples m that depends on ε, δ, and the VC or pseudodimension of F restricted to X_t;
if the fat shattering dimension of F restricted to X_t (at a scale proportional to ε) is finite, this holds even for a correspondingly smaller m.
Proof: We can estimate the deviation of the real error from the empirical error by splitting according
to whether at most m(1 - ε/4) of the points in x are contained in X_t,
where P_t is the probability induced by P on X_t. This works because d_P differs
from d_{P_t} by at most 3ε/8, and if a fraction ε/4 of the points is dropped from x then
d_m changes by at most ε/2. Using
the Chebychef inequality we can limit the first term;
as mentioned earlier, the second term can be limited via
the expected covering number of F restricted to X_t, which in turn can be bounded in terms of the respective dimension.
The entire term is limited by δ for the numbers of examples stated in the theorem. □
However, it is necessary to know the probability of high trees a priori in order to get bounds
on the number of examples which are sufficient for valid generalization. This holds even if the
maximum input height of the trees in a concrete training set is restricted and therefore it is not
very likely for larger trees to occur. Here the luckiness framework [27] turns out to be useful. It
allows us to substitute the prior bounds on the probability of high trees by posterior bounds on
the maximum input height. Since we want to get bounds for the UCED property we generalize the
approach of [27] to function classes in the following way:
Assume that F is a [0, 1]-valued function class with inputs in X and that L, defined on pairs of a sample
and a function, is a function, the so-called luckiness function. This function measures some quantity which
allows a stratification of the entire function class into subclasses with some finite capacity. If L
outputs a small value, we are lucky in the sense that the concrete output function of a learning
algorithm is contained in a subclass of small capacity which needs only few examples for correct
generalization. In our case it will simply measure the maximum height of the input trees in a
concrete training set. Define the corresponding function l(x, f)
which measures the number of (quantized) functions that are at least as lucky as f on x. Note that we are
dealing with real outputs; consequently a quantization according to some value α of the outputs is
necessary to ensure the finiteness of the above number: F^α refers to the function class with outputs
in {0, α, 2α, ...} which is obtained if all outputs of f ∈ F in [kα - α/2, kα + α/2[ are identified
with kα for k ∈ N. The luckiness function L is smooth with respect to η and Φ, which are both
mappings from the sample size, the luckiness value, the confidence, and the quantization accuracy to the positive reals,
if for every double sample xy a subsample x'y' can be obtained by deleting at most a fraction η of the points in x and in y
such that the number of functions that are at least as lucky as f on x'y' is bounded by Φ.
This is a technical condition which we will need in the proof. The smoothness condition
allows us to estimate the number of functions which are at least as lucky as g on a double sample
xy (or a large part of it) if we only know the luckiness of g on the first half of the sample x. Since
in a lucky situation the number of functions which are to be considered is limited and hence good
generalization bounds can be obtained, this condition characterizes some kind of smoothness if we
enlarge the sample set. It is a stronger condition than the smoothness requirement in [27] because
the consideration is not restricted to functions g that coincide on x. Since we want to get results
for learning algorithms with small empirical error, but which are not necessarily consistent, this
generalized possibility of estimating the luckiness of a double sample knowing only the first half is
appropriate in our case.
Now in analogy to [27] we can state the following theorem which guarantees some kind of UCED
property if the situation has turned out to be lucky in a concrete learning task. For this purpose the
setting is split into different scenarios which are more or less lucky and occur with some probability
p_t. Depending on the concrete scenario, generalization bounds can be obtained.
Theorem 4 Suppose p_t (t ∈ N) are positive numbers with Σ_t p_t ≤ 1 and L is a luckiness function
for a class F which is smooth with respect to η and Φ. Then an inequality bounding the probability
that the real error of the output of h deviates from its empirical error by more than ε(m, t, δ, α)
is valid for any learning algorithm h, real values δ, α > 0, and an appropriate
ε(m, t, δ, α) of square-root type in the sample size. Hence if we are in a lucky situation such that Φ can
be limited by some 2^{t_0+1}, the deviation of the real error from the empirical error can be limited by
a term of order O(√((t_0 + ln(1/δ))/m)), up to the quantization accuracy α,
with high probability.
Proof: For any f ∈ F we can bound the probability of a large deviation by the probability of the
corresponding event on a double sample, which is fulfilled for ε as defined above [29] (Theorem 5.7, step 1). It is sufficient
to bound the probability of the latter set for each single t by (p_t δ)/2. Intersecting such a set
for a single t with the set that occurs in the definition of the smoothness of l and with its complement,
respectively, we obtain a bound consisting of two parts.
Denote the above event by A. Consider the uniform distribution
U on the group of permutations of the double sample that only swap elements
i and m + i. Thus we have a bound in terms of an integral over these permutations,
where x_σ is the vector obtained by applying the permutation σ [29] (Theorem 5.7, step 2). The
latter probability can be bounded by a supremum over double samples xy in terms of the number of
quantized functions on the retained subsample x'y', where f^α denotes the quantized version of f, with outputs
in [kα - α/2, kα + α/2[ identified with kα. Now denote the event of which the probability under U is
measured by B, and define equivalence classes C on the permutations such that two permutations
belong to the same class if they map all indices to the same values unless x' and y' both contain
this index. We find that the probability of B can be estimated via these classes.
If we restrict the events to C we definitely consider only permutations which swap elements in x'
and y', such that we can bound the latter probability in terms of U',
where U' denotes the uniform distribution on the swappings of the common indices of x' and y'.
The latter probability can be bounded, using Hoeffding's inequality for random variables with values
±(error on x' minus error on y'), by an exponentially small term.
In total, we can therefore obtain the desired bound if we choose ε large enough,
which is fulfilled for the value stated in the theorem. □
Note that the bound ε tends to 0 for m → ∞ if the p_t are decreasing and η behaves in such a way
that the deleted fraction becomes small. Furthermore, we have obtained bounds on the difference between the
real and empirical error instead of dealing only with consistent algorithms as in [27]. We have
considered functions instead of concept classes, which causes an increase in the bound by α due to
the quantization, and a decrease in the convergence rate because we have used Hoeffding's inequality in
the function case.
Furthermore, a dual formulation with an unluckiness function L' is possible, too. This corresponds
to a substitution of ≤ by ≥ in the definition of l; the other formulas hold in the same
manner. We will use the unluckiness framework later on.
4 Generalization ability of folding networks
We want to apply the general results from learning theory to folding networks. For this purpose
we first estimate the combinatorial quantities VC(F_{|X_t}), PS(F_{|X_t}), and fat_ε(F_{|X_t}) of a
folding architecture F which is restricted to the set X_t of input trees of height at most t. Denote by σ
the activation function of the architecture, by W the number of adjustable parameters, i.e. weights,
biases, and components of the initial context, by N the number of neurons, and by h the depth of
the feed-forward architecture which induces the folding architecture, i.e. the maximum length in a
path of the graph defining the network structure. Upper bounds on the VC or pseudodimension
d t of F jX t can be obtained by first substituting each input tree by an equivalent input tree with
a maximum number of nodes, unfolding the network for these inputs, and applying the bounds
from the feed-forward case to these unfolded networks. For details see [14, 15]. This leads to the
following bounds, some of which can be found in [8, 14, 19]:
d_t = O(W ln(th)) if σ is linear,
d_t = O(W t h ln d) if σ is a polynomial of degree d ≥ 2,
d_t = O(WN + W ln(Wt)) if σ = sgd.
Note that the bounds do not differ for lists (k = 1) and trees (k = 2) in the linear and polynomial cases.
Some lower bounds can be found in [8, 14, 19], for instance for the case where σ is a nonlinear polynomial.
Since a standard feed-forward perceptron architecture exists shattering a number of points of order W ln W, and the
sigmoidal function can approximate the perceptron activation arbitrarily well, we can combine
the architecture in the sigmoidal case with this feed-forward architecture to obtain an additional
summand W ln W in the lower bounds for sgd. The detailed construction is described in
[14] (Theorem 11). The following theorem yields a further slight improvement in the sigmoidal
case.
Theorem 5 For an input set (R^2)*_2 and the activation function sgd there exists, for every W and t,
an architecture with O(W) weights shattering on the order of W t(t + 1)/2 trees of height t.
Proof: We restrict the argumentation to the case k = 2. Consider the t(t + 1)/2 trees of depth t
which contain all binary numbers of length t in the first components of the labels of the leaves,
all binary numbers of length t - 1 in the labels of the next layer, ...,
the numbers 0 and 1 in the first layer, and the number 0 in the root. In the tree T_ij
the second component of the labels is 0 for all except one layer, where it
is 1 at all labels whose already defined first coefficient has a 1 as the j-th digit. T_{2,1} is the tree
(0, 0)((0, 0)((00, 0), (01, 0)), (1, 0)((10, 1), (11, 1))) if the depth t equals 2.
The purpose of this definition is that the coefficients which enumerate all binary strings are
used to extract the bits number 1, ..., t(t + 1)/2 in an efficient way from the context vector: we
can simply compare the context with these numbers. If the first bits correspond, we cut this prefix
by subtracting the number from the context and obtain the next bits for the next iteration step.
The other coefficient of the labels specifies the digit of the context vector which is responsible for the
input tree T_ij. With these definitions a recursive architecture
can be constructed which just outputs, for an input T_ij, the responsible bit of the initial context and
therefore shatters these trees by an appropriate choice of the initial context.
To be more precise, the architecture is induced by a mapping f
which computes in its third component the responsible bit of y for T_ij, given an
appropriate initial context. The role of the first context neuron is to store
the remaining bits of the initial context; at each recursive computation step the context is shifted
and the leading bits are dropped by subtracting an appropriate label of the tree in the
corresponding layer. The second context neuron computes a value that depends only on the height of the remaining tree.
Of course, we can substitute this value by a scaled version which is contained in the range of sgd.
The third context neuron stores the bit responsible for T_ij: to obtain an output 1, the first bits of
an appropriate context have to coincide with a binary number which has an entry 1 at the position
that is responsible for the tree; this position is indicated by the second label component. f can be approximated arbitrarily
well by an architecture with the sigmoidal activation function with a fixed number of neurons. It
shatters the t(t + 1)/2 trees.
Now we combine W of these architectures, obtaining an architecture shattering W t(t + 1)/2
trees with O(W) weights. This proceeds by first simulating the initial context with additional
weights and adding W of these architectures, which is described in [14] (Theorem 11) in detail.
The additional summand W ln W can be obtained as described earlier. □
Unfortunately, this lower bound still differs from the upper bound by an exponential term in t.
Nevertheless it is interesting for the following reason: the bounds in the linear or polynomial
case do not differ when comparing k = 1 and k = 2. In the sigmoidal case the 'real upper bound' is
expected to be of order WNt for lists, but the lower bound we obtained for trees is of order t²;
consequently, the capacity increases for tree structured inputs in the sigmoidal case
compared to lists, in contrast to the linear or polynomial case.
For all activation functions the bounds become infinite if arbitrary inputs are allowed. For the
perceptron activation function this can be prohibited by restricting the inputs to lists or trees with
nodes in a finite alphabet [19]. In general one could restrict the absolute values of the weights
and inputs and consider the fat shattering dimension instead of the pseudodimension. This turns
out to be useful when dealing with the SVM or ensembles of networks, for example [4, 13, 27].
Unfortunately, even if the activation function coincides with the identity, a lower bound
that grows with the maximum input length t can be found for the fat shattering dimension with restricted weights and inputs [15]. For
the sigmoidal activation the fat shattering dimension is even infinite under such restrictions [15].
The following theorem generalizes this result to a large number of activation functions.
Theorem 6 For any activation function σ which is twice continuously differentiable with non-vanishing
second derivative in the neighborhood of at least one point we obtain the lower bound
fat_{0.1}(F) = ∞ for a recurrent architecture F with 3 computation neurons with activation
function σ in the recursive part and one linear neuron in the feed-forward part, input lists with
entries in a unary alphabet, and weights restricted by some constant which depends on the activation
function σ.
Proof: There is a simple function f(x, y) with the property (*) that, for any x, its image of each of
the intervals ]0.1, 0.4[ and ]0.6, 0.9[ contains all of ]0, 1[. Therefore the induced function class
{f̃_y | y ∈ ]0, 1[} 0.1-shatters all sets of sequences with
mutually different lengths: starting with the longest sequence in some set and a value in ]0.1, 0.4[
or ]0.6, 0.9[, respectively, corresponding to the desired output of the sequence, one can recursively
choose an inverse image in ]0.1, 0.4[ or ]0.6, 0.9[ because of (*) in order to get outputs for shorter
sequences and, finally, an appropriate initial context y. Since even ]0, 1[ is entirely contained in
the image of ]0.1, 0.4[ ∪ ]0.6, 0.9[, the same holds for any continuous function which differs from
f by at most 0.1.
σ is twice continuously differentiable with non-vanishing second derivative in the neighborhood
of at least one point; consequently, points x_0 and x_1 and ε > 0 exist such that a suitable linear
combination g(x, y) of σ evaluated near these points approximates f(x, y)
with a maximum deviation of 0.007 for any y ∈ [0, 1]. Consequently, g(x, y)
differs from f(x, y) by at most 0.1 for any input y from [0, 1]. Hence {g̃_y | y ∈ ]0, 1[} shatters all
sequences with mutually different lengths as well.
g can be implemented by a folding network without hidden layers, with 3 context neurons and
activation function σ in the recursive part, one linear neuron in the feed-forward part, and
some context in a closed set which depends on the activation function σ. This holds because
g is a linear combination of neurons with activation function σ and a constant, and the linear
mapping in g can be integrated into the network structure, since composing a linear map with the
preceding layer only changes the weights. Except for the initial context, which is to be chosen in a compact set, the
weights in the architecture are fixed for any set to be shattered and only depend on the activation
function σ. □
Hence the distribution independent UCED property does not hold and a fixed folding architecture
is not distribution independent PAC learnable under realistic conditions for a large number
of activation functions including the standard sigmoidal function. This fact does not rely on the
learning algorithm which is used but is a characteristic of the function class. Because it is possible
to deal with inputs of arbitrary size this unlimited size can be used to store in some sense all
dichotomies of the inputs. Regarding the above argumentation the situation is even worse. The
architecture is very small and only uses the different length of the inputs. In particular, training
sets which typically occur in time series prediction are shattered, i.e. a table-lookup is possible on
those inputs.
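The mechanism behind this observation, namely using the unbounded input length to read arbitrary information out of a single real-valued initial context, can be caricatured by an idealized, noise-free recurrence: store the desired outputs for the sequences of lengths 1, 2, 3, ... as the binary digits of the initial context and let every time step shift one digit out. This is only a simplification of the construction in the proof of Theorem 6 (which uses a fixed smooth activation function and bounded weights), but it shows how sequences that differ only in their length can be mapped to arbitrary output values.

def initial_context(bits):
    """Pack the desired outputs for lengths 1, 2, ... into one real number."""
    return sum(b / 2 ** (i + 1) for i, b in enumerate(bits))

def output_for_length(y, length):
    """Process `length` identical input symbols; output the digit consumed last."""
    out = 0
    for _ in range(length):
        out = 1 if 2 * y >= 1.0 else 0   # leading binary digit of the context
        y = (2 * y) % 1.0                # shift the context one digit to the left
    return out

desired = [1, 0, 0, 1, 1]                # an arbitrary dichotomy of lengths 1..5
y0 = initial_context(desired)
print([output_for_length(y0, L) for L in range(1, 6)])  # reproduces `desired`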
However, it is shown in [14] that distribution dependent PAC learnability is guaranteed. More-
over, the arguments from the last section allow us to derive bounds on the deviation of the empirical
error from the real error for any learning algorithm.
Corollary 7 Denote by F a fixed folding architecture with inputs in X and by X_t the set of trees
of height at most t. Assume P is a probability measure on X and t is chosen as in Theorem 3. Then,
for any function f that is to be learned and any learning algorithm h, the bound
P^m(x | |d_P(f, h_m(f, x)) - d_m(f, h_m(f, x), x)| > ε) ≤ δ
is valid if the number of examples m is chosen as specified in Theorem 3. The bound is polynomial
in 1/ε and 1/δ if P(X_t) approaches 1 sufficiently fast relative to the growth of
fat_{ε/512}(F_{|X_t}).
Proof: The bounds follow immediately from Theorem 3. They are polynomial in 1/ε and 1/δ if
the VC, pseudo-, or fat shattering dimension is polynomial in 1/ε and 1/δ. Because of the condition
on P(X_t) the above inequality can be derived. □
This argumentation leads to bounds if we can limit the probability P (X t ). Furthermore, these
bounds are polynomial if the probability for large trees tends to 0 sufficiently fast where the
necessary rate of convergence depends on the folding architecture which is considered. We can
substitute this prior information using the luckiness framework: We can learn with a concrete
training sample and derive bounds which only depend on the maximum height of the trees in the
training sample and the capacity of the architecture.
Corollary 8 Assume F is a [0, 1]-valued function class on the trees X, P is a probability distribution
on X, and the number of quantized functions restricted to trees of height ≤ t is finite for every t; then
the deviation of the empirical error from the real error of any learning algorithm can be bounded, with high probability,
by a term depending on the sample size and on the number of quantized functions restricted to trees of height ≤ t_x, with t_x being the maximum height of trees in the sample x.
Proof: We want to apply Theorem 4. We use the same notation as in the theorem. The unluckiness
function L'(x, f) = max{height of a tree in x} is smooth with respect to Φ(m, L'(x, f), δ, α) given by
the number of quantized functions restricted to trees of height ≤ L'(x, f) and η(m, L'(x, f), δ, α) = lg(1/δ)/m,
as can be seen as follows:
the number of functions in the restriction of F^α to a subsample x'y' whose trees have height at most L'(x, f)
can be bounded by Φ(m, L'(x, f), δ, α) because this number is bounded from above by the quantity
regardless of the length of x'y'. The relevant probability equals an integral over
the uniform distribution U on the swapping permutations of 2m elements, where A is the
above event. We want to bound the number of swappings of xy such that on the first half no tree
is higher than a fixed value t, whereas on the second half at least mη trees are higher than t. We
may swap at most all but mη indices arbitrarily. Obviously, the above probability can be bounded
by 2^{-mη}, which is at most δ for η ≥ lg(1/δ)/m.
We choose the p_t appropriately. Now we can insert these values into the inequalities obtained
by the luckiness framework and get the bound stated above,
where t is chosen such that Φ(m, L(x, f), δ, α) ≤ 2^{t+1}. Hence, it is sufficient to choose t at least
of the order of lg Φ(m, L(x, f), δ, α). □
5 Conclusions
The information theoretical learnability of folding architectures has been examined. For this purpose
bounds on the VC, pseudo-, and fat shattering dimension which play a key role in learnability
have been cited or improved, respectively. Since the fat shattering dimension is infinite even for
restricted weights and inputs there cannot exist bounds on the number of examples which guarantee
valid generalization and which are independent of the special distribution. Since the results do not
depend on the concrete learning algorithm but only on the dynamics this is in principle a drawback
of many mechanisms proposed for learning of structured data. If a list or tree structure is processed
recursively according to the recursive structure of the data then the a priori unlimited length or
height of the inputs offers the possibility of using this space to store any desired dichotomy on the
inputs in some way.
Upper bounds on the VC and pseudodimension can be given in terms of the number of parameters
in the network and the maximum input height. We have proposed two approaches which allow
a stratification of the situation via the input space or the output of a concrete learning algorithm.
Concerning folding networks a division of the input space in sets of trees with restricted height fits
to the first approach. It allows us to derive bounds on the deviation of the empirical and the real
error for any learning algorithm and any probability distribution for which the probability of high
trees can be restricted a priori. The second approach has been applied to training situations where
the height of the input trees is restricted in a concrete learning example. It allows us to derive
bounds on the deviation of the empirical error and the real error which depend on the concrete
learning set, that means the maximum height of the input trees. Note that in both approaches
the bounds are rather conservative because we have not yet tried to improve the constants which
occur.
As a consequence the structural risk of a learning algorithm can be controlled for folding networks
and other methods with the same processing dynamics as well. The real error of a learning
algorithm can be estimated if the empirical error, the network architecture, the number of patterns,
and additionally, the probability of high trees or the maximum input height in the training set are
known.
Although this fact holds for any learning algorithm some algorithms are to be preferred compared
to others: Since any algorithm with small empirical error generalizes well and needs the
same number of examples, the question arises as to whether a learning algorithm is capable of
minimizing the empirical error in an efficient way. An algorithm is to be preferred if it manages
this task efficiently. Here the folding architecture seems superior to the RAAM, for example, if
used for classification of structured data because it does not try to find encoding, decoding, and
an appropriate classification but only encoding and classification. That means, the same function
class is considered when dealing with the LRAAM instead of folding networks, but a more difficult
minimization task is to be solved. Furthermore, algorithms which start from some prior knowledge
if available for example in form of automata rules [23] highly probably find a small empirical error
faster than an algorithm which has to start from scratch because the starting point is closer to an
optimum value in the first case. Again, the function class remains the same but the initialization
of the training process is more adequate. Actually, training recurrent networks with a gradient
descend method has been proven to be particularly difficult [5, 16] and the same holds for folding
networks dealing with very high input trees as well. This makes further investigation of alternative
methods of learning necessary, such as the already mentioned method of starting from an appropriately
initialized network rather than from scratch [20, 23], or of using appropriate modifications of the architecture
or the training algorithm [6, 16]. But for all algorithms the above bounds on the number
of training samples apply and no algorithm however complicated is able to yield valid generalization
with a number of examples independent of the underlying regularity.
--R
A sufficient condition for polynomial distribution-dependent learnability
Probabilistic analysis of learning in artificial neural networks: The PAC model and its variants.
For valid generalization
Learning long-term dependencies with gradient descent is difficult
Credit assignment through time: Alternatives to backpropagation.
A topological transformation for hidden recursive models.
Sample complexity for learning recurrent perceptron mappings.
A general framework for adaptive processing of data sequences.
Adaptive Processing of Sequences and Data Structures.
Learning task-dependent distributed representations by backpropagation through structure
Special issue on recurrent neural networks for sequence processing.
Approximation and learning of convex superpositions.
On the learnability of recursive data.
On the generalization of Elman networks.
Long short-term memory
Polynomial bounds for the VC dimension of sigmoidal neural networks.
Efficient distribution-free learning of probabilistic concepts
On the correspondence between neural folding architectures and tree automata.
Inductive learning symbolic domains using structure-driven neural networks
Neural net architectures for temporal sequence processing.
Constructing deterministic finite-state automata in recurrent neural networks
Recursive distributed representation.
Relating chemical structure to activity with the structure processing neural folding architecture.
Some experiments on the applicability of folding architectures to guide theorem proving.
Structural risk minimization over data dependent hierarchies.
Labeling RAAM.
A Theory of Learning and Generalization.
The roots of backpropagation.
--TR
--CTR
M. K. M. Rahman , Wang Pi Yang , Tommy W. S. Chow , Sitao Wu, A flexible multi-layer self-organizing map for generic processing of tree-structured data, Pattern Recognition, v.40 n.5, p.1406-1424, May, 2007
Barbara Hammer , Alessio Micheli , Alessandro Sperduti, Universal Approximation Capability of Cascade Correlation for Structures, Neural Computation, v.17 n.5, p.1109-1159, May 2005
Barbara Hammer , Peter Tio, Recurrent neural networks with small weights implement definite memory machines, Neural Computation, v.15 n.8, p.1897-1929, August | luckiness function;folding networks;VC dimension;UCED property;recurrent neural networks;computational learning theory |
628123 | Incremental Syntactic Parsing of Natural Language Corpora with Simple Synchrony Networks. | AbstractThis article explores the use of Simple Synchrony Networks (SSNs) for learning to parse English sentences drawn from a corpus of naturally occurring text. Parsing natural language sentences requires taking a sequence of words and outputting a hierarchical structure representing how those words fit together to form constituents. Feed-forward and Simple Recurrent Networks have had great difficulty with this task, in part because the number of relationships required to specify a structure is too large for the number of unit outputs they have available. SSNs have the representational power to output the necessary $O(n^2)$ possible structural relationships because SSNs extend the $O(n)$ incremental outputs of Simple Recurrent Networks with the $O(n)$ entity outputs provided by Temporal Synchrony Variable Binding. This article presents an incremental representation of constituent structures which allows SSNs to make effective use of both these dimensions. Experiments on learning to parse naturally occurring text show that this output format supports both effective representation and effective generalization in SSNs. To emphasize the importance of this generalization ability, this article also proposes a short-term memory mechanism for retaining a bounded number of constituents during parsing. This mechanism improves the $O(n^2)$ speed of the basic SSN architecture to linear time, but experiments confirm that the generalization ability of SSN networks is maintained. | Introduction
This article explores the use of Simple Synchrony Networks (SSNs) for learning to parse English
sentences drawn from a corpus of naturally occurring text. The SSN has been defined
in previous work [17, 23], and is an extension of the Simple Recurrent Network (SRN) [6, 7].
The SSN extends SRNs with Temporal Synchrony Variable Binding (TSVB) [33], which enables
the SSN to represent structures and generalise across structural constituents.
We apply SSNs to syntactic parsing of natural language as it provides a standard task
on real world data which requires a structured output. Parsing natural language sentences
requires taking a sequence of words and outputting a hierarchical structure representing
how those words fit together to form constituents, such as noun phrases and verb phrases.
The state-of-the-art techniques for tackling this task are those from statistical language
learning [2, 3, 5, 20]. The basic connectionist approach for learning language is based around
the SRN, in which the network is trained to predict the next word in a sentence [7, 8, 30]
or else trained to assess whether a sentence is grammatical or not [24, 25]. However, the
simple SRN has not produced results comparable with the statistical parsers, because its
basic output representation is flat and unstructured.
The reason the simple SRN does not produce structured output representations lies with
the required number of relationships which must be output to specify a structure such as a
parse tree. For the SRN, only O(n) relationships may be output, where n is the number of
words in the sentence. However, a parse tree may specify a structural relationship between
any word and any other word, requiring O(n^2) outputs. This is not possible with the
simple SRN because the length of a sentence is unbounded, but the number of output units
is fixed. More fundamentally, even if a scheme is devised for bounding the required number
of outputs to O(n) (such as the STM mechanism discussed below), using large numbers of
output units means learning a large number of distinct mappings and thus not generalising
across these distinctions. Thus this article will focus not only on the representation of
syntactic structures in the network's output, but crucially on a demonstration that the
resulting networks generalise in an appropriate way when learning.
One example of a connectionist parser which uses multiple groups of output units to
represent the multiple structural relationships is the Hebbian connectionist network in [13].
This network explicitly enforces generalisation across structural constituents by requiring
each group of output units to be trained on a random selection of the constituents. However,
the amount of non-trainable internal structure required to enforce this generalisation and
represent the possible forms of structure is a severe limitation. In particular, this component
of the network would need to grow with the length and complexity of the sentences. This
network has only been tested on a toy sublanguage, and has not been demonstrated to scale
up to the requirements of naturally occurring text.
The two alternatives to increasing the number of output units are to increase the amount
of information represented by each unit's activation level, and to increase the amount of
time used to output the structure. Both these approaches are exemplified by the Confluent
Preorder Parser [18]. As with other holistic parsers, the Confluent Preorder Parser first
encodes the sentence into a distributed representation. This representation uses a bounded
number of units, but the use of continuous activation values allows it to, in theory, encode
a sentence of unbounded length. This distributed representation is then decoded into a
different sequence which represents the sentence's syntactic structure (in particular the
preorder traversal of the structure). This output sequence is as long as it needs to be
to output all the required structural relationships, thereby avoiding the restriction on the
SRN that there can be only O(n) outputs. But both this decoding stage and the previous
encoding stage miss important generalisations that are manifested in an explicitly structural
representation, and thus do not scale well to naturally occurring text. For the decoding
stage, structural constituents which are close together in the structure may end up being far
apart in the sequential encoding of that structure. Thus important regularities about the
relationship between these constituents will not be easily learned, while other regularities
between constituents which just happen to occur next to each other in the sequence will
be learned. 1 For the encoding stage, as sentences get longer the ability of a fixed sized
distributed representation to encode the entire sentence degrades. Indeed, [18] point out
that the representational capacity of such approaches is limited, preventing their scale up
beyond toy grammars.
Our approach is to represent structural constituents directly, rather than using one of
the above indirect encodings. Thus our connectionist architecture must be able to output
the O(n^2) structural relationships of a parse tree. To achieve this, the SSN extends the O(n)
incremental outputs of the SRN with the O(n) entity outputs provided by TSVB. But it is
not enough to simply provide such a representation; the point of using a direct encoding is
to enable the network to learn the regularities that motivated the use of a structured representation
in the first place. 2 For the SSN, the use of TSVB means that learned regularities
inherently generalise over structural constituents [15, 17], thereby capturing important linguistic
properties such as systematicity [15]. It is this generalisation ability which allows
the SSN parser presented in this article to scale up to naturally occurring text.
In this article we demonstrate how the SSN can represent the structured outputs necessary
for natural language parsing in a way that allows the SSN to learn to parse from a
corpus of real natural language text. To emphasise the generalisation ability required to
learn this task, we also introduce an extension to the SSN, namely a short-term memory
(STM) mechanism, which places a bound on the number of words which can be involved in
any further structural relationships at any given time. This improves the O(n^2) speed of the
basic SSN architecture to linear time (O(n)), but it also means that only O(n) relationships
can be output. However, unlike previous connectionist approaches to parsing, this bound
does not affect the ability of SSNs to generalise across this bounded dimension, and thus to
generalise in a linguistically appropriate way. Indeed, the performance of the SSN parser
actually improves with the addition of the STM.
2 Simple Synchrony Networks
In this section we provide a summary of Simple Synchrony Networks (SSNs) [17, 23]. We begin
by describing the basic principles of Temporal Synchrony Variable Binding (TSVB) [33]
which extend standard connectionist networks with pulsing units; pulsing units enable a
network to provide output for each entity (word) encountered, and not just for the current
one, as with the standard connectionist unit. We briefly summarise the main equations
defining the operation of TSVB networks and describe a training algorithm. Finally we
give three example SSN architectures; SSNs are defined by a restriction on the space of
possible TSVB networks.
1 Methods such as Long Short-Term Memory [19] can help learn regularities between distant items in a
sequence, but they cannot totally overcome this unhelpful bias.
2 Note that this argument applies to any domain where structured representations have been found to be
useful to express important regularities. Thus, although this article focuses on the requirements of parsing
natural language, the SSN architecture would be relevant to any task which is best thought of as a mapping
from a sequence of inputs to a structure.
2.1 Trainable TSVB networks
TSVB [33] is a connectionist technique for solving the "binding problem" through the use
of synchrony of activation pulses. The binding problem arises where multiple entities are
represented each with multiple properties; some mechanism is required to indicate which
properties are bound to which entities. For example, on seeing two objects with the properties
red, green, square and triangle, some mechanism is needed to indicate which colour
relates to which shape. One method is to provide for variables x and y to stand for the two
objects. The scene may then be unambiguously described as: red(x) ∧ green(y) ∧ square(x)
∧ triangle(y). Another mechanism is to use synchrony binding, in which two units are representing
properties bound to the same entity when they are pulsing synchronously. This
proposal was originally made on biological grounds by Malsburg [36]. Earlier implementations
of TSVB [14, 33] used non-differentiable binary threshold units, and so could not
be trained using standard connectionist techniques. In this section we describe a different
implementation of TSVB, one based on standard sigmoid activation units, which yields a
trainable implementation of TSVB networks.
In order to implement a TSVB network, the central idea is to divide each time period into
a number of phases; each phase will be associated with a unique entity. This correspondence
of phases and entities means phases are analogous to variables; all units active in the same
phase are representing information about the same entity, just as if the units were predicates
on the same variable. Through the analogy with variables, the phase numbers play no role
in determining unit activations within the network. There are two kinds of unit. The first
is the pulsing unit, which computes in individual phases independent of other phases. The
number of phases in each time period, n(t), may vary, and so the pulsing unit's output
activation is an n(t)-place vector, i.e. the activation of a pulsing unit j at time t is formed
from n(t) values, {o_j^1(t), ..., o_j^{n(t)}(t)}, where o_j^p(t) is the activation of unit j in
phase p at time t. The second type of unit is the non-pulsing unit, which computes across all
phases equally in the current time period; its output activation, o_j(t), is constant
across every phase in time t.
We define the net input and output activation separately for each type of unit within
a TSVB network, based on the type of unit it is receiving activation from. We index the
pulsing units in the network by a set of integers U_ρ, and the non-pulsing units by a set of
integers U_μ. Each non-input unit, j, receives activation from other units, indexed by the set
Inputs j . Recurrent links are handled, without loss of generality, by adding context units to
the network; each context unit's activation value is that which another unit had during the
previous time step. The function, C, maps each unit to its associated context unit. Units
are linked by real-valued weights; the link from unit i to unit j having the weight w ji .
Given these definitions, the output activation of a pulsing unit, j ∈ U_ρ, in phase p at
time t is defined as follows:
net_j^p(t) = Σ_{i ∈ Inputs_j} w_{ji} · R^p(i, t)
o_j^p(t) = σ(net_j^p(t)) if j is not an input unit, and o_j^p(t) = in_j^p(t) if j is an input unit,
with the standard sigmoid function σ(x) = 1 / (1 + e^{-x}). Note that the net function for
each phase p takes activation from other pulsing units only in phase p, or from non-pulsing
units, whose activation is the same across all phases. This is achieved by the function R^p(i, t),
which represents the activation of unit i in phase p at time t: non-pulsing units (i ∈ U_μ)
have constant activation across each time period, so their activation is o_i(t); pulsing units
(i ∈ U_ρ) output a separate activation for each phase of the time period, so their activation
is o_i^p(t).
The definition of the output activation of a non-pulsing unit is complicated by the
possibility of a non-pulsing unit having inputs from pulsing units. In this case activations
from an unbounded number of phases would have to be combined into a single input value.
As discussed in [22], including such links is not necessary, and decreases the effectiveness
of the architecture. Thus they are not allowed in Simple Synchrony Networks. Given this
simplification, the output activation of a non-pulsing unit, j ∈ U_μ, at time t is defined as:
net_j(t) = Σ_{i ∈ Inputs_j} w_{ji} · o_i(t)
o_j(t) = σ(net_j(t)) if j is not an input unit, and o_j(t) = in_j(t) if j is an input unit.
Note that these non-pulsing units act just like a standard unit within an SRN.
In order to train these TSVB networks, we use a novel extension of Backpropagation
Through Time (BPTT) [31]. When applying BPTT to a standard recurrent network, one
copy of the network is made for each time step in the input sequence. Extending BPTT to
TSVB networks requires a further copy of the network for every phase in the time period.
The unfolding procedure copies both pulsing and non-pulsing units once per time period
and the pulsing units are copied additionally once per phase. As with standard BPTT,
the unfolded network is a feed-forward network, and can be trained using backpropagation.
However, the unfolded network is a set of copies of the original and so, as training progresses,
the changing weights must be constrained to ensure that each copy of a link uses the same
weight; this is achieved by summing all the individual changes to each copy of the link.
2.2 SSN architectures
Three example SSN architectures are illustrated in Figure 1. The figure depicts layers of
units as rectangles or blocks, each layer containing however many units the system designer
chooses. Rectangles denote layers of non-pulsing units, and blocks denote layers of pulsing
units. Links between the layers (solid lines) indicate that every unit in the source layer is
connected to every unit in the target layer. As discussed above, recurrence is implemented
with context units, just as with SRNs [7], and the dotted lines indicate that activation from
each unit in the source layer is copied to a corresponding context unit in the target layer.
All three architectures possess a layer of pulsing input units and a separate layer of
non-pulsing input units. The procedure for inputting information to the SSNs is only a
little different to that in standard connectionist networks. Consider a sequence of inputs
'a b c ...'. A different pattern of activation is defined for each different input symbol, for
example activating one input unit to represent that symbol and having the rest of the input
units inactive (i.e. a localist representation). With an SRN the sequence of input patterns
would be presented to the network in consecutive time periods.
Figure 1: Three Simple Synchrony Networks. Rectangles are layers of non-pulsing units, and blocks are layers of pulsing units.
Thus, in time period 1, the SRN would receive the pattern for symbol "a" on its input; in time period 2, it would
receive the pattern for symbol "b" on its input, etc. With the SSNs, the non-pulsing input
units operate in just this way. The input symbol for each time period is simply presented
on the non-pulsing units. For the pulsing units, each input symbol is introduced on a new
phase, i.e. one unused by the input sequence to that point. Thus, in time period 1, the SSN
would receive the pattern for symbol "a" on its pulsing inputs in phase 1; in time period 2,
the pattern for the symbol "b" would be presented on its pulsing inputs in phase 2; and so
on.
The three architectures illustrated in Figure 1 cover three different options for combining
the information from the non-pulsing inputs with the information from the pulsing inputs.
Essentially, the non-pulsing units contain information relevant to the sentence as a whole,
and the pulsing units information relevant to specific constituents. This information can be
combined in three possible ways: before the recurrence (type A), after (type B) and both
(type C). Given the constraint discussed in Section 2.1 that SSNs do not have links from
pulsing to non-pulsing units, these three types partition the possible architectures. The
combination layer in types B and C between the recurrent layers and the output layer is
optional. We empirically compare the three illustrated architectures in the experiments in
Section 4.
3 Syntactic Parsing
Syntactic parsing of natural language has been the center of a great deal of research because
of its theoretical, cognitive, and practical importance. For our purposes it provides a standard
task on real world data which requires a structured output. A syntactic parser takes
the words of a sentence and produces a hierarchical structure representing how those words
fit together to form the constituents of that sentence. In this section we discuss how this
structure can be represented in a SSN and some of the implications of this representation.
3.1 Statistical parsers
Traditionally syntactic parsing was addressed by devising algorithms for enforcing syntactic
grammars, which define what is and isn't a possible constituent structure for a sentence.
More recent work has focused on how to incorporate probabilities into these grammars [2, 20]
and how to estimate these probabilities from a corpus of naturally occurring text. The
output structure is taken to be the structure with the highest probability according to the
estimates. For example, probabilistic context-free grammars (PCFGs) [20] are context-free
grammars with probabilities associated with each of the rewrite rules. The probability of
an entire structure is the multiplication of the probabilities of all the rewrite rules used to
derive that structure. This is a straightforward translation of context-free grammars into
the statistical paradigm. The rule probabilities can be simply multiplied because they are
assumed to be independent, just as the context-free assumption means the rules can be
applied independently. Work in statistical parsing focuses on finding good independence
assumptions, and it often takes as its starting point linguistic claims about the basic building
blocks of a syntactic grammar.
Our SSN parser can be considered a statistical parser, in that the network itself is a
form of statistical model. However there are some clear differences. Firstly, there is no
"grammar" in the traditional sense. All the network's grammatical information is held
implicitly in its pattern of link weights. More fundamentally, there are fewer independence
assumptions. The network decides for itself what information to pay attention to and what
to ignore. Statistical issues such as combining multiple estimators or smoothing for sparse
data are handled by the network training.
But as is usually the case with one-size-fits-all machine learning techniques, more domain
knowledge has gone into the design of the SSN parser than is at first apparent. In
particular, while very general, the input/output representation has been designed to make
linguistic generalisations easy for the network to extract. For example, the SSN incrementally
processes one word at a time and the output required at each time is related to that
word. This not only reflects the incremental nature of human language processing, it also
biases the network towards learning word-specific generalisations. The word-specific nature
of linguistic generalisations is manifested in the current popularity of lexicalised grammar
representations, as in [5]. Also, the short-term memory mechanism discussed later in this
section is motivated by psycholinguistic phenomena. Other particular motivations will be
discussed as they arise.
3.2 Structured output representations for SSNs
The syntactic structure of a natural language sentence is a hierarchical structure representing
how the sentence's words fit together to form constituents, such as noun phrases and
relative clauses. This structure is often specified in the form of a tree, with the constituents
as nodes of the tree and parent-child relationships representing the hierarchy; all the words
that are included in a child constituent are also included in its parent constituent. Thus
we can output a syntactic structure by outputting the set of constituents and the set of
parent-child relationships between them and between them and the words.
The difficulty with outputting such a structure arises because of the number of parent-child
relationships. On linguistic grounds, it is safe to assume that the number of constituents
is linear in the number of words. 3 Thus we can introduce a bounded number of
entities with each word, and have one entity for each constituent. 4 The problem is that this
still leaves O(n^2) (quadratic) possible parent-child relationships between these O(n) (linear)
constituents. The solution is to make full use of both the O(n) times at which words are
presented to the network and the O(n) entities at any given time. Thus we cannot wait
until all the words have been input and then output the entire structure, as is done with
a symbolic parser. Instead we must incrementally output pieces of the structure such that
by the time the whole sentence has been input the whole structure has been output.
3 This simply means that there aren't unbounded numbers of constituents that contain the same set of
words (unbranching constituents).
4 Alternatively, we could introduce one entity with each word, and have one entity represent a bounded
number of constituents. As will be discussed below, we are assuming that the bound is 1, so these two
alternatives are equivalent.
The first aspect of this solution is that new constituents are introduced to the SSN with
each word that is input. In other words, during each period new phases (i.e. entities) must
be added to the set of phases that are being processed. As illustrated in Figure 1, SSNs have
two banks of input units, pulsing and non-pulsing. The non-pulsing input units hold the
part of speech tag for the word being input during the current period. (For simplicity we do
not use the words themselves.) Using the pulsing input units, this word-tag is also input to
the new phase(s) introduced in the current period. In this way the number of constituents
represented by the network grows linearly with the number of words that have been input to
it. For the experiments discussed below we make a slightly stronger assumption (for reasons
detailed below), namely that only one constituent is added with each word-tag. This means
that the network can only output parse trees which contain at most as many constituents as
there are words in the sentence. This simplification requires some adjustments to be made
to any preparsed corpus of naturally occurring text, but is not linguistically unmotivated;
the result is equivalent to a form of dependency grammar [27], and such grammars have
a long linguistic tradition. We will return to the adjustments required to the training set
when considering the corpus used in the experiments in Section 4.
Now that we have O(n) constituents, we need to ensure that enough information about
the structure is output during each period so that the entire structure has been specified
by the end of the sentence. For this we need to make use of the pulsing output units
illustrated in Figure 1. Our description of the parsing process refers to the example in
Figure 2, which illustrates the sentence 'Mary saw the game was bad'. This sentence is
represented as a sequence of word-tags as 'NP VVD AT NN VBD JJ', and the sentence
structure (S) contains separate constituents for the subject noun (N) and object clause (F),
which contains a further noun phrase (N). Note that this parse tree has had some
constituents conflated to comply with the constraint that there be only one constituent per
word-tag; its relation to standard parse-tree representations is covered in Section 4.
The pulsing units in the network, during the n th time period, provide an output for
each of the n constituents represented by the SSN at that period. As mentioned above, we
obviously want these outputs to relate to the n th word-tag, which is being input during that
period. So one thing we want to output at this time is the parent of that word-tag within
the constituent structure. Thus we simply have a parent output unit, which pulses during
each period in the phase of the constituent which is the parent of the period's word-tag.
Examples of these parent relationships are shown in Figure 2, and examples of these outputs
are shown in Table 1. For the experiments discussed below we assume that each constituent
is identified by the first word-tag which attaches to it in this way. So if this is the first word-
tag to attach to a given constituent, then its parent is the constituent introduced with that
word-tag, as for the NP, AT, VVD and VBD in the example. Otherwise the word-tag's
parent is the constituent introduced with its leftmost sibling word, as for the NN and JJ
in the example. In these cases the newly introduced constituent simply does not play any
role in the constituent structure.
The parent output unit is enough to specify all parent-child relationships between constituents
and word-tags, leaving only the parent-child relationships between constituents.
For these we take a maximally incremental approach; such a parent-child relationship needs
to be output as soon as both the constituents involved have been introduced into the SSN.
There are two such cases, when the parent is introduced before the child and when the
child is introduced before the parent. The first case is covered by adding a grandparent
output unit. This output specifies the grandparent constituent for the current word-tag,
as illustrated in Figure 2 and Table 1 for the VBD. This grandparent constituent must be
the parent of the constituent which is the parent of the current word-tag.
Figure 2: A sample parse tree for 'Mary saw the game was bad'. The solid lines indicate the parse tree itself, the dotted and dashed lines the Parent, Sibling and Grandparent relationships between the words and nodes.
Table 1: The input information and corresponding Grandparent/Parent/Sibling outputs for the sample structure shown in Figure 2.
Time Period | Pulsing Inputs (Phase) | Non-pulsing Inputs | Structure Outputs (Phase)
1 | NP (1) | NP | P (1)
2 | VVD (2) | VVD | P (2), S (1)
3 | AT (3) | AT | P (3)
4 | NN (4) | NN | P (3)
5 | VBD (5) | VBD | P (5), S (3), G (2)
6 | JJ (6) | JJ | P (5)
In other words,
the combination of the parent output and the grandparent output specifies a parent-child
relationship between the word-tag's grandparent and its parent. The second case is covered
similarly by a sibling output unit. This output specifies any constituent which shares the
same parent as the current word-tag, as illustrated in Figure 2 and Table 1 for the VVD
and VBD. The combination of the parent output and the sibling output specifies a parent-child
relationship between the word-tag's parent and each of its siblings (possibly more than
one). In the experiments below the grandparent and sibling outputs are only used during
the period in which the word-tag's parent is first introduced.
The interpretation of the outputs is best described with reference to a detailed trace
of activation on the input and output units, as provided in Table 1 for the structure in
Figure 2. The first two columns of the table (other than the period numbers) show the
inputs to the SSN. The first column shows the activation presented to the bank of pulsing
input units. Each row of the column represents a separate time period, and the column
is divided into separate phases, 6 being sufficient for this example. Each cell indicates the
information input to the network in the indicated time period and phase; the word-tag
appearing in the cell indicates which of the pulsing input units is active in that time period
and phase. The second column shows the activation presented to the bank of non-pulsing
units. No phase information is relevant to these units, and so the word-tag given
simply indicates which of the non-pulsing input units is active in each time period.
The third column in the table shows the activation present on the output units. In
every period the parent (P) output is active in the phase for the constituent which the
period's word-tag is attached to. In period 1 an NP (proper noun) is input and this is
attached to constituent number 1. This is the constituent which was introduced in period
1, and it is specified as the word-tag's parent because no previous word-tags have the same
parent (there being no previous word-tags in this case). The same thing happens with the
parent outputs in periods 2, 3, and 5. In period 4 the word-tag NN (noun) is input and
attached to constituent 3, which is the constituent which was introduced during the input
of AT (article) in the previous period. This specifies that this AT and NN have the same
parent, namely constituent 3. The same relationship is specified in period 6 between the JJ
(adverb/adjective) being input and the preceding VBD (verb).
Given the set of constituents identified by the parent outputs, the grandparent (G) and
sibling (S) outputs specify the structural relationships between these constituents. Because
we only specify these outputs during the periods in which a new constituent has been
identified, only periods 2, 3, and 5 can have these outputs (period 1 has no other constituents
to specify relationships with). In period 2, constituent 1 is specified as the sibling of the
VVD word-tag being input. Thus, since constituent 2 is the parent of VVD, constituent 2
must also be the parent of constituent 1. Constituent 2 does not itself have a parent
because it is the root of the tree structure. In period 3 no siblings or grandparent are
specified because constituent 3 has no constituent children and its parent has not yet been
introduced. In period 5 the parent of constituent 3 is specified as constituent 5 through
the sibling output, as above. Also in period 5 the grandparent output unit pulses in phase
2. This specifies that constituent 2 is the grandparent of the VBD word-tag being input
in period 5. Thus constituent 2 is the parent of constituent 5, since constituent 5 is the
parent of VBD. Note that the use of phases to represent constituents means that no special
mechanisms are necessary to handle this case, where one constituent (F) is embedded within
another (S).
In addition to structural relationships, natural language syntactic structures typically
also include labels on the constituents. This is relatively straightforward for SSNs to achieve.
We use an additional set of pulsing output units, one for each label. The network indicates
the label for a given constituent by pulsing the label's unit in phase with the new constituent
when it is introduced to the parse tree.
So far we have described the target output for a SSN, in which units are either pulsing
or not. Being based on SRNs, the actual unit outputs are of course continuous values
between 0 and 1. There are a variety of ways to interpret these patterns of continuous
values as a specification of constituency. In the experiments discussed below we simply
threshold them; all units with activations above 0.6 are treated as 'on'. The indicated set
of structural relationships is then converted to a set of constituents, which may then be
evaluated using the precision and recall measures standard in statistical language learning;
precision is the percentage of output constituents that are correct, and recall the percentage
of correct constituents that are output.
The important characteristic of SSNs for outputting structures is that just three output
units, grandparent, parent, and sibling, are sufficient to specify all the structures allowed by
our assumptions that at most one constituent needs to be introduced with each word. 5 We
call this the GPS representation. As the SSN proceeds incrementally through the sentence,
at each word-tag it outputs the word-tag's parent and (if it is the parent's first word-child)
all the parent-child relationships between that parent and the previous constituents in the
sentence. By the time the SSN reaches the last word of the sentence, its cumulative output
will specify all the parent-child relationships between all the constituents, thus specifying
the entire hierarchical structure of the sentence's constituency.
3.3 Inherent generalisations
In Section 2 we described the SSN, its training algorithm and a variety of possible SSN
architectures. Because the definition of TSVB units retains and augments the properties of
standard feed-forward and recurrent connectionist networks, SSNs retain the advantages of
distributed representations and the ability to generalise across sequences. The SSN has also
been shown to support a class of structured representation. Although this representation
in itself is important in extending the range of domains to which connectionist networks
may be applied, the SSN's use of phases to identify structural constituents also confers a
powerful generalisation ability, specifically the ability to generalise learnt information across
structural constituents.
As an example of this kind of generalisation, consider what is required to generalise from
the sentence "John loves Mary" to the sentence "Mary loves John". In both sentences the
network needs to output that "John" and "Mary" are noun phrases, that the noun phrase
preceding the verb is the subject, and that the noun phrase after the verb is the object. In
order to generalise, it must learn these four things independently of each other, and yet for
any particular sentence it must represent the binding between each constituent's word and
its syntactic position.
The SSN achieves this generalisation ability by using temporal phases to represent these
bindings, but using link weights to represent these generalisations. Because the same link
weights are used for each phase, the information learned in one phase will inherently be
generalised to constituents in other phases. Thus once the network has learned that the
input "Mary" correlates with being a noun phrase, it will produce a noun phrase output
regardless of what other features (such as syntactic position) are bound to the same phase.
5 As mentioned above, the set of allowable structures can be expanded either by increasing the number
of entities introduced with each word, or by expanding the number of structural relationships so as to allow
each entity to represent more than one constituent. In either case the number of constituents still must be
linear in the number of words. This constraint could only be relaxed by allowing unbounded computation
steps per word input.
Similarly the network will learn that a noun phrase preceding a verb correlates with the
noun phrase being the subject of the verb, regardless of what other features (such as the
word "Mary") are bound to the same phase. Then, even if the network has never seen
"Mary" as a subject before, the application of these two independent rules in the same
phase will produce a pattern of synchronous activation that represents that "Mary" is the
subject. Henderson [15] has shown how this inherent ability of TSVB networks to generalise
across constituents relates to systematicity [9].
3.4 Short-term memory
In learning-based systems it is the system's ability to generalise from training sets to testing
sets that determines its value. This implies that the real value of the SSN is in its ability
to generalise over constituents, and not its ability to output O(n^2) structural relationships.
This suspicion is confirmed when we consider some specific characteristics of our domain,
natural language sentences. It has long been known that constraints on people's ability to
process language put a bound on constructions such as centre embedding [4], which are the
only constructions which would actually require allowing for O(n^2) structural relationships.
For example, 'the rat that the cat that the dog chased bit died' is almost impossible to
understand without pencil and paper, but 'the dog chased the cat that bit the rat that
died' is easy to understand. The basic reason for this difference is that in the first case all
the noun phrases need to be kept in memory so that their relationships to the later verbs
can be determined, while in the second case each noun phrase can be forgotten as soon as
the verb following it has been seen.
Motivated by this observation, and by work showing that in many other domains people
can only keep a small number of things active in memory at any one time [28], we have
added a "short-term memory" (STM) mechanism to the basic SSN architecture. This
mechanism improves the SSN's efficiency to O(n) time. The definition of the SSN so far
has stated that each word-tag input to the network will be input into a new phase of the
network. Information is then computed for all of these phases in every subsequent time
period. However, the bound on the depth of centre embedding implies that, in any given
time period, only a relatively small number of these phases will be referred to by later parts
of the parse tree. The trick is to work out which of the phases are going to be relevant to
later processing, and only compute information for these phases. The idea we use here is a
simple one, based on the idea of the audio-loop proposed by Baddeley [1].
Instead of computing all phases in the current time period, we instead compute only
those in a STM queue. This queue has a maximum size, which is the bound on STM referred
to above. When a new phase is introduced to the network, this phase is added to the head
of the queue. When a phase is referred to by one of the output units, that phase is moved to
the head of the queue. This simple mechanism means that unimportant phases, i.e. those
which are not referred to in the output, will move to the end of the queue and be forgotten.
Note that, in training, the target outputs are used to determine which phases are moved to
the head of the queue, and not the actual outputs, thereby ensuring that the network learns
only about the relevant phases. Also, information held by the network about word-tags and
constituents is specific to phases, not position in STM or the input word order. Therefore,
items cannot be confused during the reordering process which occurs in the STM. Indeed,
it is precisely this use of phases to represent constituents which allows the SSN to keep its
ability to generalise over constituents and still to parse in O(n) time.
4 Experiments in Syntactic Parsing with SSNs
In this section, we describe some experiments training a range of SSNs to parse sentences
drawn from a corpus of real natural language. The experiments demonstrate how the SSN
may be used for connectionist language learning with structured output representations.
Also, the fact that the SSN's performance is evaluated in terms of the precision and recall
of constituents means that the SSN's performance may be directly compared with statistical
parsers. We first describe the corpus used, provide some results, and then give some analysis
of the training data.
4.1 A natural language corpus
We use the SUSANNE corpus as a source of preparsed sentences for our experiments. The
SUSANNE corpus is a subset of the Brown corpus, and is preparsed according to the
classification scheme described in [32]. In order to use the SUSANNE corpus,
we convert the provided information into a format suitable for presentation to our parser.
The SUSANNE scheme provides detailed information about every word and symbol
within the corpus. We use the word-tags as input to the network, due to time constraints
and the limited size of the corpus. The word-tags in SUSANNE are a detailed extension
of the tags used in the Lancaster-Leeds Treebank [12]. In our experiments, the simpler
Lancaster-Leeds scheme is used. Each word-tag is a two or three letter sequence, e.g. 'John'
would be encoded 'NP', the articles 'a' and 'the' are encoded 'AT', and verbs such as 'is' are
encoded 'VBZ'. Each word-tag is input to the network by setting one bit in each of three
banks of input; each bank representing one letter position, and the set bit indicating which
letter or space occupies that position.
In order to construct the set of constituents forming the target parse tree, we first
need to extract the syntactic constituents from the wealth of information provided by the
classification scheme. This includes information at the meta-sentence level,
which can be discarded, and semantic relations between constituents.
Finally, as described above, the GPS representation used in our experiments requires
that every constituent have at least one terminal child. This limitation is violated by few
constructions, though one of them, the S-VP division, is very common. For example, in
the sentence 'Mary loves John', a typical encoding would be: [S [NP Mary] [VP [V loves]
[NP John]]]. The linguistic head of the S (the verb "loves") is within the VP, and so the
S does not have any tags as immediate children. To address this problem, we collapse the
S and VP into a single constituent, producing: [S [NP Mary] [V loves] [NP John]]. The
same is done for other such constructions, which include adjective, noun, determiner and
prepositional phrases. With the corpus used here, the number of changes is fairly minor 6 .
4.2 Results
One of the SUSANNE genres (genre A for press reportage) was chosen for the experiments,
and training, cross-validation and test sets were selected at random from the total in the
ratio 4:1:1. Sentences of fewer than 15 word-tags were selected from the training set, to
form a set of 265 sentences containing 2834 word-tags, an average sentence length of 10.7.
Similarly, a cross-validation set was selected, containing 38 sentences of 418 word-tags, an average sentence length of 11.0, and a test set with 34 sentences, containing 346 words, with an average sentence length of 10.2.
6 Out of 1,580 constituents, 265 have been lost to the S-VP change, 28 similar changes were made to relative clauses, and only 12 adjustments were required to other non-verb constructions - most of the verb clauses could be artificially reintroduced on output, which leaves around irrecoverable changes to the corpus structure.
The three SSN types A, B and C were all tested. Twelve networks were trained from
each type, consisting of four sizes of network (between 20 and 100 units in each layer), each
size was tested with three different STM lengths (3, 6 and 10). Each network was trained
on the training set for 100 epochs, using a constant learning rate η of 0.05.
Table 2 gives figures for five networks. For each network, the performance on the three
datasets (training, cross-validation and test) are given under three categories: the number
of correct sentences, a measure of the number of correct constituents (precision and recall)
and the percentage of correct responses on each output unit. The measure of precision and
recall used for constituent evaluation is a standard measure used in statistical language
learning [20]. The precision is the number of correct constituents output by the parser
divided by the total number of constituents output by the parser. The recall is the number of
correct constituents divided by the number in the target parse. Each constituent is counted
as correct if it contains the same set of words as the target, and has the same label. 7 The
presence of this measure in our results is significant because it confirms the similarity of the
input-output representations used by the SSN with those used by statistical parsers, and
therefore some direct comparisons can be made: we return to this point in Section 5.
Considering the figures in the table, the type A networks are not particularly successful,
with only the rare sentence being correctly parsed. Results for the best performing type A
network are given in the first row of the table. The type B and C networks were much more
successful. For each type, the results from two networks are given: the first having the best
average precision/recall measure, and the second having better results on the individual
outputs (i.e. G, P, S and label). Both the type B and C networks produce similar ranges
of performance: around 25% of sentences are correct and between 70-80% is scored in
average precision/recall by the better networks. In particular, the percentage correct for
the constituent labels and the P output exceed 90%. Also notable is that the percentage
results of the networks are similar across the three datasets, indicating that the network
has learnt a robust mapping from input sentences to output parse trees. This level of
generalisation (around 80% average precision/recall) is similar to that achieved by PCFG
parsers [20], although for a fair comparison identical experiments must be performed with
each algorithm: again, we return to this point in Section 5.
4.3 Analysis of results
The basic experimental results above have provided both detailed values of the performance
of the network with specific output relationships as well as their combined performance in
terms of the constituent-level measures of precision and recall. Here, these results are
broken down and compared to see the progress of learning of the networks over time with a
comparison of the effect of the STM queue, and a table of the actual dependencies present
in the data itself.
4.3.1 Effects of STM length
The effects of STM length can be seen by plotting the performance of one type of network
with varying sizes of STM. This is done in Figure 3, in which the performance of a type C
network with 80 units in every hidden layer is shown for the three sizes of STM, i.e. 3, 6 and 10.
7 Precision may be compared with the standard measure of 'errors of commission', and recall with the standard 'errors of omission'.
Table 2: Comparison of network types in learning to parse. Columns give the number of correct sentences, the constituent precision and recall, and the percentage correct on the G, P, S and label outputs.
Network | Dataset | Correct sentences | Precision | Recall | G | P | S | Label
Type A (STM of 10, 100 units per layer) | Train | 1/265 (0%) | 34.7 | 31.3 | 35.6 | 65.9 | 20.8 | 89.3
Type A (STM of 10, 100 units per layer) | Cross | 0/38 (0%) | 31.8 | 29.7 | 30.5 | 65.2 | 18.7 | 84.7
Type B, first network | Train | 63/265 (24%) | 69.9 | 68.7 | 74.1 | 94.3 | 73.0 | 97.5
Type B, first network | Cross | 10/38 (26%) | 71.5 | 71.2 | 74.8 | 95.6 | 70.7 | 96.4
Type B, second network | Train | 62/265 (24%) | 66.0 | 64.4 | 75.2 | 92.3 | 84.6 | 97.5
Type B, second network | Cross | 12/38 (32%) | 65.3 | 64.2 | 82.4 | 92.6 | 85.3 | 96.2
Type C, first network | Train | 64/265 (24%) | 72.6 | 71.7 | 70.8 | 95.7 | 72.1 | 98.4
Type C, first network | Cross | 9/38 (24%) | 74.4 | 73.8 | 71.8 | 97.3 | 70.7 | 97.8
Type C, second network | Train | 56/265 (15%) | 65.5 | 63.7 | 68.1 | 92.8 | 82.3 | 97.1
Type C, second network | Cross | 8/38 (21%) | 63.6 | 61.1 | 68.7 | 91.5 | 81.3 | 95.5
The separate graphs show the constituent-level performance of the network, in terms
of average precision/recall, and the performance of the separate output units, grandparent,
parent, sibling and constituent label (this latter, though a group of units, is treated as
a single output). The graphs demonstrate that, at the constituent-level, the shorter STM
lengths perform better. However, the longer STM lengths can achieve greater accuracy with
specific outputs, in particular with respect to the sibling output. This is to be expected, as
the longer lengths preserve more information and so have a greater likelihood of containing
the phase referred to by the specific output.
4.3.2 Dependency lengths in the data set
An important concern in connectionist language learning has been the length of dependency
which the SRN can learn [8]. In this section we provide an analysis of our data set to see
exactly what dependency lengths are present in a corpus of naturally occurring text.
Table 3 contains an analysis of the lengths of each dependency contained in sentences
with a maximum length of words. The length of a dependency is the number of words
between the current word and that indicated by each output. The table lists separately the
lengths for each of the output units, with the final two columns providing a total number
and percentage for that dependency length across the whole corpus. The surprising result
of this table is that most of the dependencies (almost 70%) relate to the current word or
its predecessor. There is a sharp tailing off of frequency as we consider longer dependencies
- the table only shows the shortest lengths; the lengths tail off gradually to a length of 25
words.
Figure 3: Comparison of the effects of STM length on a type C network with 80 units in every hidden layer. Each panel plots percentage correct against training epochs; the panels cover average precision/recall, the grandparent, parent and sibling relationships, and the constituent label.
Table 3: Dependencies by type and length in sentences with fewer than words. Not all dependencies are shown, the greatest length is 25 words. Number of sentences: 716, Number of words: 13,472. Mean dependency lengths: G 2.7, P 0.8, S 3.3, overall 1.6.
Table 4: Dependencies by type and length across the STM for the sentences from Table 3. Mean dependency lengths: G 2.0, P 0.7, S 2.7, overall 1.2. At STM position 4: G 284, P 127, S 78, total 489 (2.5% of dependencies at this length, 96.1% at shorter lengths).
With the STM, the network can only process a limited number of words at any one time,
and so the length of dependency which the network can handle is altered. With a STM,
the length of dependency will be the number of places down the queue which each phase
has progressed before being required. So, in Table 4, we provide a similar analysis to that
above, but this time, instead of counting the length as the number of intervening phases, we
count the length of each dependency as the position which that phase occupies in the STM.
Thus, if a phase is in the third position of the STM when it is required, then the length
of the dependency will be given as three. This table shows a similar effect in the range of
dependency lengths to that in Table 3, although there is a greater concentration in the
shortest lengths, as desired. The limited number of longer dependencies (not shown) still
extend to length 25; these are isolated words or punctuation symbols which are referred to
only once.
4.3.3 Conclusions on experiments
The impact of the STM is quite considerable in respect to training times, reducing them by
at least an order of magnitude. As discussed above, the actual lengths of dependencies encountered
by the network are not changed much by the addition of a STM. The experiments
show that longer STMs achieve better performance on some specific outputs of the network,
however the shorter STM still yields the best level of constituent accuracy. This difference
is of interest, as the choice of STM length depends on one's measure of performance. A
better performance is achieved on specific outputs with a longer STM, because the desired
output is more likely to appear in the STM. But a better performance is achieved at the
constituent level, based on a competition between different outputs, because the smaller
STM reduces the likelihood of spurious outputs competing with the correct ones. Note
also the domain specificity of this last point: the smaller STM only works because natural
language itself has a bias towards shorter dependencies.
5 Discussion
This article has focussed on how SSNs can use an incremental representation of constituent
structure in order to learn to parse. In addition, we have shown that arguments about
the generalisation abilities necessary to learn to parse are distinct from arguments about
bounds on those abilities through the introduction of the STM mechanism, which enhances
the efficiency of the basic SSN without harming its ability to learn to parse. In this section we
consider in brief the importance of these results for connectionist language learning, and how
our model compares with other extensions to SRNs for handling structured representations.
First, the experiments described above demonstrate how a connectionist network can
successfully learn to generate parse trees for sentences drawn from a corpus of naturally
occurring text. This is a standard task in computational language learning using statistical
methods. Because the same performance measures (precision/recall) can be applied to the
output of the SSN as with a typical statistical method, such as the simple Probabilistic Context
Free Grammar (PCFG), direct comparisons can be made between the two approaches.
For instance, the simple PCFG can achieve around 72% average precision/recall [20] on
parsing from sequences of word-tags. In comparison, the SSN in the above experiments
achieves 80% average precision/recall when trained and tested on sentences with fewer than
15 words. However, this is not a fair comparison, as the corpora sizes and contents are
dissimilar.
In an extension to the work here, Henderson [16] has presented a slight variant of the
basic SSN model and compared its performance directly with that of PCFGs on identical
corpora. In those results, the PCFG, due to the restricted size of the training set, was
only able to parse half the test sentences, with a precision/recall figure of 54%/29%. In
comparison, the SSN was able to parse all the sentences, and yielded a performance of
65%/65%. Even if we only count the parsed sentences, the PCFG only had a performance of
54%/58%, compared to the SSN's performance of 68%/67% on that subset. The variations
introduced by Henderson [16] to the SSN mostly affect the input layer. In this article,
the pulsing inputs to the SSN receive input only for newly introduced phases, requiring
the network to remember the previous periods' input. In [16], the pulsing input from the
previous period is carried forward in its particular phase. An additional pulsing input unit
is then used to distinguish the newly introduced phase from the others. Because of this
change in input representation, the results in [16] have been achieved with a type A SSN.
As noted in the Introduction to this article, experiments with natural language using
SRNs have typically used a restricted form of input representation, either predicting the next
word in a sentence [6, 8, 30] or assessing whether it is grammatical [24, 25]. Our extension
to the SRN, the SSN, corrects this limitation by enhancing the range of output representations
to include structured parse trees. Our approach is designed to generate a structured
representation given a sequence of input data. The generation aspect of this task largely
distinguishes our approach from other extensions to SRNs for handling structured data. For
example, the Backpropagation Through Structure (BPTS) algorithm [35, 11] assumes that
the network is being trained to process structured input data, either for classification [10]
or for transformation [11]. The transformation task is closer to that of training a parser,
but, as the conclusion of [11] makes clear, the use of BPTS relies on the input and output
having the same structural form, which prevents such networks being directly applicable to
the task of generating a parse tree from a sequence of input word-tags. However, there is
a relationship between the BPTS and the SSN in terms of the SSN's temporal structure.
The SSN is trained using an extension of Backpropagation Through Time, where the network
is unfolded over its two temporal dimensions, period and phase. This is one specific
instance of BPTS, where the structure in question is the temporal structure. However, the
mapping from this temporal structure to the structures in the domain is done as part of
the interpretation of the network's output activations (using the GPS representation), and
thus does not fall within the BPTS framework. In the broader context of transforming
structured data, the SSN and the incremental parse tree representation described in this
article thereby offer one way for a connectionist network to generate structured output data
from an unstructured input.
Apart from models based on SRNs, other forms of connectionist network have been
proposed for handling the types of structured information required for language learning.
For instance, Hadley and Hayward [13] propose a highly structured network which learns
to generalise across syntactic structures in accordance with systematicity [9]. However,
this approach is limited due to the amount of non-trainable internal structure required to
enforce the appropriate generalisations. As discussed in Section 3.3 (and Henderson [15]),
the SSN relies on temporal synchrony to produce a similar effect, which renders the generalisation
ability of SSNs largely independent of its specific architectural details. Indeed,
in experiments training a type B SSN on the same recursive grammar as that of Hadley
and Hayward [13], a similar ability to generalise across syntactic structures was demonstrated
[21, 22].
As the above discussion makes clear, the identical SSN network learns effectively with
both a specific toy grammar and a corpus of naturally occurring text. This is because the
added ability to generalise over constituents allows the SSN to generalise in a more linguistically
appropriate way, and thus deal with the high variability in naturally occurring
text. This transfer from a toy domain to a real corpus of sentences sets the SSN apart
from a number of other proposals for connectionist language learning, which tend to be
limited to applications involving toy grammars alone. These include the approaches that
encode sentences recursively into a distributed representation, such as holistic parsers [18]
or labelled-RAAMs [34]. The number of cycles in this recursive encoding depends on the
size of the parse tree, which means that the performance degrades as the complexity of the
sentences increases. This makes it difficult to apply these approaches to naturally occurring
text. In our SSN we address this specific problem by not attempting to encode everything
into a distributed representation prior to extracting the parse tree, but by incrementally
outputting pieces of the parse tree. This incremental approach to parsing presents a different
model of connectionist parsing, one more similar to classical deterministic parsers, as
described in Marcus [26], for example.
6 Conclusion
This article has described the use of Simple Synchrony Networks (SSNs) for learning to
parse samples of English sentences drawn from a corpus of naturally occurring text. We
have described an input-output representation which enables the SSN to incrementally
output the parse tree of a sentence. This representation is important in demonstrating how a
connectionist architecture can manipulate hierarchical and recursive output representations.
We have also introduced an important mechanism for reducing the O(n²) time requirement of the basic
SSN architecture to linear time. This mechanism, based on the concept of a short-term
memory (STM), enables the SSN to retain only the necessary constituents for processing.
In the experiments we have demonstrated that a number of SSN architectures provide
reliable generalisation performance in this domain.
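As a purely illustrative sketch of the kind of bounded short-term memory described above (the capacity value and the drop-oldest eviction policy are our assumptions, not details taken from the article), a queue of constituents with constant per-word cost might look like this in Python:

```python
from collections import deque

class STMQueue:
    """Bounded short-term memory: keeps at most `capacity` constituents.

    When a new constituent is introduced and the queue is full, the oldest
    one is evicted, so the per-word cost stays constant (linear overall).
    """
    def __init__(self, capacity=10):          # capacity value is illustrative
        self.items = deque(maxlen=capacity)   # deque drops from the left when full

    def introduce(self, constituent):
        self.items.append(constituent)        # newest constituent enters the STM

    def accessible(self):
        return list(self.items)               # only these can still be attached to

stm = STMQueue(capacity=3)
for c in ["NP-1", "VP-1", "NP-2", "PP-1"]:
    stm.introduce(c)
print(stm.accessible())                       # ['VP-1', 'NP-2', 'PP-1']
```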
The theoretical and experimental results of this article go beyond language learning. It
is apparent that the SSN architectures, modelled on the Simple Recurrent Network, are not
specifically adapted to natural language. Thus their ability to learn about and manipulate
structured information is a general one. Although the specific input-output representation
used in these experiments is carefully tailored to the target domain, its underlying principles
of incremental output and recursively defined structures may be applied more widely.
Also, the STM queue, although again defined based on cognitive limitations on language
processing, may also be used in further domains where appropriate.
Acknowledgements
The authors would like to thank Fernand Gobet, the reviewers and editors for their helpful
comments on this article.
This work was carried out whilst the first author was at the Department of Computer
Science, University of Exeter, funded by the Engineering and Physical Sciences Research
Council, UK.
We acknowledge the roles of the Economic and Social Research Council (UK) as sponsor
and the University of Sussex as grantholder in providing the SUSANNE corpus used in the
experiments reported in this article.
--R
Working Memory.
Statistical Language Learning.
Statistical techniques for natural language parsing.
On certain formal properties of grammars.
Finding structure in time.
Distributed representations
Learning and development in neural networks: the importance of starting small.
a critical analysis.
On the efficient classification of data structures by neural networks.
A general framework for adaptive processing of data structures.
The Computational Analysis of English: a corpus-based approach
Strong semantic systematicity from Hebbian connectionist learning.
Description Based Parsing in a Connectionist Network.
A connectionist architecture with inherent systematicity.
A neural network parser that handles sparse data.
A connectionist architecture for learning to parse.
How to design a connectionist holistic parser.
Long short-term memory
PCFG models of linguistic tree representations.
Simple Synchrony Networks: Learning generalisations across syntactic constituents.
Simple Synchrony Networks: A new connectionist architecture applied to natural language parsing.
Simple Synchrony Networks
Natural language grammatical inference: A comparison of recurrent neural networks and machine learning methods.
Natural language grammatical inference with recurrent neural networks.
A theory of syntactic recognition for natural language.
Dependency Syntax: Theory and Practice.
The magical number seven
Enriched lexical representations
Learning internal representations by error propagation.
English for the Computer.
From simple associations to systematic reasoning: A connectionist representation of rules
Stability properties of labeling recursive auto-associative memory
Supervised neural networks for classification of structures.
--TR
--CTR
James Henderson, A neural network parser that handles sparse data, New developments in parsing technology, Kluwer Academic Publishers, Norwell, MA, 2004
James Henderson, Segmenting state into entities and its implication for learning, Emergent neural computational architectures based on neuroscience: towards neuroscience-inspired computing, Springer-Verlag New York, Inc., New York, NY, 2001
Marshall R. Mayberry, III, Risto Miikkulainen, Broad-Coverage Parsing with Neural Networks, Neural Processing Letters, v.21 n.2, p.121-132, April 2005
James Henderson, Discriminative training of a neural network statistical parser, Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, p.95-es, July 21-26, 2004, Barcelona, Spain
James Henderson, Inducing history representations for broad coverage statistical parsing, Proceedings of the Conference of the North American Chapter of the Association for Computational Linguistics on Human Language Technology, p.24-31, May 27-June 01, 2003, Edmonton, Canada
James Henderson, Neural network probability estimation for broad coverage parsing, Proceedings of the tenth conference on European chapter of the Association for Computational Linguistics, April 12-17, 2003, Budapest, Hungary
Mladen Stanojević, Sanja Vraneš, Knowledge representation with SOUL, Expert Systems with Applications: An International Journal, v.33 n.1, p.122-134, July, 2007 | syntactic parsing;simple synchrony networks;natural language processing;temporal synchrony variable binding;connectionist networks |
628124 | Learning Distributed Representations of Concepts Using Linear Relational Embedding. | AbstractIn this paper, we introduce Linear Relational Embedding as a means of learning a distributed representation of concepts from data consisting of binary relations between these concepts. The key idea is to represent concepts as vectors, binary relations as matrices, and the operation of applying a relation to a concept as a matrix-vector multiplication that produces an approximation to the related concept. A representation for concepts and relations is learned by maximizing an appropriate discriminative goodness function using gradient ascent. On a task involving family relationships, learning is fast and leads to good generalization. | Introduction
Given data which consists of concepts and relations among concepts, our goal is to correctly predict unobserved
instances of relationships between concepts. We do this by representing each concept as a vector in
a Euclidean space and the relationships between concepts as linear operations.
To illustrate the approach, we start with a very simple task which we call the number problem. The
data consists of integers and operations among integers. In the modular number problem the numbers are
integers in the interval [1, m] and the set of operations is {+1, -1, +2, -2, +3, -3, +4, -4, +0}_m,
where the subscript indicates that the operations are performed modulo m. The data then consists of all or
Figure 1: (a) vectors of the hand-coded solution for the number problem when m = 10; (b) vectors of the
solution found by Linear Relational Embedding. Only 70 out of the 90 possible triplets were
used for training. During testing, the system was able to correctly complete all the triplets.
OPERATION            -4      -3      -2      -1      +0     +1      +2      +3     +4
Hand-coded Solution  -144    -108    -72     -36     0      36      72      108    144
LRE Solution         72.00   -35.97  -144.01 108.01  0.00   -108.02 144.02  35.98  -71.97
Table 1: Angles, expressed in degrees, of the rotation matrices of the solutions to the number problem with
m = 10, corresponding to the vectors in Fig. 1. The rotations in the LRE solution differ very slightly
from multiples of 36 because only 70 triplets randomly chosen out of the 90 were used during training.
some of the triplets (num1, op, num2), where num2 is the result of applying
operation op to number num1; for example, (4, +3, 7) when m = 10.
The main idea in Linear Relational Embedding (LRE) is to represent concepts using n-dimensional
vectors, relations as (n × n) matrices, and the operation of applying a relation to a concept (to obtain
another concept) as a matrix-vector multiplication. Within this framework, one could easily hand-code a
solution for the number problem in two dimensions, where the numbers are represented by vectors
having unit length and disposed as in Fig. 1a, while relations are represented by rotation matrices R(θ),
where the rotation angle θ is a multiple of 2π/10 (first row of table 1). The result of applying, for example,
operation +3 to number 4, is obtained by multiplying the corresponding matrix and vector, which amounts
to rotating the vector located at 144 degrees by 108 degrees, thus obtaining the vector at 252 degrees, which
corresponds exactly to the vector representing the number 7.
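The hand-coded solution just described can be checked numerically; the following is a small sketch (our own code, with illustrative names), in which numbers 1-10 are unit vectors at multiples of 36 degrees and each operation +d is a rotation by d × 36 degrees:

```python
import numpy as np

def num_vec(k):
    """Unit vector for number k, placed at k * 36 degrees."""
    a = np.deg2rad(36 * k)
    return np.array([np.cos(a), np.sin(a)])

def op_mat(delta):
    """Rotation matrix for operation +delta (modulo 10): rotate by delta * 36 degrees."""
    a = np.deg2rad(36 * delta)
    return np.array([[np.cos(a), -np.sin(a)],
                     [np.sin(a),  np.cos(a)]])

result = op_mat(3) @ num_vec(4)                # apply operation +3 to number 4
# the nearest concept vector should be the one for 7, i.e. (4 + 3) mod 10
dists = [np.linalg.norm(result - num_vec(k)) for k in range(1, 11)]
print(1 + int(np.argmin(dists)))               # prints 7
```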
In this paper, we show how LRE finds an equivalent solution, which is presented in Fig. 1b and in the
second row of table 1. LRE can find this solution when many of the triplets are omitted from the training
set and once it has learned this way of representing the concepts and relationships it can complete all of the
omitted triplets correctly. Moreover, LRE works well not only on toy problems like the one presented above,
but also in other symbolic domains where the task of generalizing to unobserved triplets is non-trivial.
In the next section we briefly review related work on learning distributed representations. LRE is then
presented in detail in section 3. Section 4 presents the results obtained using LRE on the number problem
and the family tree task (Hinton, 1986), as well as the results obtained on a much larger version of the
family tree problem that uses data from a real family tree. We also compare these results to the results
obtained using Principal Components Analysis. In section 5 we examine how a solution obtained from an
impoverished data set can be modified to include information about new concepts and relations. Section 6
indicates ways in which LRE could be extended and section 7 presents a final discussion of the method.
2 Related work
Several methods already exist for learning sensible distributed representations from relational data. Multidimensional
Scaling (Kruskal, 1964; Young and Hamer, 1987) finds a representation of concepts as vectors in
a multi-dimensional space, in such a way that the dissimilarity of two concepts is modeled by the Euclidean
distance between their vectors. Unfortunately, dissimilarity is the only relationship used by multidimensional
scaling so it cannot make use of the far more specific information about concepts contained in a triplet like
"John is the father of Mary".
Latent Semantic Analysis (LSA) (Deerwester et al., 1990; Landauer and Dumais, 1997; Landauer et al.,
1998) assumes that the meaning of a word is reflected in the way in which it co-occurs with other words.
LSA finds features by performing singular value decomposition on a large matrix and taking the eigenvectors
with the largest eigenvalues. Each row of the matrix corresponds to a paragraph of text and the entry in
each column is the number of times a particular word occurs in the paragraph or a suitably transformed
representation of this count. Each word can then be represented by its projection onto each of the learned
features and words with similar meanings will have similar projections. Again, LSA is unable to make use
of the specific relational information in a triplet.
Hinton (1986) showed that a multilayer neural network trained using backpropagation (Rumelhart et al.,
1986) could make explicit the semantic features of concepts and relations present in the data. Unfortunately,
the system had problems in generalizing when many triplets were missing from the training set. This was
shown on a simple task called the family tree problem. In this problem, the data consists of persons and
relations among persons belonging to two families, one Italian and one English, shown in figure 2. All the
Figure 2: Two isomorphic family trees. The symbol "=" means "married to".
information in these trees can be represented in simple propositions of the form (person1, relation, person2).
Using the relations father, mother, husband, wife, son, daughter, uncle, aunt, brother, sister, nephew, niece
there are 112 such triplets in the two trees. The network architecture used by Hinton is shown in Fig. 3.
It had two groups of input units, one for the role person1 and one for the role relation, and one group of
output units to represent person2. Inputs and outputs were coded so that only one unit was active at a
time, standing for a particular person or relation. The idea was that the groups of 6 units on the second
and fourth layer should learn important features of persons and relations that would make it easy to express
regularities of the domain that were only implicit in the examples given. Figure 4 shows the activity level
for the input pair (Colin, aunt) in the network after learning. Notice how there are 2 units with a high activation in
the output layer, marked by black dots, corresponding to the 2 correct answers, because Colin has 2 aunts
(Jennifer and Margaret). Figure 5 shows the diagrams of the weights on the connections from the 24 input
units to the 6 units that were used for the network's internal, distributed representation of person1, after
learning. It is clear that unit number 1 is primarily concerned with the distinction between English and
Italian. Unit 2 encodes which generation a person belongs to. Unit 6 encodes which branch of the family
a person belongs to. Notice how these semantic features are important for expressing regularities in the
domain, but were never explicitly specified. Similarly, relations were encoded in terms of semantic features
in the other group of 6 units of layer 2.
The discovery of these semantic features gave the network some degree of generalization. When tested
on four triplets which had not been shown during training, the network was usually able to find the correct
answers. Notice how any learning procedure which relied on finding direct correlations between the input
and the output vectors would generalize very badly on the family tree task: the structure that must be
Figure 3: The architecture of the network used for the family tree task. It has three hidden layers of 6 units
in which it constructs its own internal representations. The input and output layers are forced to use localist
encodings.
Figure 4: The activity levels in the network after it has learned. The bottom layer has 24 input units on the
left for representing person1 and 12 units on the right for representing the relation. The white squares inside
these two groups show the activity levels of the units. There is one active unit in the first group (representing
Colin) and one in the second group (representing aunt). Each of the two groups of input units is totally
connected to its own group of 6 units in the second layer. These two groups of 6 must encode the input
terms as distributed patterns of activity. The second layer is totally connected to the central layer of 12 units,
and this layer is connected to the penultimate layer of 6 units. The activity in the penultimate layer must
activate the correct output units, each of which stands for a particular person2. In this case, there are two
correct answers (marked by black dots) because Colin has two aunts. Both the input and the output units
are laid out spatially with the English people in one row and the isomorphic Italians immediately below.
From Hinton (1986).
Figure 5: Diagrams of the weights from the 24 input units that represent people to the 6 units in the second
layer that learn distributed representations of people. White rectangles stand for excitatory weights, black
for inhibitory weights, and the area of the rectangle encodes the magnitude of the weight. The weights from
the 12 English people are in the top row of each unit. Beneath each of these weights is the weight from
the isomorphic Italian. From Hinton (1986).
discovered to generalize correctly, is not present in the pairwise correlations between input and output units.
The biggest limitation of the system was that its generalization ability was weak: it had problems in
generalizing when more than 4 triplets were missing from the training set. Moreover, there was no guarantee
that the semantic features learned for person1 in the second layer would be the same as the ones found in
the fourth layer for person2.
The network used by Hinton (1986) is restricted to completing triplets from their first two terms. A
more flexible way of applying the backpropagation learning procedure is to have a recurrent network which
receives a sequence of words, one at a time, and continually predicts the next word. The states of the hidden
units must then learn to capture all the information in the word string that is relevant for predicting the next
word. Elman (1990) presented a version of this approach in which the backpropagation through time that
is required to get the correct derivatives is curtailed after one time step to simplify the computation. Bridle
(1990) showed that the forward dynamics of a particular type of recurrent neural network could be viewed
as a way of computing the posterior probabilities of the hidden states of an HMM, and the relationship
between recurrent neural networks and Hidden Markov Models has been extensively studied by Cleeremans
et al. (1989), Giles et al. (1992) and others.
Hidden Markov Models are interesting because it is tractable to compute the posterior distribution over
the hidden states given an observed string. But as a generative model of word strings, they assume that
each word is produced by a single hidden node and so they do not seem appropriate if our goal is to learn
real-valued distributed representations of the concepts denoted by words.
Linear dynamical systems seem more promising because they assume that each observation is generated
from a real-valued vector in the hidden state space. However, the linearity of linear dynamical systems seems
very restrictive. This linearity shows up in both the dynamics and the output model:
x(t+1) = R x(t) + η,    y(t) = C x(t) + ν
where x is the hidden state, y is the visible state, R is the linear dynamics, C is the linear output model,
η is the noise in the dynamics and ν is the noise in the output model.
Linear relational embedding can be viewed as a way of overcoming these apparent restrictions so that
linear dynamical systems can be profitably applied to the task of modeling discrete relational data. First,
we eliminate the linear output model by using a discrete observation space and assuming that there is a
noise-free table that relates vectors in the hidden state space to discrete observations. The entries in this
table change during learning, but with the table fixed the "hidden" state is precisely specified by the observed
discrete symbol.
The linearity in the dynamics can be made far less restrictive by using a switching linear dynamical
system. Instead of treating the relational term in a triplet as an observation produced by the hidden state,
we treat it as a completely different kind of observation that provides information about the dynamics,
R, rather than the hidden state, x. Again, there is a learned, noise-free table that exactly specifies the
linear dynamics associated with each relational term. We allow an additional Gaussian noise process in
the dynamics, η. This ensures that there is always some probability density of arriving at any point in the
hidden space wherever we start and whatever the dynamics. In particular it ensures that, starting from
the point in the hidden space specified by the first term in a triplet and using the dynamics specified by the
relational term, there is some probability of arriving at the point in the hidden space specified by the third
term. Learning can then adjust the tables so that the probability of arriving at the point specified by the
third term is much greater than the probability of arriving at any of the points that would be specified by
other possible third terms.
The linear dynamical systems perspective is useful in understanding how Linear Relational Embedding
relates to recurrent neural networks as a proposal for learning from relational data, but LRE is sufficiently
different from a standard linear dynamical system that this perspective can also be confusing. In the next
section we therefore present LRE as a technique in its own right.
3 Linear Relational Embedding
Let us assume that our data consists of C triplets (concept1, relation, concept2) containing N distinct
concepts and M binary relations. As anticipated in section 1, the main idea of Linear Relational Embedding
is to represent each concept with an n-dimensional vector, and each relation with an (n × n) matrix. We shall
call V = {v_1, ..., v_N} the set of vectors, R = {R_1, ..., R_M} the set of matrices, and
D = {(a_c, R_c, b_c)}, c = 1, ..., C, the set of all the triplets, where a_c, b_c ∈ V and R_c ∈ R. The operation that relates a pair (a_c, R_c) to a
vector b_c is the matrix-vector multiplication, R_c a_c, which produces an approximation to b_c.
b c is the vector closest to R c a c . The obvious approach is to minimize the squared distance between R c a c
and b c , but this is no good because it causes all of the vectors or matrices to collapse to zero. In addition to
minimizing the squared distance to b c we must also maximize the squared distances to other concept vectors
that are nearby. This can be achieved by imagining that R c a c is a noisy version of one of the concept
vectors and maximizing the probability that it is a noisy version of the correct answer, b c , rather than any
of the other possibilities. If we assume spherical Gaussian noise with a variance of 1=2 on each dimension,
the probability that concept i would generate R c a c is proportional to exp(jjR c a c v i jj 2 ) so a sensible
discriminative cost function is:
log e kR c a c b c k 2
e kR c a c
where K_c is the number of triplets in D having the first two terms equal to the ones of c, but differing
in the third term. To understand why we need to introduce this factor, let us consider a set of K triplets,
each having the same first two terms, a and R, but differing in the third term, which we shall call b_i with
i = 1, ..., K. We would like our system to assign equal probability to each of the correct answers, and therefore
the discrete probability distribution that we want to approximate can be written as:
P_x = (1/K) Σ_{i=1..K} δ(x - b_i)
where δ is the discrete delta function and x ranges over the vectors in V. Our system implements the discrete
distribution:
Q_x = (1/Z) exp(-||R a - x||²)     (5)
where
Z = Σ_{v_i∈V} exp(-||R a - v_i||²)
is the normalization factor. The Kullback-Leibler divergence between P_x and Q_x can be written as:
KL(P||Q) = Σ_x P_x log(P_x / Q_x)
Thus, minimizing KL(P||Q) amounts to minimizing:
- log [ exp(-||R a - u||²) / Z ]
for every u that is a solution to the triplet, which is exactly what we do when we maximize eq. 3.
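As a concrete illustration of eq. 3, the following is a minimal sketch (our own code and naming, not the authors') of computing the discriminative goodness for a list of triplets, given candidate concept vectors V and relation matrices R:

```python
import numpy as np
from collections import Counter

def goodness(triplets, V, R):
    """G = sum_c (1/K_c) * log( exp(-||R_c a_c - b_c||^2) / sum_i exp(-||R_c a_c - v_i||^2) ).

    triplets: list of (a_idx, r_idx, b_idx); V: (N, n) concept vectors; R: (M, n, n) matrices.
    """
    # K_c: number of triplets sharing the same first two terms as triplet c
    counts = Counter((a, r) for a, r, _ in triplets)
    G = 0.0
    for a, r, b in triplets:
        pred = R[r] @ V[a]
        sq_dists = np.sum((V - pred) ** 2, axis=1)       # squared distance to every concept
        log_num = -sq_dists[b]
        log_den = np.log(np.sum(np.exp(-sq_dists)))      # normalizing sum over all concepts
        G += (log_num - log_den) / counts[(a, r)]
    return G

# toy usage: 4 concepts in 2-D, one relation, two triplets
V = np.random.randn(4, 2)
R = np.random.randn(1, 2, 2)
print(goodness([(0, 0, 1), (1, 0, 2)], V, R))
```

In an actual run the entries of V and R would then be adjusted to increase this quantity, for example by gradient ascent or by scaled conjugate gradient as described in the following paragraph.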
The results which we present in the next section were obtained by maximizing G using gradient ascent.
All the vector and matrix components were updated simultaneously at each iteration. One effective method
of performing the optimization is scaled conjugate gradient (Møller, 1993). Learning was fast, usually
requiring only a few hundred updates, and learning virtually ceased as the probability of the correct answer
approached 1 for every data point. We have also developed an alternative optimization method which is less
likely to get trapped in local optima when the task is difficult. The objective function is modified to include
a temperature that divides the exponents in eq. 3. The temperature is annealed during the optimization.
This method uses a line search in the direction of steepest ascent of the modified objective function. A small
amount of weight decay helps to ensure that the exponents in eq. 3 do not cause numerical problems when
the temperature becomes small.
In general, different initial configurations and optimization algorithms caused the system to arrive at
different solutions, but these solutions were almost always equivalent in terms of generalization performance.
4 Results
We shall first present the results obtained applying LRE to the number problem and to the family tree
problem. After learning a representation for matrices and vectors, we checked, for each triplet c, whether
the vector with the smallest Euclidean distance from R_c a_c was indeed b_c. We checked both how well the
system learned the training set and how well it generalized to unseen triplets. Unless otherwise stated, in
all the experiments we optimized the goodness function using scaled conjugate gradient. Two conditions
had to be simultaneously met in order for the algorithm to terminate: the absolute difference between the
values of the solution at two successive steps had to be less than 10^-4 and the absolute difference between
the objective function values at two successive steps had to be less than 10^-8. All the experiments presented
here were repeated several times, starting from different initial conditions and randomly splitting training
and test data. In general the solutions found were equivalent in terms of generalization performance. The
algorithm usually converged within a few hundred iterations, and rarely got stuck in poor local minima.
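The completion check described above (is b_c the concept vector nearest to R_c a_c?) amounts to a nearest-neighbour test; a sketch under the same assumed data layout as before:

```python
import numpy as np

def complete(a_idx, r_idx, V, R):
    """Return the index of the concept vector closest to R_r a_a (the predicted completion)."""
    pred = R[r_idx] @ V[a_idx]
    return int(np.argmin(np.sum((V - pred) ** 2, axis=1)))

def accuracy(triplets, V, R):
    """Fraction of triplets whose third term is the nearest concept to R_c a_c."""
    hits = sum(complete(a, r, V, R) == b for a, r, b in triplets)
    return hits / len(triplets)

# e.g. train_acc = accuracy(train_triplets, V, R); test_acc = accuracy(held_out_triplets, V, R)
```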
4.1 Results on the number problem
Let us consider the modular number problem which we saw in section 1. With numbers in [1, 10] and the nine operations,
there exist 90 triplets (num1, op, num2).
LRE was able to learn all of them correctly using 2-dimensional vectors and matrices (n = 2). Figure
1 shows a typical solution that we obtained after training with 70 triplets randomly chosen out of the 90.
The scaled conjugate gradient algorithm converged within the desired tolerance in 125 iterations. We see
that all the vectors have about the same length, and make an angle of about 2π/10 with each other. The
matrices turn out to be approximately orthogonal, with all their row and column vectors having about the
same length. Therefore each can be approximately decomposed into a constant factor which multiplies an
orthonormal matrix. The degrees of rotation of each orthonormal matrix are shown in the second row of
table 1. The matrices' multiplicative factor causes the result of the rotation to be longer than the second
vector of the triplet. Because the concept vectors lie at the vertices of a regular polygon centered at the
origin, this lengthening increases the squared distance from the incorrect answers by more than it increases
the squared distance from the correct answer, thus improving the discriminative goodness function in Eq. 3.
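The decomposition of each learned 2 × 2 matrix into a multiplicative factor times an orthonormal rotation can be extracted with a short computation of this kind (a sketch, not the authors' analysis code):

```python
import numpy as np

def scale_and_angle(M):
    """Decompose a near-orthogonal 2x2 matrix into a scale factor and a rotation angle (degrees)."""
    scale = np.sqrt(np.abs(np.linalg.det(M)))          # multiplicative factor
    Rot = M / scale                                     # approximately orthonormal part
    angle = np.degrees(np.arctan2(Rot[1, 0], Rot[0, 0]))
    return scale, angle

M = 1.3 * np.array([[np.cos(0.6), -np.sin(0.6)],
                    [np.sin(0.6),  np.cos(0.6)]])       # a scaled rotation, for testing
print(scale_and_angle(M))                               # approx (1.3, 34.4)
```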
A 2-dimensional matrix has 4 degrees of freedom. The matrices we obtain after learning have only 2
degrees of freedom: the extent of the rotation and the multiplication factor. It is interesting to see how, for
this simple problem, LRE often finds appropriate vectors and matrices just by using rotation angles, without
having to use the extra degree of freedom offered by the matrices' multiplicative factor - which is kept the same
for every matrix (all the vectors can then also be kept of the same length). But as the problem becomes more
complicated, the system will typically make use of this extra degree of freedom. For example, if we try to solve
the modular number problem with numbers in [1, 50] and operations {+1, -1, +2, -2, +3, -3, +4, -4, +0}_50
in two dimensions, we shall usually find a solution similar to the one in Fig. 6, which was obtained training
the system using 350 triplets randomly chosen out of the 450 constituting the data set. The optimization
algorithm met the convergence criteria after 2050 iterations.
Figure 6: Vectors obtained after learning the modular number problem with numbers in [1, 50] and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}_50 in two dimensions. The dots are the result of the multiplication R_c a_c
for each triplet, c. This solution was obtained optimizing the goodness function (eq. 3) using scaled conjugate
gradient for 2050 iterations. 350 triplets randomly chosen out of 450 were used for training.
Let us now consider a non-modular version of the number problem with numbers in [1, 50] and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}. When the result of an operation is outside [1, 50], the corresponding
triplet is simply omitted from the data set. In two dimensions LRE was able to find the correct solution for all
the 430 valid triplets of the problem, after training on 330 randomly chosen triplets for a few hundred iterations.
Figure 7 shows a typical vector configuration after learning. For the non-modular number problem, LRE
increases the separation between the numbers by using different lengths for the concept vectors so that the
numbers lie on a spiral. In the figure we also indicated with a cross the result of multiplying R a when the
result of the operation is outside [1, 50]. Notice how the crosses are clustered on the "ideal" continuation
of the spiral - the answer to an operation whose result falls just outside the interval is located at almost exactly
the same point as the answers to the other operations with the same result (e.g. 48 + 3, 49 + 2, and so
on). The system anticipates where the vectors representing numbers outside the given interval
ought to be placed, if it had some information about them. We shall discuss this in the next section.
Now consider a non-modular number problem whose set of operations also contains multiplications and
divisions in addition to additions and subtractions. When we tried to solve it in 2 dimensions LRE could not
find a solution that satisfied all the triplets. Using gradient ascent to optimize the modified goodness function
while annealing the temperature, LRE found a solution that gave the correct answer for all the addition
and subtraction operations, but the matrices representing multiplications and divisions mapped all vectors
Figure 7: Vectors obtained after learning the non-modular number problem with numbers in [1, 50] and
operations {+1, -1, +2, -2, +3, -3, +4, -4, +0} in two dimensions. Vector endpoints are marked with stars and a
solid line connects the ones representing consecutive numbers. The dots are the result of the multiplication
R_c a_c for each triplet, c. The crosses are the result of the multiplication R a when the result of the operation
is outside [1, 50]. This solution was obtained optimizing the goodness function (eq. 3) using scaled
conjugate gradient for 1485 iterations. 330 triplets randomly chosen out of 430 were used for training.
to the origin. In 3 dimensions, however, LRE is able to find a perfect solution. For numbers in a smaller interval,
the solution found optimizing eq. 3 using scaled conjugate gradient is shown in Figure 8. The optimization
algorithm met the convergence criteria after 726 iterations.¹
Generalization results
LRE is able to generalize well. In 2 dimensions, with numbers in [1, 10] and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}_10, we were able to train the system with just 70 of the 90 triplets in
the training set, and yet achieve perfect results during testing on all the 90 cases. On the same problem,
but using numbers in [1, 50] and training with 350 triplets randomly chosen out of the 450, we usually got
very few errors during testing, and occasionally no errors. The solution shown in Fig. 6 gave correct answers
in 446 cases. The solution to the non-modular number problem, with numbers in [1, 50] and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}, shown in Fig. 7, was trained on 330 out of the total 430 triplets, and
yet is able to achieve perfect results during testing. It is worth pointing out that in order to do these
generalizations the system had to discover structure implicit in the data.
¹ A similar result is obtained with numbers in a larger interval, but the figure is more cluttered.
Figure 8: Vectors obtained after learning the non-modular number problem with multiplication and division
operations in three dimensions. The dots are the result of the
multiplication R_c a_c for each triplet, c. This solution was obtained optimizing the goodness function (eq. 3)
using scaled conjugate gradient for 726 iterations.
4.2 Results on the Family Tree Problem
Our first attempt was to use LRE on a modified version of the family tree problem, which used the same
family trees as Fig. 2, but only 6 sexless relations instead of the original 12 of Hinton (1986). These relations
were: spouse, child, parent, sibling, nipote, zii (the last 2 being the Italian words for "either nephew or
niece" and "either uncle or aunt"). As before there were 112 triplets.
Using 2 dimensions LRE was not able to complete all the 112 triplets, while it obtained a perfect solution
using 3 dimensions. However, the 2-dimensional solution is quite interesting, so let us analyze it.
Fig. 9a,b show the weight diagrams of matrices and vectors found after training on all the data, and Fig. 9c is
a drawing of the concept vectors in 2-space, with vectors representing people in the same family tree being
connected to each other.
We can see that nationality is coded using the sign of the second component of each vector, negative for
English people, and positive for Italian people.² The first component of each vector codes the generation to
which that person belongs (a three-valued feature): for the English people the first generation has negative
values, the third generation has large positive values, while the second generation has intermediate positive
² The sign of the weights typically agrees with the nationality of the researcher who performed the simulations.
Figure 9: (a) Diagrams of the matrices and (b) the vectors obtained for the modified family tree problem.
(c) Layout of the vectors in 2D space. Vectors are represented by *, the ones in the same family tree
are connected to each other. The dots are the result of the multiplication R_c a_c for each triplet c. The
solution shown here was obtained using gradient ascent to optimize the modified goodness function while
the temperature was annealed.
Figure 10: Layout of the vectors in 3D space obtained for the family tree problem. Vectors are represented by
*, the ones in the same family tree are connected to each other. The dots are the result of the multiplication
R_c a_c for each triplet, c. The solution shown here was obtained using gradient ascent to optimize the
modified goodness function while the temperature was annealed.
values. The representations of the two families are linearly separable, and the two families are exactly
symmetric with respect to the origin. An interesting fact is that some of the people were coded by identical
vectors (for the English people, Christopher and Penelope, Andrew and Christine, Colin and Charlotte).
This is clever if you notice that these people have the same children, same nephews and nieces and same
uncles and aunts. Clearly this has as a side effect that each of them is spouse or sibling of the correct person,
but also of himself. This fact, together with the fact that the other people of each family are very close to
each other, caused 14 errors when the 112 triplets were tested.
We used LRE in 3 dimensions on the family tree problem. When trained on all the data, LRE could
correctly complete all 112 triplets and the resulting concept vectors are shown in figure 10. We can see that
the Italian and English families are symmetric with respect to the origin and are linearly separable. When
more than one answer was correct (as in the aunts of Colin) the two concept vectors corresponding to the
two correct answers were always the two vectors closest to R_c a_c. Table 2 reports the distance of each
concept vector from the result of multiplying concept Colin and relation aunt.
PERSON DISTANCE
Jennifer 1.6064
Margaret 1.6549
Charlotte 3.0865
Penelope 3.2950
Christopher 3.9597
Giannina 4.1198
Marcello 4.4083
Alberto 5.1281
Arthur 5.2167
Colin 5.2673
James 5.4858
Charles 5.5943
Pietro 5.6432
Andrew 6.3581
Aurelio 6.3880
Mariemma 6.5021
Victoria 6.6853
Christine 6.6973
Maria 6.7626
Grazia 7.1801
Doralice 7.4230
Table 2: Distance of each concept vector from the result of multiplying concept Colin and relation aunt for
the solution to the family tree problem shown in figure 10.
Generalization results
On the modified version of the family tree problem, in 3 dimensions the system generalized perfectly on 8
new cases, while it got 1 wrong when it was tested on 12 new cases. On the original family tree problem,
in 3 dimensions LRE generalized perfectly when 12 triplets were held out during training. In particular,
even when all the information on \who are the aunts of Colin" (i.e both triplets (Colin, aunt, Jennifer) and
(Colin, aunt, Margaret)) was held out during training, the system was still able to answer correctly. Notice
how, in order to do this, the system had rst to use the implicit information in the other triplets to gure
out both the meaning of the relation aunt and the relative position of Colin, Margaret and Jennifer in the
tree, and then use this information to make the correct inference.
The generalization achieved by LRE is much better than the neural networks of Hinton (1986) and
O'Reilly (1996) which typically made one or two errors even when only 4 cases were held out during training.
4.3 Results of Principal Components Analysis on number and family tree problem
We have used Principal Components Analysis (PCA) to complete the triplets of the modular and non-modular
number problem and of the family tree problem, in order to see how it compares with LRE. For
the number problems, we used the operations {+1, -1, +2, -2, +3, -3, +4, -4, +0}. For
each concept and relation we used a one-out-of-n codification, thus each triplet for the number problems was
a point in a space with one dimension for each number and each operation (for the first concept, the relation
and the second concept), while a triplet for the family tree problem was a point in a (24 + 12 + 24) = 60-dimensional
space. Having chosen a certain number of principal components, we tried to complete the
triplets. For each triplet c, given the first 2 terms (a_c, R_c) we chose as completion the concept b such that
the point (a_c, R_c, b) was the closest to the PCA plane, over every possible choice of b.³ Figure 11 shows the
number of triplets which were correctly completed vs. the number of principal components which were used
for the modular, non-modular and family tree problem respectively, when 0, 10 and 20 triplets were omitted
from the training set. Notice how PCA had an excellent performance on the non-modular numbers problem
but not on the modular version. In general the performance of this method is much worse than LRE.
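A sketch of the PCA completion procedure just described, with our own implementation choices (one-out-of-n coding and distance to the principal subspace of the centred training data):

```python
import numpy as np

def pca_complete(train, a, r, num_concepts, num_relations, n_components):
    """Complete (a, r, ?) by picking the b whose one-hot triplet is closest to the PCA subspace."""
    def encode(a, r, b):
        v = np.zeros(2 * num_concepts + num_relations)
        v[a] = 1
        v[num_concepts + r] = 1
        v[num_concepts + num_relations + b] = 1
        return v

    X = np.array([encode(*t) for t in train])
    mean = X.mean(axis=0)
    # principal directions = leading right singular vectors of the centred data
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    W = Vt[:n_components]                                 # (n_components, dim)

    def dist_to_subspace(x):
        d = x - mean
        return np.linalg.norm(d - W.T @ (W @ d))          # residual after projection

    return min(range(num_concepts), key=lambda b: dist_to_subspace(encode(a, r, b)))
```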
4.4 Results on the Family Tree Problem with real data
We have used LRE to solve a much bigger family tree task. The tree is a branch of the real family tree of
one of the authors containing 49 people. Using the 12 relations seen earlier it generates a data set of 644
triplets. LRE was able to learn the tree with as few as 6 dimensions using scaled conjugate gradient.
When we used a small number of dimensions, it was sometimes possible to recognize some of the semantic
features in the learned representation. Figure 12 shows the diagram of the components of the vectors, each
column represents a person. The first 22 vectors represent the males and the others the females in the family
tree. The numbers denote the generation of the tree they belong to. We can see that the sign and magnitude
of the third component of each vector codes the generation that person belongs to: the rst, third and
fth generation have negative sign, and decreasing magnitudes; the second and forth have positive sign and
decreasing magnitude.
The generalization performance was very good. Figure 13 is the plot of the number of errors made by
the system when tested on the whole data set after being trained on a subset of it using 10 dimensions.
Triplets were extracted randomly from the training set and the system was run for 5000 iterations, or
3 We also tried to reconstruct the third term of a triplet by setting all its components to zero, then projecting the resulting
triplet into principal components space, and then back into the original space, but the results that we obtained were not as
good.
Figure 11: Number of triplets which were correctly completed vs. the number of principal components used.
The solid lines were obtained when all triplets were used for training; the dashed lines were obtained omitting
10 triplets from the training set; the dash-dotted lines were obtained omitting 20 triplets from the training
set. (a) modular number problem; (b) non-modular number problem with operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}; (c) family tree problem.
Figure 12: Diagram of the components of the vectors, obtained after learning the family tree problem with
real data for 2000 iterations using scaled conjugate gradient. All 644 triplets were used during training.
When testing, the system correctly completed 635 triplets. Each column represents a person. The first 22
vectors represent the males and the others the females in the family tree. The numbers denote the generation
vectors represent the males and the others the females in the family tree. The numbers denote the generation
of the tree they belong to.
Figure 13: Plot of the errors made by the system when tested on the whole set of 644 triplets vs. the number
of triplets which were omitted during training. Omitted triplets were chosen randomly.
until the convergence criteria were met. The results shown are the median of the number of errors over
3 different runs, since the system very occasionally failed to converge. We can see that the performance
degrades slowly as an increasing number of triplets is omitted from the training data.
5 Generalization to new concepts and relations
As pointed out earlier, the system anticipates where the vectors for the numbers outside the learned range
ought to be placed, if it had some information about them. In this section we investigated how a solution
we obtain after learning on a set of data can be modified to include information about new concepts and
relations.
Let us start by training a system using LRE on the non-modular number problem, with numbers in the
usual interval and operations {+1, -1, +2, -2, +3, -3, +4, -4, +0}, but omitting all the information about a
certain number from the training set, i.e. omitting all the triplets which contain that number either as the
first or third term.
Figure 14a shows the vectors which are found after 242 iterations of scaled conjugate gradient when
learning in two dimensions after having eliminated all the information about number 10 from the training
data. Notice how a point is clearly "missing" from the spiral in the place where number 10 should go. If we
now add some information about the missing number to the training data and continue the training, we see
that in very few iterations the vector representing that number is placed exactly where it is supposed to go.
It is interesting that a single triplet containing information about a new number is enough for the system
to position it correctly. This happens both when we allow all the vectors and matrices to continue learning
after we have added the extra data point, and when we keep all vectors and matrices fixed and allow only
the new vector to learn. Figure 14b,c shows the solution obtained starting from the solution shown in Fig. 14a
and training using only the triplet (10, +1, 11). After having learned the position of the new number from that
one triplet, the system is then able to generalize, and answers correctly to all the triplets in the complete
data set.
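The variant in which only the new vector is allowed to learn can be sketched as follows; the numerical gradient, the learning rate and the number of steps are our own illustrative choices, not the authors' settings:

```python
import numpy as np

def fit_new_vector(triplets_with_new, V, R, new_idx, steps=200, lr=0.05, eps=1e-5):
    """Gradient ascent on the goodness with everything frozen except the vector V[new_idx].

    triplets_with_new: the triplets (a, r, b) that mention the new concept.
    The 1/K_c weighting of eq. 3 is omitted here for brevity; a numerical gradient is used.
    """
    def G(x):
        Vx = V.copy()
        Vx[new_idx] = x
        total = 0.0
        for a, r, b in triplets_with_new:
            pred = R[r] @ Vx[a]
            sq = np.sum((Vx - pred) ** 2, axis=1)
            total += -sq[b] - np.log(np.sum(np.exp(-sq)))
        return total

    x = V[new_idx].copy()
    for _ in range(steps):
        grad = np.array([(G(x + eps * e) - G(x - eps * e)) / (2 * eps)
                         for e in np.eye(len(x))])
        x += lr * grad
    V[new_idx] = x
    return V
```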
We also tried to learn a new relationship. Clearly this is more difficult, since a new matrix to be learned
has more degrees of freedom. In general we saw that in the number problem several triplets were necessary
in order to learn a new matrix that would be able to correctly complete all the triplets in the data. When we
trained a system using LRE on the non-modular number problem in 2 dimensions, with the usual numbers
and operations {+1, -1, +2, +3, -3, +4, -4, +0} (i.e. omitting -2), it was usually possible to learn the matrix for the -2 operation
using four triplets when all the vectors and matrices were allowed to learn. Six triplets were usually necessary
if only the new matrix was allowed to learn, while everything else was kept fixed.
Finally we tried the same experiment on the family tree with real data. Here the situation is not as
Figure 14: Vectors obtained after learning the number problem with the usual numbers and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}. Vector endpoints are marked with stars and a solid line connects
the ones representing consecutive numbers. The dots are the result of the multiplication R_c a_c for each
triplet, c. (a) The information about number 10 was omitted. The optimization algorithm met the convergence
criteria after 242 iterations using scaled conjugate gradient. Triplet (10, +1, 11) was then added
to the data set and the system was trained starting from this configuration. (b) All matrices and vectors
were allowed to learn; the algorithm met the convergence criteria after 239 iterations. (c) Only the vector
representing number 10 was allowed to learn, while everything else was kept fixed; the algorithm met the
convergence criteria after 2 iterations.
straightforward as for the numbers, since not all triplets contain the same amount of information about a
concept or a relation. The triplet "Pietro has wife Giannina" makes it possible to locate Pietro exactly on
the family tree. But the triplet "Pietro has nephew Giulio" leaves a lot of uncertainty about Pietro, who
could be a sibling of one of Giulio's parents or someone married to one of the siblings of Giulio's parents.
Similarly, "Giannina has son Alberto" has more information about the relation son than "Giannina has aunt
Virginia" has about the relation aunt. For these reasons the performance of the system after learning a new
person vector or a new relation matrix depended on which triplets were added for training.
The father of one author is mentioned in 14 triplets in the data set. If such information was omitted,
LRE was able to complete all the remaining triplets correctly after having been trained for 1501 iterations.
Then it was sufficient to add a single triplet stating who that person is married to, in order to locate him
correctly in the family tree. On the other hand, it made 5 errors if it was trained adding only a triplet that
specified one of his nephews. When we tried to learn a new matrix, the high dimensionality required by the
problem meant that a very high number of triplets was necessary for learning.
6 Further developments
A minor modification, which we have not tried yet, should allow the system to make use of negative data
of the form "Christopher is not the father of Colin". This could be handled by minimizing G instead of
maximizing it, while using Christopher as the "correct answer".
One limitation of the version of LRE presented here is that it always picks some answer to any question
even if the correct answer is not one of the concepts presented during training. This limitation can be
overcome by using a threshold distance so that the system answers "don't know" if the vector it generates is
further than the threshold distance from all known concepts. Preliminary experiments with the non-modular
number problems have been very successful. Instead of ignoring triplets in which the operation produces
an answer outside the set of known numbers, we include these triplets, but make the correct answer "don't
know". If, for example, the largest known number is 10, LRE must learn to make the answer to 9 + 3 be
further than the threshold distance from all the known numbers. It succeeds in doing this and it locates
the answer to 9 + 3 at almost exactly the same point as the answers to the other operations with the same
result, such as 8 + 4. In a sense, it has
constructed a new concept. See figure 15.
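A sketch of this thresholded answering scheme (the threshold value is an arbitrary illustration, not a value taken from the text):

```python
import numpy as np

def answer(a_idx, r_idx, V, R, threshold=1.0):
    """Return the nearest concept index, or None ("don't know") if nothing is close enough."""
    pred = R[r_idx] @ V[a_idx]
    dists = np.linalg.norm(V - pred, axis=1)
    best = int(np.argmin(dists))
    return best if dists[best] <= threshold else None
```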
Another limitation is that a separate matrix is needed for each relation. This requires a lot of parameters
because the number of parameters in each matrix is the square of the dimensionality of the concept vector
space. When there are many dierent relations it may be advantageous to model their matrices as linear
combinations of a smaller set of learned basis matrices. This has some similarity to the work of Tenenbaum
Figure 15: Vectors obtained after learning the number problem with the usual numbers and operations
{+1, -1, +2, -2, +3, -3, +4, -4, +0}. Vector endpoints are marked with stars and a solid line connects
the ones representing consecutive numbers. The dots are the result of the multiplication R_c a_c for each
triplet c such that the answer is among the known concepts. The crosses are the result of the multiplication
R_k a_k for each triplet k such that the correct answer is "don't know". In all cases the system answered
correctly to all the questions in the data set. All the triplets were used for training.
and Freeman (1996).
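One way to read this suggestion (our formulation, since the text does not spell it out) is that each relation matrix is a learned linear combination of a small set of shared basis matrices, which cuts the parameter count from M n² to M K + K n²:

```python
import numpy as np

def relation_matrices(alpha, basis):
    """R_m = sum_k alpha[m, k] * basis[k]: (M, K) coefficients and (K, n, n) shared bases."""
    return np.einsum('mk,kij->mij', alpha, basis)

K, M, n = 3, 12, 6                      # 3 basis matrices shared by 12 relations (illustrative sizes)
basis = np.random.randn(K, n, n)
alpha = np.random.randn(M, K)
R = relation_matrices(alpha, basis)     # M*K + K*n*n = 144 parameters instead of M*n*n = 432
print(R.shape)                          # (12, 6, 6)
```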
In this paper, we have assumed that the concepts and relations are presented as arbitrary symbols so
there is no inherent constraint on the mapping from concepts to the vectors that represent them. LRE can
also be applied when the \concepts" already have a rich and informative representation. Consider the task of
mapping pre-segmented intensity images into the pose parameters of the object they contain. This mapping
is non-linear because the average of two intensity images is not an image of an object at the average of the
positions, orientations and scales of the objects in the two images. Suppose we have a discrete sequence
of images of a stationary object taken with a moving camera and we know the camera
motion each successive image pair.
In an appropriately parameterized space of pose parameters, the camera motion can be represented as a
transformation matrix, R(t; t + 1), that converts one pose vector into the next:
The central assumption of LRE is therefore exactly satised by the representation we wish to learn. So it
should be possible to learn a mapping from intensity images to pose vectors and from sensory representations
of camera motions to transformation matrices by backpropagating the derivatives obtained from Eq. 3
through a non-linear function approximator such as a multilayer neural network. Preliminary simulations
by Sam Roweis (personal communication) show that it is feasible to learn the mapping from preprocessed
intensity images to pose vectors if the mapping from camera motions to the appropriate transformation
matrices is already given.
7 Discussion
Linear Relational Embedding is a new method for discovering distributed representations of concepts and
relations from data consisting of binary relations between concepts. On the tasks on which we tried it, it was
able to learn sensible representations of the data, and this allowed it to generalize well.
In the family tree task with real data, the great majority of the generalization errors were of a specific
form. The system appears to believe that "brother of" means "son of parents of". It fails to model the extra
restriction that people cannot be their own brother. This failure nicely illustrates the problems that arise
when there is no explicit mechanism for variable binding.
A key technical trick that was required to get LRE to work was the use of the discriminative goodness
function in Eq. 3. If we simply minimize the squared distance between R c a c and b c all the concept vectors
rapidly shrink to 0. It may be possible to apply this same technical trick to other ways of implementing
relational structures in neural networks. Pollack's RAAM (Pollack, 1990) and Sperduti's LRAAMs (Sperduti,
1994) minimize the squared distance between the input and the output of an autoencoder and they
also learn the representations that are used as input. They avoid collapsing all vectors to 0 by insisting
that some of the symbols (terminals and labels) are represented by fixed patterns that cannot be modified
by learning. It should be possible to dispense with this restriction if the squared error objective function is
replaced by a discriminative function which forces the output vector of the autoencoder to be closer to the
input than to the alternative input vectors.
Acknowledgments
The authors would like to thank Peter Dayan, Sam Roweis, Zoubin Ghahramani, Carl van Vreeswijk, Hagai
Attias and Marco Buiatti for many useful discussions.
--R
Probabilistic interpretation of feedforward classification network outputs.
Finite state automata and simple recurrent neural networks.
Indexing by latent semantic analysis.
Finding structure in time.
Learning and extracting
Learning distributed representations of concepts.
Multidimensional scaling by optimizing goodness of fit to a nonmetric hypothesis.
A solution to Plato's problem: The latent semantic analysis theory of acquisition
Learning human-like knowledge by singular value decomposition: A progress report.
The LEABRA model of neural interactions and learning in the neocortex.
Learning internal representations by error propagation.
Labeling RAAM.
Separating style and content.
The MIT Press
Multidimensional Scaling: History
Lawrence Erlbaum Associates
--TR
--CTR
Petridis , Vassilis G. Kaburlasos, Finknn: a fuzzy interval number k-nearest neighbor classifier for prediction of sugar production from populations of samples, The Journal of Machine Learning Research, 4, p.17-37, 12/1/2003
Peter Dayan, Images, Frames, and Connectionist Hierarchies, Neural Computation, v.18 n.10, p.2293-2319, October 2006 | learning structured data;concept learning;generalization on relational data;Linear Relational Embedding;feature learning;distributed representations |
628132 | Emergent Semantics through Interaction in Image Databases. | AbstractIn this paper, we briefly discuss some aspects of image semantics and the role that it plays for the design of image databases. We argue that images don't have an intrinsic meaning, but that they are endowed with a meaning by placing them in the context of other images and by the user interaction. From this observation, we conclude that, in an image, database users should be allowed to manipulate not only the individual images, but also the relation between them. We present an interface model based on the manipulation of configurations of images. | where i are terminal symbols of the record definition language, Ri are non terminal symbols,
and j is the label of the production rule, then the meaning of R is a function, determined by the production
rule, of the meanings of its non-terminals:
M(R) = f_j(M(R_1), ..., M(R_n)).
The meaning of the whole record depends on the production rule and on the meaning of the
non terminals R_1, ..., R_n on the right side of the production, but not on their internal syntactic structure.
This property makes the analysis of the meaning of records in traditional databases conceptually
easy but, unfortunately, it does not hold for images. As Umberto Eco puts it:
The most naïve way of formulating the problem is: are there iconic sentences and
phonemes? Such a formulation undoubtedly stems from a sort of verbocentric dogmatism,
but in its ingenuousness it conceals a serious problem. [2]
The problem is indeed important and the answer to the question, as Eco points out, is no.
Although some images contain syntactical units in the form of objects, underneath this level it
is no longer possible to continue the subdivision, and the complete semantic characterization
of images goes beyond the simple enumeration of the objects in them and their relations (see [20]
for examples).
Meanings at the level beyond the objects are used very often in evocative scenarios, like art and
advertising [1]. There is, for instance, a fairly complex theory of the semantics associated with
color [5], and with artistic representational conventions [3].
The full meaning of an image depends not only on the image data, but on a complex of
cultural and social conventions in use at the time and location of the query, as well as on other
contingencies of the context in which the user interacts with the database. This leads us to
reject the somewhat Aristotelian view that the meaning of an image is an immanent property
of the image data. Rather, the meaning arises from a process of interpretation and is the result
of the interaction between the image data and the interpreter. The process of querying the
database should not be seen as an operation during which images are filtered based on an
illusory pre-existing meaning but, rather, as a process in which meaning is created through the
interaction of the user and the images.
The cultural nature of the meaning and its dependence on the interpretation process show
that meaning is not a simple function of the image data alone. The Saussurean relation between
the signifiant and the signifié is always mediated. This relation, much like in linguistic signs,
does not stand alone, but is only identifiable as part of a system of oppositions and differences.
In other words, the referential relation of a sign (be it an icon or a symbol) can only
stand when the sign is made part of a system. Consider the images of Fig. 1. The image at
the center is a Modigliani portrait and, placed in the context of other 20th century paintings
(some of which are portraits and some of which are not), suggests the notion of painting. If we take
the same image and place it in the context of Fig. 2, the context suggests the meaning Face.
These observations rule out the possibility of extracting some intrinsically determined meaning
from the images, storing it in a database, and using it for indexing. The meaning of the
Figure 1: A Modigliani portrait placed in a context that suggests Painting.
Figure 2: A Modigliani portrait placed in a context that suggests Face.
Figure 3: Schematic description of an interaction using a direct manipulation interface.
image data can only emerge from the interaction with the user. The user will provide the
cultural background in which meaning can be grounded. We call this concept emergent
semantics. This concept has important consequences for our query model and, ultimately, for
the architecture of image databases: the filtering approach, typical of symbolic databases, is no
longer viable. Rather, interfaces should support a process of guided exploration of the database.
The interface is no longer the place where questions are asked and answers are obtained, but it
is the tool for active manipulation of the database as a whole.
3 Interfaces for Emergent Semantics
Based on the ideas in the previous sections, we replaced the query-answer model of interaction
with a guided exploration metaphor. In our model, the database gives information about the
status of the whole database, rather than just about a few images that satisfy the query. The
user manipulates the image space directly by moving images around, rather than manipulating
weights or some other quantity related to the current similarity measure.3 The manipulation
of images in the display causes the creation of a similarity measure that satisfies the relations
imposed by the user.
A user interaction using an exploratory interface is shown schematically in Fig. 3. In
Fig. 3.A the database proposes a certain distribution of images (represented schematically
as simple shapes) to the user. The distribution of the images reflects the current similarity
interpretation of the database. For instance, the triangular star is considered very similar to
the octagonal star, and the circle is considered similar to the hexagon. In Fig. 3.B the user
moves some images around to reflect his own interpretation of the relevant similarities. The
result is shown in Fig. 3.C. According to the user, the pentagonal and the triangular stars are
3The manipulation of the underlying similarity measure by explicitly setting a number of weights that characterize
it is very common in current image databases but very counterintuitive.
Figure 4: Interaction involving the creation of concepts.
quite similar to each other, and the circle is quite different from both of them.
As a result of the user assessment, the database will create a new similarity measure, and
re-order the images, yielding the configuration of Fig. 3.D. The pentagonal and the triangular
stars are in this case considered quite similar (although they were moved from their intended
position), and the circle quite different. Note that the result is not a simple rearrangement of
the images in the interface. For practical reasons, an interface can't present more than a small
fraction of the images in the database. Typically, we display the 100-300 images most relevant
to the query. The reorganization consequent to the user interaction involves the whole database.
Some images will disappear from the display (the hexagon in Fig. 3.A), and some will appear
(e.g. the black square in Fig. 3.D).
A slightly different operation on the same interface is the definition of visual concepts. In
the context of our database, the term concept has a more restricted scope than in the common
usage. A visual concept is simply a set of images that, for the purpose of a certain query, can be
considered as equivalent or almost equivalent. Images forming a visual concept can be dragged
into a concept box and, if necessary, associated with some text (the text can be used to
retrieve the concept and the images similar to it). The visual concept can be then transformed
into an icon and placed on the screen like every other image.
Fig. 4 is an example of interaction involving the creation of visual concepts. Fig. 4.A contains
the answer of the database to a user query. The user considers the red and green images as
two instances of a well defined linguistic concept. The user opens a concept box and drags the
images inside the box. The box is then used as an icon to replace the images in the display
space.
From the point of view of the interface, a concept is a group of images that occupy the
same position in the display space. In addition, it is possible to attach metadata information
to a concept. For instance, if a user is exploring an art database, he can create a concept
called medieval crucifixion. The words medieval and crucifixion can be used to replace
the actual images in a query. This mechanism gives a way of integrating visual and non visual
queries. If a user looks for medieval paintings representing the crucifixion, she can simply
type in the words. The corresponding visual concept will be retrieved from memory, placed in
the visual display, and the images contained in the concept will be used as examples to start a
visual query.
The interface, as it is presented here, is but a skeleton for interaction that can be extended
along many directions. Its main limitations (and the related opportunities for improvement)
are the following:
Visual concepts can't be nested: a visual concept can't be used as part of the definition
of another visual concept (although the images that are part of it can).
With the exception of visual concepts, which can be stored, the system has no long-term
memory, that is, no context information can be transported from one user session to
another.
Contexts (distributions of images) can't be saved and restored, not even within a session.
The system does not allow cooperation between users. There is no possibility of defining
a user profile that can be used to retrieve contexts from other users and use them for
some form of social filtering [].
All these limitations can be overcome without changing the nature or the basic design of
the interface, by adding appropriate functions to the system. In this paper we will not consider
them, and we will limit the discussion to the basic interface. Further study is necessary to
determine which solutions are useful, useless, or (possibly) detrimental.
An exploration interface requires a different and more sophisticated organization of the
database. In particular, the database must accommodate arbitrary (or almost arbitrary) similarity
measures, and must automatically determine the similarity measure based on the user
interface. In the following section we describe in greater details the principles behind the design
of exploratory interfaces.
4 Exploration Operators
The exploration interface is composed of three spaces and a number of operators [4]. The
operators can be transformations of a space onto itself or from one space to another. The three
spaces on which the interface is based are:
The Feature space F. This is the space of the coefficients of a suitable representation of
the image. The feature space is topological, but not metric. There is in general no way to
assign a distance to a pair of feature vectors.
Figure 5: Operator algebra as a mediation between the graphical front-end and the database.
The Query space Q. When the feature space is endowed with a metric, the result is the
query space. The metric of the query space is derived from the user query, so that the
distance from the origin of the space to any image defines the dissimilarity of that image
from the current query.
The Display space D is a low dimensional space (0 to 3 dimensions) which is displayed to
the user and with which the user interacts. The distribution of images in the display space
is derived from that of the query space. We will mainly deal with two-dimensional display
spaces (as implemented in a window on a computer screen.) For the sake of convenience,
we also assume that every image in the visualization space has attached a number of labels
drawn from a finite set. Examples of labels are the visual concepts to which an image
belongs. A conventional label is assigned to those images that have been selected and
placed in their position (anchored) by the user.
The feature space is a relatively fixed entity, and is a property of the database. The query
space, on the other hand, is created anew with a different metric after every interaction with
the user.
The interaction is defined by an algebra of operators between these three spaces. The operators
play more or less the same role that the query algebra plays in traditional databases
(although, as we have mentioned in the previous section, the term query is in this case
inappropriate). In all practical instances of the system, the user does not deal with these operators
directly, but through a suitable graphic front-end (see Fig. 5), whose characteristics vary
depending on the particular application. Later in this paper we will consider the graphical
front-end of our database system El Niño ([18]). In this section we will be concerned with the
operators that mediate the exploration of the database.
4.1 Operators in the Feature Space
A feature is an attribute obtained by applying some image analysis algorithms to the image data.
Features are often numeric, collected in a feature vector, and immersed in a suitable vector
space, although this is not always the case.
We make a distinction between the raw, unprocessed vector space and spaces that are
adapted from it for reasons of convenience. This distinction is not fundamental (all feature
spaces are the result of some processing) but it will be useful to describe the operators. The
raw feature space is the set of complete feature vectors, as they come out of the image analysis
algorithms. In many cases, we need to adjust these vectors for the convenience of the database
operations. A very common example is dimensionality reduction [12]. In this case, we will say
that we obtain a (reduced-dimensional) stored view of the feature space. The operators defined
on the feature space are used for this purpose. The most common are:
Projection. The feature vector is projected on a low dimensional subspace of the raw feature
space, obtaining a low dimensional view. Operators like Singular Value Decomposition,
projection of Zernike and statistical moments belong to this class.
Quantization. These operators are used in non-vector feature spaces like the set of
coefficients used in [18]. In this case, we reduce the dimensionality of the feature space by representing
an image with a limited number of coefficients (e.g. 50 or 100). This is done by vector quantization
of the image coefficients (this kind of operation will be considered in more detail in
section 5).
Apply Function. Applies the function F to all the elements of a set of numbers to create
another set of numbers of the same dimension. Filtering operations applied to color histograms
belong to this class.
These operators prepare the feature space for the database operations. They are not
properly part of the interaction that goes on in the interface, since they are applied off-line
before the user starts interacting. We have mentioned them for the sake of completeness.
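As an illustration of such off-line preparation of the feature space, the following is a minimal Python sketch (not the system's actual code; names such as project_svd and quantize are assumptions) of a projection operator based on singular value decomposition, a simple quantization operator, and an apply-function operator.

import numpy as np

def project_svd(raw_vectors, k):
    """Projection operator: map raw feature vectors onto the subspace
    spanned by the first k right singular vectors (a low-dimensional stored view)."""
    X = np.asarray(raw_vectors, dtype=float)
    Xc = X - X.mean(axis=0)                  # center the raw feature space
    _, _, vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ vt[:k].T                     # one k-dimensional view per image

def quantize(coefficients, levels):
    """Quantization operator: snap each coefficient to the nearest of a small
    set of representative levels, reducing the stored representation."""
    coeffs = np.asarray(coefficients, dtype=float)
    levels = np.asarray(levels, dtype=float)
    idx = np.abs(coeffs[..., None] - levels).argmin(axis=-1)
    return levels[idx]

def apply_function(feature_sets, fn):
    """Apply-function operator: apply fn to every feature set, e.g. a
    normalization or smoothing filter over color histograms."""
    return [fn(np.asarray(f, dtype=float)) for f in feature_sets]

# Example: reduce 64-dimensional raw vectors to a 3-dimensional stored view.
raw = np.random.rand(100, 64)
view = project_svd(raw, k=3)
hists = apply_function(raw, lambda h: h / h.sum())
print(view.shape, len(hists))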
4.2 The Query Space
The feature space, endowed with a similarity measure derived from a query, becomes the query
space. The score of an image is determined by its distance from the origin of this space
according to a metric dependent on the query. We assume that every image is represented as
a set of n numbers (which may or may not identify a vector in an n-dimensional vector space)
and that the query space is endowed with a distance function that depends on m parameters.
The feature sets corresponding to images x and y are represented by xi and yi,
and the parameters by θ1, . . . , θm. Also, to indicate a particular image in the database
we will use either different Latin letters, as in xi, yi, or an uppercase Latin index. So, xI is
the I-th image in the database (1 ≤ I ≤ N), and xjI is the corresponding feature
vector.
The parameters are a representation of the current query, and are the values that determine
the distance function. They can also be seen as encoding the current database approximation
to the user's semantic space.
Given the parameters θ, the distance function in the query space can be written as
d(x, y) = f(xi, yi; θ),
with f ∈ L2(IRn × IRn × IRm, IR+). Depending on the situation, we will write f(xi, yi) in lieu of
f(xi, yi; θ).
As stated in the previous section, the feature space per se is topological but not metric.
Rather, its intrinsic properties are characterized by the functional L,
which associates to each query a distance function: θ → f(·, ·; θ).
A query, characterized by a vector of parameters θ, can also be seen as an operator q(θ)
which transforms the feature space into the query space. If L is the characteristic functional of
the feature space, then L(θ) is the metric of the query space. This is a very important
operator for our database model and will be discussed in section 6.
Once the feature space F has been transformed into the metric query space Q, other operations
are possible [4, 20], like:
Distance from Origin Given a feature set xi, return its distance from the query:
Distance Given two feature sets xi, yi, return the distance between the two
Select by Distance. Return all feature sets that are closer to the query than a given distance:
k-Nearest Neighbors. Return the k images closest to the query
It is necessary to stress again that these operations are not defined in the feature space F
since that space is not endowed with a metric. Only when a query is defined does a metric
exist.
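The following minimal sketch (an illustration only) shows these query-space operations in Python, using a weighted Euclidean form as a stand-in for the parameterized distance f(x, y; θ); the actual family of distances used by the system is discussed in sections 5 and 7.

import numpy as np

def query_distance(x, y, theta):
    """Distance in the query space: a metric parameterized by theta
    (weighted Euclidean form used only as a stand-in for f(x, y; theta))."""
    x, y, theta = map(np.asarray, (x, y, theta))
    return float(np.sqrt(np.sum(theta * (x - y) ** 2)))

def distance_from_origin(x, query, theta):
    """Score of an image: its distance from the current query."""
    return query_distance(x, query, theta)

def select_by_distance(features, query, theta, radius):
    """Return indices of all feature sets closer to the query than 'radius'."""
    return [i for i, x in enumerate(features)
            if distance_from_origin(x, query, theta) <= radius]

def k_nearest_neighbors(features, query, theta, k):
    """Return indices of the k images closest to the query."""
    scores = [distance_from_origin(x, query, theta) for x in features]
    return list(np.argsort(scores)[:k])

Note that none of these operations is available on the feature space alone: they all require the parameter vector theta, i.e., a query.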
4.3 The Display Space
The display operator projects image xi onto a screen position XI in such a way
that the distances between screen positions reflect, as far as possible, the query-space distances between the corresponding images.
We use a simple elastic model to determine the position of images in the display space, as
discussed in section 6. The result is an operator whose projection depends explicitly on the metric f.
The parameter f reminds us that the projection that we see on the screen depends on the
distribution of images in the query space which, in turn, depends on the query parameters
θ. The notation (XI, ∅) means that the image xI is placed at the coordinates XI in the
display space, and that there are no labels attached to it (viz. the image is not anchored at
any particular location of the screen, and does not belong to any particular visual concept).
A configuration of the display space is obtained by applying the display operator to the
entire query space, obtaining a set of pairs (XI, λI),
where λI is the set of labels associated to image I. As we said before, it is impractical to display
the whole database. More often, we display only a limited number P of images. Formally, this
can be done by applying the P-nearest neighbors operator to the space Q.
The display space D is the space of the resulting configurations. With these definitions, we can
describe the operators that manipulate the display space.
The Place Operator The place operator moves an image from one position of the display
space to another, and attaches a label to the image to anchor it to its new position. The
operator that places the I-th image in the display maps the configuration of Q onto itself:
it leaves every pair (XJ, λJ) with J ≠ I unchanged, and replaces (XI, λI) with (X̃, λI augmented
with the conventional anchor label), where X̃ is the position given to the image by the user.
Visual Concept Creation A visual concept is a set of images that, conceptually, occupy
the same position in the display space and are characterized by a set of labels. Formally, we
will include in the set of labels the keywords associated to the concept as well as the identifiers
4 Capital Greek indices will span 1, . . . , l. The most frequent case, in our experience, is the simple two-dimensional
display, that is, l = 2. The arguments presented here, however, hold for any l-dimensional display. Conventionally,
l = 0 refers to a browser in which only ordinal properties are displayed.
of the images that are included in the concept. So, if the concept contains images I1, . . . , Ik,
the set of labels is W ∪ {I1, . . . , Ik},
where W is the set of keywords. We call X the set of concepts, and we will use the letter χ to
represent a concept.
The creation of a concept is an operator from D to X defined as follows.
This operator takes a set of images, in positions XJ and with attached labels λJ, and transforms them
into an entity that formally is an image with unspecified position, and a set of labels composed of
the union of all the labels of the images (∪J λJ), the set of keywords associated to the concept,
and the list of the images that belong to the concept.
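A minimal sketch of this operator as a data structure is given below (hypothetical helper names, not the system's implementation): the concept's label set is built as the union of the members' labels, the supplied keywords, and the member identifiers.

from dataclasses import dataclass, field

@dataclass
class DisplayItem:
    image_id: str
    position: tuple                     # coordinates in the display space
    labels: set = field(default_factory=set)

@dataclass
class VisualConcept:
    labels: set                         # union of member labels, keywords, member ids
    members: list                       # identifiers of the grouped images

def create_concept(items, keywords):
    """Concept creation: formally an 'image' with unspecified position whose
    labels are the union of member labels, keywords, and member identifiers."""
    labels = set(keywords)
    for it in items:
        labels |= it.labels
        labels.add(it.image_id)
    return VisualConcept(labels=labels, members=[it.image_id for it in items])

def place_concept(concept, position):
    """Concept placement: insert the concept in the display at a given position."""
    return DisplayItem(image_id="concept:" + "+".join(concept.members),
                       position=position, labels=set(concept.labels))

# Example: group two paintings under the keywords "medieval crucifixion".
c = create_concept([DisplayItem("img12", (0.1, 0.2)), DisplayItem("img47", (0.1, 0.2))],
                   keywords={"medieval", "crucifixion"})
icon = place_concept(c, position=(0.5, 0.5))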
Visual Concept Placement The insertion of a concept χ at a position Z of the display space
is defined as the action of a placement operator that adds the pair (Z, λ) to the current configuration,
where λ is the concept's label set.
Metadata Queries Visual concepts can be used to create visual queries based on semantic
categories. Suppose a user enters a set of words A. It is possible to define the distance from
the keyword set A to the textual portion of a visual concept label using normal information
retrieval techniques [13]. Let d(A, χ) be such a distance. Similarly, it is possible to determine
the distance between two concepts, d(χ1, χ2). Then the textual query A can be transformed into
a configuration of the display space {(XI, λI)} in which the display positions XI reproduce, as
far as possible, the textual distances d(A, χ) between the query and the concepts, and the
distances d(χ1, χ2) between pairs of concepts.
The resulting metadata query operator takes a set of keywords A and returns a configuration
of images in the display space that satisfies these relations.
In other words, we can use the distance between the concepts and the query, as well as the
distances between pairs of concepts, to place the corresponding images in the display space,
thus transforming the textual query into a visual query.
The description of the interaction as the action of a set of operators provides a useful algebraic
language for the description of the query process. Operators provide the necessary
grounding for the study of problems of query organization and optimization in the exploratory
framework. Operators, however, still constitute a functional description of the exploration
components. They don't (nor do they purport to) offer us an algorithmic description of the
database. The implementation of the operators requires considering a suitable image representation
in a space with the required metric characteristics. We will study this problem in the
next section.
5 Image Representation
A good image representation for image database applications should permit a flexible
definition of similarity measures in the resulting feature space. There are two ways in which this
requirement can be interpreted. First, we can require that some predefined distance (usually the
Euclidean distance or some other Minkowski distance) give us sound results when applied to
the feature space within a certain (usually quite narrow) range of semantic significance of these
features. Extracting features like color histograms or Gabor descriptors for textures [9] may
satisfy this requirement for certain classes of images.
A second sense in which the requirement can be understood is that of coverage. In this
case we require that the feature space support many different similarity criteria, and that it
be possible to switch from one criterion to another simply by changing the definition of the
distance in the feature space.
This possibility is a property both of the distance and of the feature space. On one hand,
we must define a class of distances rich enough to represent most of the similarity criteria that
we are likely to encounter in practice. On the other hand, this effort would be fruitless if the
feature space (i.e. the image representation) were not rich in information, that is, if it didn't
represent enough aspects of the images to support the definition of these similarity criteria.
To give a simple example, consider a feature space consisting of a simple histogram of the
image. No matter what similarity measure we define, in this space it is impossible to formulate
a query based on structure or spatial organization of the image.
In this section we introduce a general and efficient image representation that constitutes a
feature space with wide coverage for image database applications.
A color image is a function f : S → C, where S ⊂ IR2 is the image support and C ⊂ IR3 is the color space, which we assume
endowed with a metric, whose exact characteristics depend on the specific color system adopted.
In order to measure distances, we need to endow the space F of image functions with a suitable
metric structure.
It is a well known fact that groups of transformations can be used to generate continuous
wavelet transforms of images [7]. If the transformation group from which we start is endowed
with a metric, this will naturally induce a metric in the space of transformed images.
Consider the space L2(X, m), where m is a measure on X, and a Lie group G which acts
freely and homogeneously on X [14]. We will assume that G, as a manifold, is endowed with a
Riemann metric. If g ∈ G, then a representation of G in L2(X) is a homomorphism from G
into the group of unitary operators on L2(X).
The irreducible unitary representation of G on L2(X, Y) (where Y is a given smooth manifold)
is defined as
(Tg f)(x) = sqrt( dm(g-1 . x) / dm(x) ) f(g-1 . x),
where m is a measure on X and f is a function f : X → Y. This representation can be
used to generate a wavelet transform. Starting from a mother wavelet ψ ∈ L2(X), we define
ψg = Tg ψ; if, for the moment, we assume Y = IR (as would be the case, for instance, for a
grayscale image), the transform of f is:
Tf(g) = <f, ψg>.
Note that the wavelet transform of f is defined on G. In the case of images, we have X = IR2.
In the case of color images, the assumption Y = IR does not hold, and it is not clear how to define
the inner product <f, g>. A possibility, often used in the literature, is to treat the color image as
three separate grayscale images, and to process them independently [6]. Another possibility,
introduced in [16], is to define an algebra of colors. In this case the values of Tf (g) in (30) are
also colors.
It is possible to extend the same definition to discrete groups [15] by considering a discrete
subgroup GD ⊂ G. If G is a Lie group of dimension n, consider a set of indices I ⊂ Zn. We
consider the set GD = {gj : j ∈ I},
under the usual closure conditions that characterize subgroups.
Given a representation of G, T : G → L2(X), the restriction of T to GD is obviously a
representation of GD. In particular, the canonical unitary representation of GD is
(Tgj f)(x) = sqrt( dm(gj-1 . x) / dm(x) ) f(gj-1 . x),
and the GD-frame transform of a function f with mother wavelet ψ is:
Tf(gj) = <f, Tgj ψ>.
The system of indices j plays an important role in implementation: the discrete transform
is usually stored in a multidimensional array and accessed through the indices of the array.
We can endow the set of indices I with a group structure such that there is a homomorphism
of I into GD. This homomorphism induces a distance function in the group I by dI(j, k) = dG(gj, gk).
We can now define the discrete transform as the set of coefficients Tf(gj), j ∈ I.
In the case of color images, if the image is divided into three independent channels, we will have
three separate multidimensional arrays of coefficients, while in the case of a color algebra we will
have a single multidimensional array whose entries are colors. In the following, we will consider
the latter case, since the former can easily be reduced to it.
Thanks to the distance dI, this transform has a metric structure just like the continuous
transform. Note that the functions ψgj will not in general form an orthogonal basis
of L2(IR2), but an overcomplete frame [8].
An image can be completely represented by the set of coefficients of its transform. As we
will see in the following section, this representation is very convenient for defining metrics in the
image space, but it is too costly in terms of storage space for applications in image databases.
For instance, applying the affine group to an image of size 128 x 128 will result in about 21,000
coefficients, each of which is a color and therefore is represented by three numbers.
Each coefficient in the image representation is defined by the element g ∈ G (its position
in the group) and its color c ∈ C. Therefore, a coefficient in the representation can be seen as a
point in G × C. Since both G and C are metric spaces, so is G × C. Therefore, given
two coefficients φ1 and φ2 it is possible to define their distance d(φ1, φ2). With this distance
function, it is possible to apply vector quantization to the set of coefficients and obtain a more
compact representation [15]. In our tests, we apply vector quantization to represent the image
with a number of coefficients between 100 and 300.
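A minimal sketch of this quantization step is given below (illustrative only; the actual codebook construction is described in [15]). Each coefficient is encoded as a vector concatenating its group position and its color, and a k-means loop in that joint space keeps a small number of representative coefficients per image.

import numpy as np

def vector_quantize(coefficients, n_codewords, iters=20, rng=None):
    """Vector quantization of transform coefficients: each coefficient is a point
    in G x C encoded as [position-in-group..., color...]; k-means keeps only
    n_codewords representative coefficients per image."""
    rng = np.random.default_rng(rng)
    pts = np.asarray(coefficients, dtype=float)
    # initialize codewords with a random subset of the coefficients
    codes = pts[rng.choice(len(pts), size=n_codewords, replace=False)]
    for _ in range(iters):
        # assign each coefficient to its nearest codeword (squared distances)
        d2 = ((pts ** 2).sum(1)[:, None] + (codes ** 2).sum(1)[None, :]
              - 2.0 * pts @ codes.T)
        assign = d2.argmin(axis=1)
        # move each codeword to the centroid of its assigned coefficients
        for k in range(n_codewords):
            members = pts[assign == k]
            if len(members):
                codes[k] = members.mean(axis=0)
    return codes, assign

# Example: reduce ~21,000 (x, y, scale, R, G, B) coefficients to 200 codewords.
coeffs = np.random.rand(21000, 6)
codewords, assignment = vector_quantize(coeffs, n_codewords=200)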
5.1 Distance Functions
The representation introduced in the previous section gives us the necessary general feature
space. In order to obtain a complete foundation for an image search engine, it is now necessary
to define a suitably comprehensive class of distances into this space. For the sake of simplicity,
we will begin by considering representations obtained from the continuous group G. In this
case, the representation of an image is a function f and the graph of the representation
is a subset of H = G × C. Both G and C are metric spaces. If gij is the metric tensor of G interpreted
as a Riemann manifold, and cuv is the metric tensor of C, then the metric of H combines the two,
and the distance between two points p, q of H is the length of the shortest path,
d(p, q) = ∫ sqrt( hrs du^r du^s ),
where the integral is computed on a geodesic u(t) between p and q. In the discrete case d(p, q)
is the distance between two coefficients of the transform.
The distance between a point x ∈ H and a set A ⊂ H is defined as:
d(x, A) = min over a ∈ A of d(x, a),
and the distance between two sets A, B ⊂ H as
d(A, B) = sum over x ∈ A of w(x) d(x, B),
where w is a weighting function that we will use in the following to
manipulate the distance function. Note that with this definition the distance is not symmetric.
For a justification of this, see [19, 23]. This distance can be used to measure dissimilarities
between images.
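The sketch below illustrates such a weighted, asymmetric set-to-set dissimilarity in Python (an illustration under the assumptions just stated: the point-to-set distance is a minimum, and the weights are supplied by the caller).

import numpy as np

def point_to_set(x, A):
    """Distance from a coefficient x (a point in G x C) to a set A of coefficients."""
    A = np.asarray(A, dtype=float)
    return float(np.linalg.norm(A - np.asarray(x, dtype=float), axis=1).min())

def set_to_set(A, B, w=None):
    """Asymmetric dissimilarity between two coefficient sets: a weighted sum of
    point-to-set distances from the elements of A to B. The weighting function w
    lets a query emphasize, e.g., a spatial region or a frequency band."""
    A = np.asarray(A, dtype=float)
    if w is None:
        w = np.ones(len(A)) / len(A)
    return float(sum(wi * point_to_set(a, B) for wi, a in zip(w, A)))

# d(A, B) and d(B, A) generally differ, which is intentional (see [19, 23]).
A = np.random.rand(120, 6)   # coefficients of image A (group position + color)
B = np.random.rand(150, 6)
print(set_to_set(A, B), set_to_set(B, A))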
An important property of this class of distances is given by the following theorem (proved
in [15]):
Theorem 5.1 Let H be a subgroup of G, and assume that the metric tensor in G is such that
the corresponding distance function is invariant with respect to H.
If f1, f2 are two functions such that f2 is obtained from f1 through a transformation in H,
then d(f1, f2) = 0.
In other words, we can select a subgroup of transformations such that two images obtained
one from the other through a transformation in the subgroup have distance 0.
By acting on the tensors g and c and the function w it is possible to define a number of
different similarity criteria. Consider the following:
If we are interested in the global color of an image, we will define the distance in G to be
invariant to translation, and the metric tensor to be non-null only at low frequencies.
If we are interested in the structure of an image, we can define the tensor g to be non-null
only at high frequencies, and the tensor c to depend only on the brightness of the image.
To search for a certain object, we define the weighting function w to be nonzero only in the region of
the object in the query image, and enforce translation invariance.
It is possible to enforce quasi-invariance: the measure is not exactly invariant to a
given transformation, but the distance increases very slowly when the images change.
Quasi-invariance often gives more robust results than complete invariance.
More importantly, it is possible to parameterize the tensors g and c and the function w and
define distances based on user interaction [15].
It is possible to extend the distance computation to the approximate representation. Let us
start with an idealized case. In vector quantization, an image is represented by a set of Voronoi
polygons that cover all of G. Assume that the Voronoi polygons of the two images coincide on
the subspace G, and that they differ only in color.
We have the following:
Definition 5.1 Let V be the set of Voronoi polygons of an image, Sk ∈ V, and
a ∈ Sk. The point a is at distance d from the border of Sk if the minimum distance from a to the
boundary of Sk is d. We will use the notation δ(a) = d.
Definition 5.2 Let SA and SB be two overlapping Voronoi polygons, and let the distance between
the colors of these polygons be γ. A point a ∈ SA is internal if δ(a) > γ, and is a boundary
point otherwise. The interior of SA is the set of internal points of SA, and the border of
SA is the set of boundary points.
Lemma 5.1 Let SA and SB be two overlapped Voronoi polygons with color distance γ. If a is
an internal point of SA, then d(a, SB) = γ.
The distance between SA and SB therefore splits into the contribution of the internal points,
each of which is at distance γ, and the contribution of the boundary points.
If the border of SA is small compared to SA, then
d(SA, SB) ≈ γ (40)
(see [15]).
The assumption that the Voronoi polygons are overlapping is obviously false. Given a
Voronoi polygon VA belonging to the image A, however, it is possible to compute its distance
from image B by doing some approximation:
Assume that only a fixed number N of Voronoi polygons of image B overlap VA, and that
the area by which VB,i overlaps VA depends only on the distance between the respective
centroids.
Find the N polygons of B whose centroids are closest to that of VA, and compute the
distances di between their centroids and that of VA. Let wi be weights that decrease with di
and sum to one.
Let γi be the distance between the color of VA and the color of VB,i.
Compute d(VA, B) = sum over i of wi γi.
In this way, we can compute (approximately) the distance between one of the Voronoi
polygons of image A and image B. If VA is the set of all Voronoi polygons that define A, then
the approximate distance between A and B is given by:
d(A, B) = (1 / |VA|) sum over V ∈ VA of wV d(V, B),
with wV > 0 for every V ∈ VA.
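The approximation just described can be sketched as follows (an illustration only; the inverse-distance weighting of the overlapping polygons is an assumption, since only the qualitative requirement that the weight decay with centroid distance is stated above).

import numpy as np

def polygon_to_image(center_a, color_a, centers_b, colors_b, n_overlap=4, eps=1e-9):
    """Approximate distance from one Voronoi polygon of image A to image B."""
    d = np.linalg.norm(centers_b - center_a, axis=1)        # centroid distances d_i
    idx = np.argsort(d)[:n_overlap]                         # N nearest polygons of B
    w = 1.0 / (d[idx] + eps)                                # assumed: weight decays with d_i
    w /= w.sum()
    gamma = np.linalg.norm(colors_b[idx] - color_a, axis=1) # color distances gamma_i
    return float(np.dot(w, gamma))

def image_to_image(centers_a, colors_a, centers_b, colors_b, weights=None):
    """Approximate d(A, B): weighted mean of the polygon-to-image distances."""
    n = len(centers_a)
    if weights is None:
        weights = np.ones(n)
    vals = [polygon_to_image(centers_a[i], colors_a[i], centers_b, colors_b)
            for i in range(n)]
    return float(np.dot(weights, vals) / n)

# Example with 200 codewords per image, 2-D centroids and RGB colors.
ca, cb = np.random.rand(200, 2), np.random.rand(200, 2)
qa, qb = np.random.rand(200, 3), np.random.rand(200, 3)
print(image_to_image(ca, qa, cb, qb))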
6 Query and Projection Operators
The two operators that most characterize the exploratory approach are the projection operator
and the query operator q. The first operator must project the images from the query space
to the display space in a way that represents as well as possible the current database similarity
(which, as we have argued, is the root of database semantics). The second operator must
generate a metric for the query space that reflects as well as possible the similarity relation that
the user communicated by placing images on the interface.
6.1 Projection Operator
The projection operator in its entirety maps an image xI to a pair (XI, λI) of display
coordinates and labels. For the present
purposes, we can ignore the set of labels, and we can remove the explicit reference to the metric
f, so we can write simply xI → XI. The distance between images xI and xJ in the query space
is f(xI, xJ; θ), while for the display space we assume that the perception of closeness is well
represented by the Euclidean distance d(XI, XJ) = ||XI - XJ||.
As mentioned in the previous sections, we will not display the whole database, but only the
P images closest to the query (with, usually, 50 ≤ P ≤ 300). Let I be the set of indices of
such images, with |I| = P. The operator need not be concerned with all the images in the
database, but only with those that will actually be displayed.
First, we query the database so as to determine the P images closest to the query. We
imagine attaching, between the projections of images I and J, a spring whose rest length is the
query-space distance f(xI, xJ; θ).
The spring is attached only if the distance between I and J in the feature space is less than
a threshold. If the distance is greater than the threshold, the two images are left disconnected. That
is, their distance in the display space is left unconstrained. The use of the threshold can be
shown to create a more structured display space []. The operator is the solution of the
following optimization problem: minimize, over the projection, the sum over connected pairs
I, J ∈ C of (f(xI, xJ; θ) - E(XI, XJ))^2, (42)
where E denotes the Euclidean distance in the display space and C is the set of connected pairs.
In practice, of course, the operator is only defined by the positions in which the images are
actually projected, so that the minimization problem can be rewritten as a minimization over
the coordinates XI, I ∈ I, of the same sum of squared differences over the connected pairs I, J ∈ C.
Standard optimization techniques can be used to solve this problem. In our prototype we use
the simplex method described in [11].
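A minimal sketch of this spring-model placement is given below (illustrative only; it uses scipy's downhill simplex, i.e. Nelder-Mead, which is practical for the small number of displayed images, while a gradient-based solver would scale better).

import numpy as np
from scipy.optimize import minimize

def project_to_display(query_dist, threshold, dim=2, seed=0):
    """Place P images in a dim-dimensional display so that, for every pair whose
    query-space distance is below 'threshold', the Euclidean display distance
    matches it as closely as possible; other pairs are left unconstrained."""
    D = np.asarray(query_dist, dtype=float)
    P = D.shape[0]
    pairs = [(i, j) for i in range(P) for j in range(i + 1, P) if D[i, j] < threshold]

    def stress(flat):
        X = flat.reshape(P, dim)
        return sum((np.linalg.norm(X[i] - X[j]) - D[i, j]) ** 2 for i, j in pairs)

    x0 = np.random.default_rng(seed).uniform(size=P * dim)
    res = minimize(stress, x0, method="Nelder-Mead",
                   options={"maxiter": 20000, "xatol": 1e-4, "fatol": 1e-6})
    return res.x.reshape(P, dim)

# Example: 10 displayed images with random (symmetric) query-space distances.
D = np.random.rand(10, 10); D = (D + D.T) / 2; np.fill_diagonal(D, 0.0)
coords = project_to_display(D, threshold=0.8)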
6.2 Query Operator
The query operator solves the dual problem of deriving the metric of the query space from the
examples that the user gives in the display space. Let us assume that the user has selected, as
part of the interaction, a set D of images that are deemed relevant. The relative position of the
images in the display space (as determined by the user) gives us the values E(XI, XJ) for
I, J ∈ D. The determination of the query space metric is then reduced to the solution of the
optimization problem: minimize, over θ, the sum over I, J ∈ D of (f(xI, xJ; θ) - E(XI, XJ))^2.
Note that this is the same optimization problem as in (43), with the exception of the set of
images that are considered relevant (in this case it is the set D of selected images, rather than
the set of displayed images), and the optimization is done over θ rather than over the projection.
The latter difference is particularly relevant, since in general it makes the problem severely
underconstrained. In most cases, the vector θ can contain a number of parameters of the order
of 100, while the user will select possibly only a few images. In order to regularize the problem,
we use the concept of natural distance.
We define one natural distance for every feature space, the intuition being that the natural
distance is a uniform and isotropic measure in a given feature space. The details of the determination
of the natural distance depend on the feature space. For the remainder of this section,
let N be the natural distance function in a given feature space F, and let ΦF be a distance
functional defined on the space of distance functions defined on F. In other words, let DF(f)
be the logical statement
for all x, y, z ∈ F: d(x, y) ≥ 0, d(x, x) = 0, d(x, y) = d(y, x), d(x, z) ≤ d(x, y) + d(y, z)
(i.e. DF(f) is true if f is a distance function in the space F). Let D(F) = {f : DF(f)}.
The set of distance functionals D2(F) is then the set of functionals defined on D(F).
Note that for the functionals ΦF we do not require, at the present time, the satisfaction of the
distance axioms by their arguments. That is, we do not require the domain of ΦF to be restricted
to D(F). Whether this assumption can be
made without any negative effect is still an open problem.
If we have identified a functional ΦF (we will return to this problem shortly), then the metric
optimization problem can be regularized (in the sense of Tikhonov [21]) as: minimize, over θ,
the sum over I, J ∈ D of (f(xI, xJ; θ) - E(XI, XJ))^2 plus a regularization term that penalizes,
through ΦF, the departure of f(·, ·; θ) from the natural distance N;
that is, we try to find a solution that departs as little as possible from the natural distance in
the feature space.
For the definition of the functional ΦF, an obvious candidate is the square of the L2 metric
in the Hilbert space of distance functions:
ΦF(f) = ∫ over x, y of (f(x, y) - N(x, y))^2.
In practice, the integral can be reduced to a summation over a sample taken from the database,
or over the set of images displayed by the database.
An alternative solution can be determined if the natural distance N belongs to the family
of distance functions f(·, ·; θ), that is, if there is a parameter vector θ0 such that, for all x, y,
f(x, y; θ0) = N(x, y), and if the function f is regular in θ for every x, y. Note
that, since the only thing that changes in the family of distances f is the parameter vector
θ, we can think of ΦF(f) as a function of the parameters: ΦF(θ). Because of the regularity
hypothesis on f, ΦF(θ) is also regular in θ0. In this case, we can write
ΦF(θ) ≈ (1/2) (θ - θ0)^T H (θ - θ0),
where H is the Hessian of the function ΦF. Since ΦF is the square of a regular function, we
have that its value and gradient vanish at θ0. The Hessian contains elements of the form
Hkl = 2 ∫ over x, y of (∂f(x, y; θ)/∂θk) (∂f(x, y; θ)/∂θl), evaluated at θ0.
If we define the matrix B with these entries,
we have a regularization problem of the type: minimize, over θ, the sum over I, J ∈ D of
(f(xI, xJ; θ) - E(XI, XJ))^2 plus λ (θ - θ0)^T B (θ - θ0).
This form is particularly convenient since the matrix B depends only on the natural distance
of the feature space, and therefore can be precomputed.
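A minimal sketch of this regularized metric fit is given below (illustrative only: the weighted-Euclidean family standing in for f(x, y; θ), the identity default for the precomputed matrix, and the function names are assumptions).

import numpy as np
from scipy.optimize import minimize

def fit_query_metric(selected_feats, display_dists, theta0, B=None, lam=0.1):
    """Query operator: find theta so that f(x_I, x_J; theta) reproduces the
    user-imposed display distances, regularized toward the natural-distance
    parameters theta0 through the quadratic form (theta - theta0)^T B (theta - theta0)."""
    X = np.asarray(selected_feats, dtype=float)   # features of the selected images
    E = np.asarray(display_dists, dtype=float)    # user-imposed pairwise distances
    theta0 = np.asarray(theta0, dtype=float)
    B = np.eye(len(theta0)) if B is None else np.asarray(B, dtype=float)
    pairs = [(i, j) for i in range(len(X)) for j in range(i + 1, len(X))]

    def f(xi, xj, theta):                         # assumed weighted-Euclidean family
        return np.sqrt(np.sum(np.abs(theta) * (xi - xj) ** 2))

    def objective(theta):
        fit = sum((f(X[i], X[j], theta) - E[i, j]) ** 2 for i, j in pairs)
        reg = lam * (theta - theta0) @ B @ (theta - theta0)
        return fit + reg

    res = minimize(objective, theta0, method="Nelder-Mead", options={"maxiter": 20000})
    return res.x

# Example: three selected images, six-dimensional features, uniform natural weights.
X = np.random.rand(3, 6)
E = np.array([[0.0, 0.2, 0.9], [0.2, 0.0, 0.8], [0.9, 0.8, 0.0]])
theta = fit_query_metric(X, E, theta0=np.ones(6))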
7 The Interface at Work
We have used the principles and the techniques introduced in the previous sections for the design
of our database system El Niño [18, 20].
The feature space is generated with a discrete wavelet transform of the image. Depending on
the transformation group that generates the transform, the space can be embedded in different
manifolds. If the transformation is generated by the two-dimensional affine group, then the
space has dimensions x, y, and scale, in addition to the three color dimensions R, G, B. In this
case the feature space is diffeomorphic to IR6.
In other applications, we generate the transform using the Weyl-Heisenberg group [15, 22],
obtaining transformation kernels which are a generalization of the Gabor filters [7]. In this case,
in addition to the six dimensions above we have the direction of the filters, and the feature
space is diffeomorphic to IR6 × S1. After vector quantization, an image is represented by a
number of coefficients between 50 and 200 (depending on the particular implementation of El
Niño).
El Niño uses a class of metrics known as the Fuzzy Feature Contrast model [19],
derived from Tversky's similarity measure [23]. Consider an n-dimensional feature space (in
our case n = 6 for the affine group, and n = 7 for the Weyl-Heisenberg group). We define two
types of membership functions, the first of which is used for linear quantities, the second for
angular quantities, like the angle of the Weyl-Heisenberg group.
Let xi and yi be two elements in this feature space. In other words, xi and yi are the
components of one of the coefficients that define images X and Y. Define a membership function
for linear features and a periodic membership function for angular features, and, from these,
the quantities that measure the common features and the distinctive features of xi and yi
(one expression when xi, yi are linear features, another when xi, yi are angular features).
The distance between the two coefficients is then defined as a Tversky-like combination of these
quantities, where the weighting coefficient is positive.
The determination of the distance between two coefficients depends on 13 parameters for
the six-dimensional feature space and on 15 parameters for the seven-dimensional feature space
(viz. the membership-function parameters and the weighting coefficient). These distances must be inserted in the distance function (41).
For an image represented by 100 coefficients, this would bring the total number of parameters
up to 1,500. This large number of parameters makes the regularization problem unmanageable.
In order to reduce this number, we have collected the coefficients into 10 bins, depending on their
location in the coefficient space, and enforce equality of parameters within a bin. This brings
the number of independent parameters to less than 150. These parameters are set by the query
operator based on the user interaction by solving the optimization problem (48).
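The exact membership functions and the combination used by El Niño are not reproduced in this text; purely as an illustration of the idea, the sketch below assumes a sigmoidal membership for linear features, a cosine-based membership for angular features, and a Tversky-style contrast combination in which common features reduce the score and distinctive features increase it.

import numpy as np

def mu_linear(v, a, b):
    """Assumed sigmoidal membership for a linear feature."""
    return 1.0 / (1.0 + np.exp(-a * (v - b)))

def mu_angular(v):
    """Assumed periodic membership for an angular feature (e.g. filter direction)."""
    return 0.5 * (1.0 + np.cos(v))

def ffc_distance(x, y, angular_mask, a, b, alpha=1.0, beta=0.5):
    """Tversky-style fuzzy-feature-contrast dissimilarity between two coefficients."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    mx = np.where(angular_mask, mu_angular(x), mu_linear(x, a, b))
    my = np.where(angular_mask, mu_angular(y), mu_linear(y, a, b))
    common = np.minimum(mx, my).sum()
    only_x = np.clip(mx - my, 0.0, None).sum()
    only_y = np.clip(my - mx, 0.0, None).sum()
    return alpha * (only_x + only_y) - beta * common

# Seven-dimensional Weyl-Heisenberg coefficient: the last component is angular.
mask = np.array([False] * 6 + [True])
print(ffc_distance(np.random.rand(7), np.random.rand(7), mask, a=4.0, b=0.5))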
In El Niño we use two types of image selections, represented as a rectangle around the
image and as a cross, respectively. The difference between the two is in the type of parameters
that they contribute to determine. A subset of the parameters individuates a point in the feature space
around which the query is centered. All the other parameters determine the shape of the
space centered around this point. We noticed early on in our tests that users would like to
select images that are not relevant for their query in order to give the database a distance
reference between the query images and the rest of the database. Therefore, we introduced the
cross-selected images, which contribute to the adaptation of all the parameters except those that
locate the query center.
The interface of El Niño is shown in Fig. 6 with a configuration of random images displayed.
In a typical experiment, subjects were asked to look for a certain group of car images that
look like the image in Fig. 7 (not necessarily that particular one). The car was shown to the
subjects on a piece of paper before the experiment started, and they had no possibility of seeing
it again, or of using it as an example for the database. This procedure was intended to mimic
in a controlled way the vague idea of the desired result that users have before starting a query.
In the display of Fig. 6 there are no cars like the one we are looking for, but there are a couple
of cars. The user selected one (the subjectively most similar to the target) and marked another
one with a cross. The second car image will be used to determine the similarity measure of the
database, but it will not be considered a query example (the database will not try to return
images similar to that car).
After a few interactions, the situation is that of Fig. 8 (for the sake of clarity, from now on
we only show the display space rather than the whole interface). Two images of the type that
we are looking for have appeared. They are selected and placed very close to one another. A
third car image is placed at a certain distance but excluded from the search process. Note that
the selection process is being refined as we proceed. During some of the previous interactions,
the car image that is now excluded was selected as a positive example because, compared to
Figure 6: The interface of El Niño with the beginning of a search process.
what was presented at the time, it was relatively similar to our target. Now that we are
zeroing in to the images that we are actually interested in, the red car is no longer similar to
what we need.
At the next iteration, the situation is that of Fig. 9. At this time we have a number
of examples of the images we are looking for. Further iterations (e.g. with the selection
represented in the figure) can be used to obtain more examples of that class of images. Note
that the negative examples are placed much farther away from the positive examples than
in the previous case. This will lead to a more discriminating distance measure which, in effect,
will try to zoom in on the class of images we are looking for.
During this interaction, the subjects' idea of what would be an answer to the query changed
continuously as they learned what the database had to offer, and redefined their goals based
on what they saw. For instance, some of the positive examples that the subjects used at the
beginning of the query, when their ideas were more generic and confused, are not valid later
Figure 7: One of our target images.
on, when they have a more precise idea of what they are looking for.
This simple experiment is intended only to give a flavor of what interaction with a database
entails, and what kind of results we can expect. Evaluation of a highly interactive system like
El Niño is a complex task, which could be the subject of a paper of its own. The interested
reader can find some general ideas on the subject in [17].
Conclusions
In this paper we have defined a new model of interface for image databases. The motivation
for the introduction of this model comes from an analysis of the semantics of images in the
context of an image database. In traditional databases, the meaning of a record is a function
from the set of queries to a set of truth values. The meaning of an image, on the other hand,
is a function from the Cartesian product of the feature space and the set of queries to the
positive real numbers. This definition embodies the observation that the meaning of an image
can only be revealed by the contraposition of an image with other images in the feature space.
These observations led us to define a new paradigm for database interfaces in which the role
of the user is not just asking queries and receiving answers, but a more active exploration of the
image space. The meaning of an image is emergent, in the sense that it is a product of the dual
activities of the user and the database mediated by the interface.
We have proposed a model of interface for active exploration of image spaces. In our
interface, the role of the database is to focus the attention of the user on certain relations that,
given the current database interpretation of image meanings, are relevant. The role of the user
is exactly the same: by moving images around, the user focuses the attention of the database
on certain relations that, given the user interpretation of meaning, are deemed important.
Our interface is defined formally as a set of operators that operate on three spaces: the
feature space, the query space, and the display space. We have formalized the work of an
interface as the action of an operator algebra on these spaces, and we have introduced a
possible implementation of the most peculiar operators of this approach.
Finally, a word on performance. Evaluating the performance of an interactive system like
this is a rather complex task, since the results depend on a number of factors of difficult
characterization. For instance, a system with a more efficient indexing (faster retrieval) but a
weaker presentation can require more iterations, and therefore be less efficient, than a slower
system with a better interface. A complete characterization of the performance of the system
goes beyond the scope of this paper, but we can offer a few observations. An iteration in a
system with our interface entails two steps: a k-nearest neighbors query to determine the k
images that will be presented to the user, and the solution of an optimization problem to place
them in the interface. The optimization problem is O(k2), but its complexity is independent
of the number of images in the database (it depends only on the number of images that will
be shown). Therefore, we don't expect it to be a great performance burden, nor to present
scalability problems. Early evaluations seem to support these conclusions.
--R
Computer analysis of TV spots: The semiotics per- spective
A Theory of Semiotics.
Art and Illusion.
In search of information in visual media.
The art of Color.
Fast multiresolution image querying.
Texture features for browsing and retrieval of image data.
Numerical Recipes
Dimensionality reduction for similarity searching in dynamic databases.
Introduction to Lie Groups and Lie Algebras.
Explorations in Image Databases.
Distance preservation in color image transforms.
Evaluation vademecum for visual information systems.
Similarity measures.
User interfaces for emergent semantics in image databases.
Regularization of incorrectly posed problems.
Features of similarity.
--TR
--CTR
Jeannie S. A. Lee , Nikil Jayant, Mixed-initiative multimedia for mobile devices: a voting-based user interface for news videos, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Philippe H. Gosselin , Matthieu Cord, Feature-based approach to semi-supervised similarity learning, Pattern Recognition, v.39 n.10, p.1839-1851, October, 2006
W. I. Grosky , D. V. Sreenath , F. Fotouhi, Emergent semantics and the multimedia semantic web, ACM SIGMOD Record, v.31 n.4, December 2002
Baback Moghaddam , Qi Tian , Neal Lesh , Chia Shen , Thomas S. Huang, Visualization and User-Modeling for Browsing Personal Photo Libraries, International Journal of Computer Vision, v.56 n.1-2, p.109-130, January-February 2004
Jamie Ng , Kanagasabai Rajaraman , Edward Altman, Mining emergent structures from mixed media For content retrieval, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Rahul Singh , Rachel Knickmeyer , Punit Gupta , Ramesh Jain, Designing experiential environments for management of personal multimedia, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Ricardo S. Torres , Celmar G. Silva , Claudia B. Medeiros , Heloisa V. Rocha, Visual structures for image browsing, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Steffen Staab, Emergent Semantics, IEEE Intelligent Systems, v.17 n.1, p.78-86, January 2002
Zoran Steji , Yasufumi Takama , Kaoru Hirota, Genetic algorithm-based relevance feedback for image retrieval using local similarity patterns, Information Processing and Management: an International Journal, v.39 n.1, p.1-23, January
Rahul Singh , Zhao Li , Pilho Kim , Derik Pack , Ramesh Jain, Event-based modeling and processing of digital media, Proceedings of the 1st international workshop on Computer vision meets databases, June 13-13, 2004, Paris, France
Giang P. Nguyen , Marcel Worring, Query definition using interactive saliency, Proceedings of the 5th ACM SIGMM international workshop on Multimedia information retrieval, November 07-07, 2003, Berkeley, California
Philippe H. Gosselin , Matthieu Cord, A comparison of active classification methods for content-based image retrieval, Proceedings of the 1st international workshop on Computer vision meets databases, June 13-13, 2004, Paris, France
Giang P. Nguyen , Marcel Worring , Arnold W. M. Smeulders, Similarity learning via dissimilarity space in CBIR, Proceedings of the 8th ACM international workshop on Multimedia information retrieval, October 26-27, 2006, Santa Barbara, California, USA
John Restrepo , Henri Christiaans, From function to context to form: precedents and focus shifts in the form creation process, Proceedings of the 5th conference on Creativity & cognition, April 12-15, 2005, London, United Kingdom
Jos L. Bosque , Oscar D. Robles , Luis Pastor , Angel Rodrguez, Parallel CBIR implementations with load balancing algorithms, Journal of Parallel and Distributed Computing, v.66 n.8, p.1062-1075, August 2006
Lina Zhou, Ontology learning: state of the art and open issues, Information Technology and Management, v.8 n.3, p.241-252, September 2007 | interactive databases;relevance feedback;man-machine interface;image databases;content-based image retrieval |
628133 | Data Semantics for Improving Retrieval Performance of Digital News Video Systems. | AbstractWe propose a novel four-step hybrid approach for retrieval and composition of video newscasts based on information contained in different metadata sets. In the first step, we use conventional retrieval techniques to isolate video segments from the data universe using segment metadata. In the second step, retrieved segments are clustered into potential news items using a dynamic technique sensitive to the information contained in the segments. In the third step, we apply a transitive search technique to increase the recall of the retrieval system. In the final step, we increase recall performance by identifying segments possessing creation-time relationships. A quantitative analysis of the performance of the process on a newscast composition shows an increase in recall by 59 percent over the conventional keyword-based search technique used in the first step. | Introduction
A challenging problem in video-based applications is achieving rapid search and retrieval of content
from a large corpus. Because of the computational cost of real-time image-based analysis for searching
such large data sets we pursue techniques based on off-line or semi-automated classification,
indexing, and cataloging. Therein lies the need for "bridge" techniques that have rich semantics
for representing motion-image-based concepts and content, yet are supported by fast and efficient
algorithms for real-time search and retrieval. At this intersection we have been investigating techniques
for video concept representation and manipulation. In particular we have sought the goal
of automatic composition of news stories, or newscasts based on an archive of digital video with
supporting metadata.
In The 8th IFIP 2.6 Working Conference on Database Semantics, Rotorua, New Zealand, January 1999. This
work is supported in part by the National Science Foundation under Grant No. IRI-9502702.
Figure 1: Scenes from an Example News Item (panels: Introduction, Field Scene, Interview)
To retrieve video clips we need to process video data so that they are in clip-queryable form.
We need to extract information from video clips, represent the information in a manner that can be
used to process queries, and provide a mechanism for formulating queries. Presentation of discrete
clips matching a query is not engaging. After retrieval, composition of these clips towards a theme
(e.g., a news topic) adds value to the presentation.
The general process in automatic composition of news (or any) digital video towards a theme
is based on selecting desired video data within some domain (e.g., sports), filtering redundant
data, clustering similar data in sub-themes, and composing the retrieved data into a logical and
thematically-correct order [1]. All of these tasks are possible if we have sufficient information
about the content of the video data. Therefore, information (metadata) acquisition and techniques
to match, filter, and compose video data are critical to the performance of a video composition
system. The quality of data retrieved depends on the type of metadata and the matching technique
used.
However, news audio and video (and associated closed-captioning) do not necessarily possess
correlated concepts (Fig. 1). For example, it is common in broadcast news items that once an
event is introduced, in subsequent segments the critical keywords are alluded to and not specifically
mentioned (e.g., Table 1, the name Eddie Price is mentioned only in the third scene). Segments can
share other keywords and can be related transitively. If a search is performed on a person's name,
then all related segments are not necessarily retrieved. Similarly, related video segments can have
different visuals. It is not prudent to rely on a single source of information about the segments in
retrieval and composition (e.g., transcripts or content descriptions). The information tends to vary
among the segments related to a news item. Therefore, we require new techniques to retrieve all
the related segments or to improve the recall [16] of the video composition system.
In this paper, we propose a transitive video composition and retrieval approach that improves
Table 1: Example Transcripts of Several Segments
Introduction: A ONE-YEAR-OLD BABY BOY IS SAFE WITH HIS MOTHER THIS MORNING, THE DAY AFTER HIS OWN FATHER USED HIM AS A HOSTAGE. POLICE SAY IT WAS A TO MAKE IT ACROSS BORDER TO AVOID ARREST. CNN'S ANNE MCDERMOTT HAS THE DRAMATIC STORY.
Field Scene: A MAN EMERGED FROM HIS CAR AT THE U.S. MEXICAN BORDER, CARRYING HIS LITTLE SON, AND A KNIFE. WITNESSES SAY HE HELD THE KNIFE TO HIS SON, AND IT ALL LIVE TV. ON OFFICIALS AND POLICE FROM BOTH SIDES OF THE BORDER.
Interview: DARYN: JUST IN THE RIGHT PLACE AT RIGHT TIME ESPECIALLY FOR THIS LITTLE BABY. CAN YOU TELL US WHAT YOU WERE SAYING TO THE MAN EDDIE PRICE AND BACK TO YOU? I JUST ASSURED HIM THAT THE BABY WOULD BE OKAY.
Table 2: Content Metadata
Entity Tangible objects that are part of a video stream.
The entities can be further sub-classified,
(e.g., persons, and vehicles).
Location Place shown in video.
(e.g., place, city, and country).
Event Center or focus of a news item.
Category Classification of news items.
recall. That is, once a query is matched against unstructured metadata, the components retrieved
are again used as queries to retrieve additional video segments with information belonging to the
same news item. The recall can be further enhanced if the union of different metadata sets is used
to retrieve all segments of a news item (Fig. 2). However, the union operation does not always
guarantee full recall as a response to a query. This is because no segment belonging to a particular
instance of a news item may be present among the segments acquired after the transitive search
(data acquired from different sources or over a period of time containing data about the same news
event).
This work is an outcome of our observations of generative semantics in the different forms of
information associated with news video data. The information can be in the visuals or in the audio
associated with the video. We also study the common bond among the segments belonging to a
single news item. The composition should possess a smooth flow of information with no redundancy.
Annotated metadata are the information extracted from video data. In our previous work
[3, 12] we have classified annotated metadata that are required for a newscast composition as content
metadata and structural metadata. The content metadata organize unstructured information
within video data (i.e., objects and interpretations within video data or across structural elements).
Some of the information extracted from news video data is shown in Table 2. Information such as
the objects present in visuals, the category of a news item, and the main concept (focus or center)
depicted by the news item are stored as metadata. The structural metadata organize linear
video data for a news item into a hierarchy [2] of structural objects as shown in Table 3.
Table 3: Structural Metadata
1. Headline Synopsis of the news event.
2. Introduction Anchor introduces the story.
3. Body Describes the existing situation.
a. Speech Formal presentation of views
without any interaction
from a reporter.
b. Comment Informal interview of people
at the scene in the
presence of wild sound.
c. Wild Scene Current scenes from the
location.
d. Interview One or more people answering
formal structured questions.
e. Enactment Accurate scenes of situations
that are already past.
4. Enclose Contains the current closing lines.
The development of the proposed hybrid video data retrieval technique is based on the availability
of segment metadata. We have explored the use of these data for the following reasons.
- By utilizing both annotated metadata and closed-caption metadata, precision of the composition
system increases. For example, the keywords "Reno, Clinton, fund, raising," if
matched against closed-caption metadata, can retrieve information about a place called
"Reno" (Nevada). Therefore, annotated metadata can be used to specify that only a person
called "Reno" should be matched. The results from annotated and closed-captioned
searching can be intersected for better precision.
- Recall of a keyword-based search improves if more keywords associated with an event are
used. Transcripts provide enriched but unstructured metadata, and can also be used to
improve recall. Utilizing transcripts increases the number of keywords in a query; therefore, in
some cases precision of the results will be compromised (irrelevant data are retrieved). The
transitive search technique is based on this principle (Section 4).
- If the relationships among segments of a news event are stored, recall of a system can be
increased. For example, if news about "Clinton" is retrieved, then related segment types can
be retrieved even if the word "Clinton" is not in them.
As a result of the above observations, we propose a hybrid approach that is based on the union
of metadata sets and keyword vector-based clustering as illustrated in Fig. 2. The precision of
vector-based clustering improves by using multiple indexing schemes and multiple sets of metadata
(annotated and unstructured). Unstructured data describe loosely organized data such as free-form
text of the video transcripts.
The organization of the remainder of this paper is as follows: In Section 2 we describe existing
techniques for video data retrieval. In Section 3 we discuss metadata required for query processing,
the classification of annotated metadata, and the proposed query processing technique. In Section 4 we present an analysis of the proposed approach. In Section 5 we present our observations. Section 6 concludes the paper.

[Figure 2: Process Diagram for Newscast Video Composition]
2 Related Work in Video Information Retrieval
A variety of approaches have been proposed for the retrieval of video data. They can be divided into
annotation-metadata-based, transcript-metadata-based, and hybrid-metadata-based techniques. Each
is described below.
For annotation-based techniques, manual or automatic methods are used for extraction of information
contained in video data. Image processing is commonly used for information extraction
in the automatic techniques. Techniques include automatic partitioning of video based on information
within video data [4], extraction of camera and object motion [5, 18], and object, face,
texture, visual text identification [6, 10, 11, 13, 14, 15, 17]. The metadata describing large digital
video libraries can also be extracted off-line and stored in a database for fast query processing and
retrieval [6].
Transcripts associated with video data can provide an additional source of metadata associated
with video segments. Brown et al. [8] use transcript-metadata to deliver pre-composed news data.
Wachman [19] correlates transcripts with the scripts of situation comedies. The Informedia project
[20] uses a hybrid-metadata approach to extract video segments for browsing using both the visual
and transcript metadata.
In the above works, keyword searching is either used to retrieve a pre-assembled news item
or the segments associated with the query keywords. In this work, our objective is to search
Table 4: Symbols Used to Define the Retrieval Technique
Symbols    Descriptions
s          A video segment
S          Universe of video segments
N          Size of the universe S
R_f        A binary relationship on S for the transitive search
R_u        A binary relationship on S for the related-segment search
tf_i       Frequency of a concept (term) i in unstructured metadata
N_i        Number of unstructured metadata components with term i
w_1i       Weight assigned to a concept i for query match
w_i        Final weight assigned to a concept i for query match
w_ti       Final weight assigned to a concept i for the transitive search
q          A query
S_q        A set of segments returned as a result of a query
d(a, b)    The similarity distance between two sets of keywords
QS         A subset of S_q
T_c        Cluster cut-off threshold
CL_i       A cluster
q(s)       A query comprised of the unstructured metadata component of segment s
s_t        A segment retrieved as a result of a query q(s)
S_q(s)     Set of segments s_t retrieved as a result of a query q(s)
TCL_i      Extended cluster CL_i resulting from a transitive search
S_a        A candidate set resulting from cluster TCL_i
for segments that belong to the various instances of the same event and to cover various time
periods (e.g., retrieve information about Albright's trip to the Middle East). Therefore, we seek
to maximize the availability of information to support the creation of a cohesive video piece. For
this purpose we require, in addition to the segments matching a query, any segments that are
related via a transitive or structural relationship. In this manner, segments belonging to various
instances of a news event can be merged to create a new composition. Our technique uses a four-step
approach applied to both annotation-based and transcript-based (unstructured) metadata. We
use a transitive search on transcripts and the union operation on structural metadata to retrieve
related video segments.
3 The Proposed Four-Step Hybrid Technique
The four-step hybrid retrieval technique is based on establishing transitive relationships among
segment transcripts and the use of annotated metadata. After introducing our terminology (symbols
used throughout the paper are summarized in Table 4), we describe the different types of metadata
and how they are used to support the four-step process.
3.1 Preliminaries
Metadata described in this paper include unstructured metadata, such as free-form text and annotation
metadata. The former is used for transitive search. The latter is comprised of content
metadata and structural metadata.
Unstructured Metadata and Transitivity Transcripts originating from closed-caption data
(audio transcripts), when available, are associated with video segments when the segments enter
the content universe S. These transcripts comprise the unstructured metadata for each segment.
Unstructured metadata are used for indexing and forming keyword vectors for each semi-structured
metadata segment. Indexing is the process of assigning appropriate terms to a component
(document) for its representation. Transitivity on the unstructured data is defined below.
Let R_f define a binary relationship f on the universal set of video segments S (i.e., (s_a, s_b) ∈ R_f ⟺ s_a is similar to s_b). If the similarity distance, defined as d(s_a, s_b) for segments s_a and s_b, is greater than an established value, then the two segments are considered to be similar. The transitive search satisfies the following property: for all s_a, s_b, s_c ∈ S, if (s_a, s_b) ∈ R_f and (s_b, s_c) ∈ R_f, then (s_a, s_c) ∈ R_f.
Therefore, in a transitive search we first match a query with the unstructured metadata in the universe S. The results are applied as a query to retrieve additional unstructured metadata (transcripts) and associated segments, increasing the recall of the process.
Annotated Metadata Annotated metadata consist of content and structural metadata as described
in Section 1. Structural metadata exist if segments are annotated as such when they enter
the segment universe S, either as video shot at a single event (e.g., a sporting event) or as decomposed
segments originating from preassembled news items (as is the case for our dataset). We call
such segments siblings if they possess either of these relationships.
A shortcoming of the aforementioned transitive search is that it may not retrieve all segments
related via siblings. This can be achieved by the following.
Let R_u define a binary relationship u on the universal set S (i.e., (s_a, s_b) ∈ R_u ⟺ s_a and s_b are part of the same news event). The final step expands the set of segments with a union operation as follows:

    S_a ← S_a ∪ { s_b | ∃ s_a ∈ S_a : (s_a, s_b) ∈ R_u },

where S_a represents the candidate set of segments used as a pool to generate the final video piece (or composition set) [1].
Table
5: Sample Unstructured Metadata
.videoFile:
d65.mps
Justice correspondent Pierre Thomas looks at the long-awaited decision.
After months of intense pressure, attorney general Janet Reno has made
a series of decisions sure to ignite a new round of political warfare.
Regarding fund raising telephone calls by Mr. Clinton at the White
House: no independent counsel. On vice president Gore's fund raising
calls: no independent counsel. Controversial democratic campaign
fund-raiser Johnny Chung has alleged he donated 25,000 to O'Leary's
favorite charity in exchange for a meeting between O'Leary and a
Chinese business associate. Three calls for an independent counsel.
All three rejected.
Hierarchical structure of related segments is stored as structural metadata that are utilized in
the proposed hybrid retrieval technique (Table 3).
3.2 Segment Keyword Analysis and Weighting
We use text indexing and retrieval techniques proposed by Salton [16] and implemented in SMART
[9] for indexing the unstructured metadata. To improve recall and precision we use two sets of
indices, each using a different keyword/term weighting. In the remainder of the paper we use s
interchangeably to represent a video segment or its associated unstructured metadata. The similarity
distance of a segment with a query or a segment is measured by the associated unstructured
metadata.
The selection process is comprised of an initial segment weighting followed by a clustering step.
Initial Segment Weighting Initially, a vector comprised of keywords and their frequencies (term frequency tf) is constructed using the unstructured metadata of each segment, without stemming and without common words. The frequency of a term or keyword indicates the importance of that term in the segment. Next, we normalize the tf in each vector with the segment (document) frequency in which the term appears by using Eq. 1:

    w_1i = tf_i · log(N / N_i),    (1)

where N is the number of segments in the collection, and N_i represents the number of segments to which term i is assigned. The above normalization technique assigns a relatively higher weight w_1i to a term that is present in a smaller number of segments with respect to the complete unstructured metadata. Finally, w_1i is again normalized by the length of the vector (Eq. 2), so that the influence of segments with longer vectors or more keywords is limited:

    w_i = w_1i / sqrt( Σ_j (w_1j)^2 ).    (2)
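A minimal sketch of this weighting, assuming the keyword lists have already been extracted for each segment (the function and variable names below are illustrative and not taken from the SMART system used in this work):

    import math
    from collections import Counter

    def initial_segment_weights(segments):
        # segments: dict mapping segment id -> list of transcript keywords
        # (lower-cased, common words removed, no stemming).
        N = len(segments)
        df = Counter()                      # N_i: number of segments containing term i
        for terms in segments.values():
            df.update(set(terms))
        vectors = {}
        for sid, terms in segments.items():
            tf = Counter(terms)
            # Eq. 1: w_1i = tf_i * log(N / N_i)
            w = {t: tf[t] * math.log(N / df[t]) for t in tf}
            # Eq. 2: normalize by the length of the vector.
            norm = math.sqrt(sum(v * v for v in w.values())) or 1.0
            vectors[sid] = {t: v / norm for t, v in w.items()}
        return vectors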
Clustering and Transitive Weighting Here we use word stemming along with stop words to
make the search sensitive to variants of the same keyword. In segments belonging to a news item,
the same word can be used in multiple forms. Therefore, by stemming a word we achieve a better
match between segments belonging to the same news item. For the transitive search and clustering,
we use the complete unstructured metadata of a segment as a query, resulting in a large keyword
vector because we want only the keywords that have a high frequency to influence the matching
process. Therefore, we use a lesser degree of normalization (Eq. 3) as compared to the initial
segment weighting.
    w_ti = tf_i · log(N / N_i).    (3)

Table 6: Weight Assignment (term, query-match weight, transitive-search weight)
    barred
    weapons     0.15533    2.50603
    continues   0.31821    2.58237
    sights      0.50471    4.04180

Table 6 shows a comparison of the weighting schemes for the same unstructured metadata. The two concepts "Iraq" and "Iraqi" in the second scheme are treated as the same, and hence the concept "Iraq" gets a higher relative weight.
For the purpose of a query match we use the cosine similarity metric (Eq. 4) proposed by Salton. The metric measures the cosine of the angle between two unstructured-metadata segment vectors. The product of the lengths of the two segment vectors divides the numerator in the cosine metric, so longer vectors produce a smaller cosine similarity. In Eq. 4, n is the number of terms or concepts in the universe:

    cosine(q, s) = Σ_{i=1..n} q_i · s_i / ( sqrt(Σ_{i=1..n} q_i^2) · sqrt(Σ_{i=1..n} s_i^2) ).    (4)
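Under the same sparse-vector representation as in the sketch above, Eq. 4 can be computed as follows (again an illustrative sketch rather than the SMART implementation):

    import math

    def cosine(q, s):
        # q, s: dicts mapping term -> weight (sparse keyword vectors).
        dot = sum(w * s.get(t, 0.0) for t, w in q.items())
        len_q = math.sqrt(sum(w * w for w in q.values()))
        len_s = math.sqrt(sum(w * w for w in s.values()))
        if len_q == 0.0 or len_s == 0.0:
            return 0.0
        return dot / (len_q * len_s)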
The proposed query processing technique is a bottom-up approach in which the search starts
from the unstructured metadata. We describe the details next.
3.3 The Selection Mechanism
The four-step selection mechanism is illustrated Fig. 2. A query enters the system as a string of
keywords. These keywords are matched against the indices created from the unstructured metadata.
The steps of this process are query matching, clustering the results, retrieval based on the transitive
search, and sibling identification. These are described below.
Query Matching This stage involves matching of a user-specified keyword vector with the available
unstructured metadata. In this stage we use indices that are obtained as a result of the initial
segment weighting discussed in the previous section. As the match is rank-based, the segments are retrieved in order of decreasing similarity. Therefore, we need to establish a cut-off threshold below which we consider all the segments to be irrelevant to the query.

[Figure 3: Process Diagram of the Clustering Process]

Unfortunately, it is difficult
to establish an optimal and static query cut-off threshold for all types of queries, as the similarity values obtained for each query are different. For example, if we are presented with a query whose keywords belong to multiple news items, then the similarity value with each individual object in the corpus will be small. If the query has all keywords relevant to a single news item, then the similarity values will be high. Because of this observation, we establish a dynamic query cut-off threshold (D × max{d(s, q)}), set as a percentage D of the highest match value max{d(s, q)} retrieved in the set S_q. The resulting set is defined as:

    QS = { s ∈ S_q | d(s, q) ≥ D × max{d(s, q)} },

where s is a retrieved segment and d(s, q) is the function that measures the similarity distance of segment s returned as a result of a query q.
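A sketch of this stage, reusing the cosine function given earlier; the value D = 0.4 matches the experiments of Section 4, and the dictionary-based index is an assumption of the example:

    def query_match(query_vec, index, D=0.4):
        # index: dict mapping segment id -> weight vector from the initial weighting.
        sims = {sid: cosine(query_vec, vec) for sid, vec in index.items()}
        best = max(sims.values(), default=0.0)
        if best == 0.0:
            return set()               # nothing matched the query
        # Dynamic query cut-off threshold: keep segments within D * best.
        return {sid for sid, sim in sims.items() if sim >= D * best}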
Results Clustering In this stage, we cluster the retrieved segments with each group containing
yet more closely related segments (segments belonging to the same event). We use the indices
acquired as a result of the transitive scheme (Fig. 3). During the clustering process, if the similarity
(d(s a ; s b )) of the two segments is within a cluster cut-off threshold T c , then the two segments are
considered similar and have a high probability of belonging to the same news event. Likewise,
we match all segments and group the segments whose similarity values are within the threshold, resulting in a set {CL_1, CL_2, ..., CL_k}, where the CL_i are clusters (sets), each consisting of segments belonging to a single potential news item.

[Figure 4: Similarity Measure based on the Transitive Search]

An algorithm for forming the clusters is as follows:
    clusters = []
    QS = set(initial_results)          # results from the initial search
    while QS:                          # loop on segments in QS
        s_a = QS.pop()                 # reference segment starts a new cluster
        cluster = {s_a}                # assign the reference segment to the cluster
        for s_b in list(QS):           # loop on the remaining segments
            if d(s_a, s_b) >= T_c:     # segment similar to the reference?
                cluster.add(s_b)       # assign the segment to the cluster
                QS.remove(s_b)         # remove the element from the set QS
        clusters.append(cluster)       # record the formed cluster
This algorithm, although fast, is neither deterministic nor fair. A segment, once identified
as similar to the reference, is removed from consideration by the next segment in the set. An
alternative approach does not remove the similar element from QS but results in non-disjoint
clusters of segments. We are exploring heuristic solutions that encourage many clusters while
maintaining them as disjoint sets.
Transitive Retrieval We use the transitive search (Fig. 4). The transitive search increases
the number of segments that can be considered similar. During query matching, the search is
constrained to the similarity distance (d 1 ) and segments within this distance are retrieved. During
the transitive search we increase the similarity distance of the original query by increasing the
keywords in the query so that segments within a larger distance can be considered similar. In the
transitive search we use the unstructured metadata of each object in every cluster as a query, q(s), and retrieve similar segments. Again, we use a cut-off threshold on the retrieved results, and the retained segments are included in the respective cluster.
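The transitive step can be sketched as follows; the stemmed transitive index and the value T = 0.2 used in Section 4 are assumptions of this example, which again reuses the cosine function above:

    def transitive_expand(cluster, transitive_index, T=0.2):
        # Use the full unstructured metadata of each segment as a query q(s)
        # and keep results above the transitive cut-off T * max similarity.
        expanded = set(cluster)
        for s in cluster:
            q = transitive_index[s]
            sims = {sid: cosine(q, vec)
                    for sid, vec in transitive_index.items() if sid != s}
            best = max(sims.values(), default=0.0)
            if best > 0.0:
                expanded |= {sid for sid, sim in sims.items() if sim >= T * best}
        return expanded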
[Figure 5: Process Diagram for Retrieving Related Segments]
The transitive cut-off threshold (T × max{d(s_t, q(s))}) is set as a percentage (T) of the highest similarity value retrieved, max{d(s_t, q(s))}. For example, in Fig. 4 the distances d_21 and d_22 fall within the transitive cut-off thresholds of the respective segments.
Consider a cluster CL_i formed in the results clustering step. The extended cluster resulting from the transitive search can be defined as:

    TCL_i = CL_i ∪ { s_t ∈ S_q(s) | ∃ s ∈ CL_i : d(s_t, q(s)) ≥ T × max{d(s_t, q(s))} },

where s_t is a segment returned as a result of a transitive search of a segment s ∈ CL_i, and d(s_t, q(s)) is the function that measures the similarity value of a segment s_t to the query q(s).
Sibling Identification To further improve recall, we use the structural metadata associated with each news item to retrieve all other related objects (Fig. 5). Structural information about each segment in a cluster is annotated; therefore, we have the information about all the other segments that are structurally related to a particular segment. We take the set of segments that are structurally related to a segment in a cluster and perform a union operation with the cluster. Suppose TCL_i is one of the clusters resulting from the third step; then the final set can be defined as:

    S_a = TCL_i ∪ ( ⋃_{s ∈ TCL_i} R(s) ).

Here R(s) is the set of segments related to the segment s. Likewise, the union operation can be performed on the remaining clusters.
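This union operation amounts to the short sketch below, where the mapping from a segment to its structurally related set R(s) is assumed to come from the annotated structural metadata:

    def add_siblings(tcl, related):
        # related: dict mapping segment id -> set R(s) of structurally related segments.
        candidate = set(tcl)
        for s in tcl:
            candidate |= related.get(s, set())
        return candidate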
Using the four-step hybrid approach we are able to increase the recall of the system. Next we
discuss the quantitative analysis of the retrieval, clustering, and proposed transitive search process.
4 Analysis of the Proposed Hybrid Technique
We evaluated the performance of our technique based on 10 hours of news video data and its
corresponding closed-caption data acquired from the network sources. Our results and analysis of
the application of our techniques on this data set are described below.
Because the objective of our technique is to yield a candidate set of video segments suitable
for composition, we focus on the inclusion-exclusion metrics of recall and precision for evaluating
performance. However, subsequent rank-based refinement on the candidate set yields a composition
set that can be ordered for a final video piece [1].
The data set contains 335 distinct news items obtained from CNN, CBS, and NBC. The news
items comprise a universe of 1,731 segments, out of which 537 segments are relevant to the queries
executed. The most common stories are about bombing of an Alabama clinic, Oprah Winfrey's
trial, the Italian gondola accident, the UN and Iraq standoff, and the Pope's visit to Cuba. The
set of keywords used in various combinations in query formulation is as follows:
race relation cars solar planets falcon reno fund raising
oil boston latin school janet reno kentucky paducah rampage
santiago pope cuba shooting caffeine sid digital genocide
compaq guatemala student chinese adopted girls isreal netanyahu
isreal netanyahu arafat fda irradiation minnesota tobacco trial
oprah beef charged industry fire east cuba beach varadero
pope gay sailor super bowl john elway alabama clinic italy
gondola karla faye tuker death advertisers excavation Lebanon
louise woodword ted kaczynski competency
The number of keywords influences the initial retrieval process for each news item used in a
query. If more keywords pertain to one news item than the other news items, the system will tend
to give higher similarity values to the news items with more keywords. If the query cut-off threshold
is high (e.g., 50%), then the news items with weaker similarity matches will not cross the query
cut-off threshold (the highest match has a very high value). Therefore, if more than one distinct
news item is desired, a query should be composed with an equal number of keywords for each distinct
news item. All the distinct retrieved news items will have approximately the same similarity value
with the query and will cross the query cut-off threshold.
For the initial experiment we set the query cut-off threshold to 40% of the highest value retrieved as a result of a query, or 0.4 × max(S_q). The transitive cut-off threshold is set to 20% of the highest value retrieved as a result of an unstructured-metadata query, or 0.2 × max(S_q(s)). The results of 29
queries issued to the universe are shown in Fig. 6. Here we assume that all the segments matched
the query (we consider every retrieved segment a positive match as the segments contain some or
all keywords of the query).
Not all the keywords are common among the unstructured metadata of related segments, nor
are they always all present in the keywords of a query. Therefore, to enhance the query we use a
transitive search with a complete set of unstructured metadata. The probability of a match among
related segments increases with the additional keywords; however, this can reduce precision.
[Figure 6: Summary of Performance of Different Retrieval Techniques — per-query numbers of segments in the initial, transitive, and related retrievals, compared with the relevant segments]
Table 7: System Performance
Search Technique     Total Segments Retrieved   Relevant Segments Retrieved   Recall   Precision
Query Match          137                        137                           25%      100%
Transitive Search    293                        262                           48%      89%
Sibling              517                        517                           96%      100%
As the result of the transitive search the recall of the system is increased to 48% from 25%
(another level of transitive search may increase it further). The precision of the results due to this
step is reduced to 89% from 100%.
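For reference, the two measures reported in Table 7 reduce to the usual set ratios over the 537 relevant segments; a minimal sketch:

    def recall_precision(retrieved, relevant):
        hits = len(retrieved & relevant)
        recall = hits / len(relevant)        # e.g., 262 / 537 ≈ 48% for the transitive search
        precision = hits / len(retrieved)    # e.g., 262 / 293 ≈ 89%
        return recall, precision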
A cause of such low recall of the initial retrieval and subsequent transitive search is the quality
of the unstructured metadata. Often this quality is low due to incomplete or missing sentences and
misspelled words (due to real-time human transcription).
Using the structural hierarchy (Section 3.1) we store the relationships among the segments
belonging to a news item. Therefore, if this information is exploited we can get an increase in
recall without a reduction in precision (as all segments belong to the same news item). In the last
step of the query processing we use structural metadata to retrieve these additional segments. As
observed from the above results, the recall is then increased to 96%. The remaining data are not
identified due to a failure of the prior transitive search.
[Figure 7: Process Diagram for Using Visual Metadata to Increase Precision]
The results demonstrate that the combination of different retrieval techniques using different
sources of metadata can achieve better recall in a news video composition system as compared to
the use of a single metadata set.
5 Observations
For news items that encompass multiple foci (i.e., concepts from each focus are associated with many segments), it becomes difficult to balance the clustering of segments for these foci with our
techniques. For example, the query "State of the Union Address" applied to our data set will yield
foci for the address and the intern controversy. However, there are many more segments present in
the data set for the intern controversy.
The query precision can also be increased by forming the intersection of the keywords from the
content and unstructured metadata sets.
For example, consider the scenario for composing a news item about Clinton speaking in the
White House about the stalemate in the Middle East. From the content metadata, we might be
able to retrieve segments of type Speech for this purpose. However, many of the returned segments
will not be associated with the topic. In this case an intersection of the query results of the salient
keywords applied to the unstructured metadata will give us the desired refinement (Fig. 7).
If a query retrieves a set of news items based on a date or period, then access can be achieved directly from the content metadata. For the process of composition, the broader set of metadata needs to be used.
6 Conclusion
In this paper we proposed a four-step hybrid retrieval technique that utilizes multiple metadata sets
to isolate video information for composition. The technique relies on the availability of annotated
metadata representing segment content and structure, as well as segment transcripts that are
unstructured. The retrieval applies a conventional approach to identifying segments using the
segment content metadata. This is followed by clustering into potential news items and then a
transitive search to increase recall. Finally, creation-time relationships expand the final candidate
set of video segments.
Experimental results on our data set indicate a significant increase in recall due to the transitive
search and the use of the creation-time relationships. Additional work will seek a heuristic clustering
algorithm that balances performance with fairness.
--R
"Automatic Composition Techniques for Video Production,"
"A Language to Support Automatic Composition of Newscasts,"
"A System for Customized News Delivery from Video Archives"
"A Survey of Technologies for Parsing and Indexing Digital Video,"
"Video Tomography; An Efficient Method for Camerawork Extraction and Motion Analysis,"
"Automatic Video Database Indexing and Retrieval,"
"Narrative Schema,"
"Automatic Content-Based Retrieval of Broadcast News,"
Implementation of the SMART Information Retrieval System.
"Visual Information Retrieval from Large Distributed Online Repositories,"
"Efficient Color Histogram Indexing for Quadratic Form Distance Functions,"
"The Use of Metadata for the Rendering of Personalized Video Delivery,"
"Video Abstracting,"
"Chabot: Retrieval from a Relational Database of Images,"
"Vision Texture for Annotation,"
Introduction to Modern Information Retrieval.
"Similarity is a Geometer,"
"Active Blobs,"
"A Video Browser that Learns by Example,"
"Intelligent Access to Digital Video: The Informedia Project,"
--TR | recall;news video composition;content metadata;structural metadata;precision;unstructured metadata;keyword vector;retrieval |
628140 | Global Scheduling for Flexible Transactions in Heterogeneous Distributed Database Systems. | AbstractA heterogeneous distributed database environment integrates a set of autonomous database systems to provide global database functions. A flexible transaction approach has been proposed for the heterogeneous distributed database environments. In such an environment, flexible transactions can increase the failure resilience of global transactions by allowing alternate (but in some sense equivalent) executions to be attempted when a local database system fails or some subtransactions of the global transaction abort. In this paper, we study the impact of compensation, retry, and switching to alternative executions on global concurrency control for the execution of flexible transactions. We propose a new concurrency control criterion for the execution of flexible and local transactions, termed F-serializability, in the error-prone heterogeneous distributed database environments. We then present a scheduling protocol that ensures F-serializability on global schedules. We also demonstrate that this scheduler avoids unnecessary aborts and compensation. | Introduction
A heterogeneous distributed database system (HDDBS) integrates a set of autonomous database
systems to provide global database functions. In a HDDBS environment, transaction management
is handled at both the global and local levels. As a confederation of pre-existing local
Current address: MCC, 3500 West Balcones Center dr, Austin,
databases, the overriding concern of any HDDBS must be the preservation of local autonomy
[Lit86, GMK88, BS88, Pu88, Vei90, VW92]. This is accomplished through the superimposition
of a global transaction manager (GTM) upon a set of local database systems (LDBSs). Global
transactions are submitted to the global transaction manager, where they are parsed into a set of
global subtransactions to be individually submitted to local transaction management systems at local
sites (LSs). At the same time, local transactions are directly submitted to the local transaction
management systems. Each local transaction management system maintains the correct execution
of both local and global subtransactions at its site. It is left to the global transaction manager to
maintain the correct execution of global transactions.
The preservation of the atomicity and isolation of global transactions is fundamental in achieving
the correct execution of global transactions. Preserving the atomicity or semantic atomicity
[GM83] of global transactions in the HDDBS systems has been recognized as an open and difficult
issue [SSU91]. The traditional two-phase commit protocol (2PC) developed in distributed
database environments has been shown [LKS91a, SKS91, MR91, BZ94] to be inadequate to the
preservation of the atomicity of global transactions in the HDDBS environment. For example, some
local database systems may not support a visible prepare-to-commit state, in which a transaction
has not yet been committed but is guaranteed the ability to commit. In such situations, a local
database system that participates in a HDDBS environment may unilaterally abort a global sub-transaction
without agreement from the global level. Moreover, even if the local database systems
are assumed to support a prepare-to-commit state (as in traditional distributed database systems),
the potential blocking and long delays caused by such states severely degrade the performance. The
concept of compensation, which was proposed [GM83] to address the semantic atomicity of long-running
transactions, has been shown [LKS91a] to be useful in the HDDBS environment. Using
this technique, the global subtransactions of a global transaction may commit unilaterally at local
sites. Semantic atomicity guarantees that if all global subtransactions commit, then the global
transaction commits; otherwise, all tentatively committed global subtransactions are compensated.
Mehrotra et al. [MRKS92] have identified the class of global transactions for which the semantic
atomicity can be maintained in the HDDBS environment. Each global transaction contains a
set of subtransactions which are either compensatable, retriable, or pivot, and at most one sub-transaction
can be pivot. In [ZNBB94], it was shown that this class can be extended by specifying
global transactions as flexible transactions. Flexible transaction models, such as ConTracts, Flex
Transactions, S-transactions, and others [DHL91, ELLR90, BDS increase the failure resilience
of global transactions by allowing alternate subtransactions to be executed when an LDBS fails or a
subtransaction aborts. In a non-flexible transaction, a global subtransaction abort is followed either
by a global transaction abort decision or by a retry of the global subtransaction. With the flexible
transaction model, there is an additional option of switching to an alternate global transaction
execution. The following example is illustrative:
Example 1 A client at bank b 1 wishes to withdraw $50 from her savings account a 1 and deposit it
in her friend's checking account a 2 in bank b 2 . If this is not possible, she will deposit the $50 in her
own checking account a 3 in bank b 3 . With flexible transactions, this is represented by the following
set of subtransactions:
    t_1: withdraw $50 from savings account a_1 in bank b_1;
    t_2: deposit $50 in checking account a_2 in bank b_2;
    t_3: deposit $50 in checking account a_3 in bank b_3.
In this global transaction, either {t_1, t_2} or {t_1, t_3} is acceptable, with {t_1, t_2} preferred. If t_2 fails, t_3 is executed instead. The entire global transaction thus may not have to be aborted even if t_2 fails. 2
Flexibility allows a flexible transaction to adhere to a weaker form of atomicity, which we term
semi-atomicity, while still maintaining its correct execution in the HDDBS. Semi-atomicity allows
a flexible transaction to commit as long as a subset of its subtransactions that can represent the
execution of the entire flexible transaction commit. By enforcing semi-atomicity on flexible trans-
actions, the class of executable global transactions can be enlarged in a heterogeneous distributed
database system [ZNBB94]. The effects of retrial and compensation methods were investigated to preserve semi-atomicity on flexible transactions. However, the design of scheduling approaches to global concurrency control for the execution of flexible transactions has not been carefully investigated.
Global concurrency control considering the effect of compensation with respect to traditional
transaction model has been extensively studied. In [KLS90], a formal analysis is presented of those
situations in which a transaction may see the partial effect of another transaction before these
partial effects are compensated. It is then proposed in [LKS91a] that, to prevent an inconsistent
database state from being seen in a distributed database environment, a global transaction should
be unaffected by both aborted and committed subtransactions of another global transaction. A
concurrency control correctness criterion, termed serializability with respect to compensation (SRC),
is further proposed in [MRKS92] to preserve database consistency in the HDDBS environment
throughout the execution of global transactions possessing no value dependencies among their
subtransactions. This criterion prohibits any global transaction that is serialized between a global
transaction G i and its compensating transaction CG i from accessing the local sites at which G i
aborts. All these proposed approaches are inadequate to a situation in which value dependencies
are present among the subtransactions of a global transaction. Value dependencies, which specify
data flow among the global subtransactions of each global transaction, are important characteristics
of flexible transactions.
In this paper, we will propose a concurrency control criterion for the execution of flexible and
local transactions in the HDDBS environment. We will carefully analyze the effects of compensa-
tion, retry, and switching to alternate executions on global concurrency control. We will propose
a specific correctness criterion for schedules of concurrent flexible and local transactions, called
F-serializability, in the HDDBS environment. We will then demonstrate that an F-serializable
execution maintains global database consistency. We will also present a graph-based scheduling
protocol for flexible transactions that ensures F-serizalibility.
This paper is organized as follows. Section 2 introduces the system and flexible transaction
models. In Section 3, we discuss the issues relevant to global concurrency control on the execution
of flexible and local transactions. Section 4 proposes a global concurrency control criterion. In
Section 5, we offer a scheduling protocol to implement the proposed criterion. Concluding remarks
are presented in Section 6.
Preliminaries
In this section, we shall introduce the system and transaction models that will be used in the rest
of the paper.
2.1 System Model
The system architecture under consideration is shown in Figure 1. A HDDBS consists of a set of local database systems LDBS_1, ..., LDBS_m, where each LDBS_i is a pre-existing autonomous database management system on a set of data items D_i at a local site (LS_i); superimposed on them is a global transaction manager (GTM). We assume that no integrated schema is provided and that users know of the existence of the local database systems. The set of data items at a local site LS_i is partitioned into local data items, denoted LD_i, and global data items, denoted GD_i, such that LD_i ∪ GD_i = D_i and LD_i ∩ GD_i = ∅. The set of all global data items is denoted GD, GD = ⋃_{i=1..m} GD_i. Global transactions are submitted to the GTM and then divided into a set of subtransactions which are submitted to the LDBSs individually, while local transactions are directly submitted to the LDBSs.
We assume that the GTM submits flexible transaction operations to the local databases through
servers that are associated with each LDBS.
In a HDDBS, global consistency means that no integrity constraints among the data in the
different local databases are violated. As with transactions on a database, global and flexible
transactions on a HDDBS should be defined so that, if executed in isolation, they would not violate
the global consistency of the HDDBS. The concurrency control protocol schedules the concurrent
execution of the flexible subtransactions in the HDDBS such that global consistency is maintained.
However, unlike monolithic databases where all the data are strictly controlled by a single trans-
[Figure 1: The HDDBS system model]
action manager, HDDBSs submit transactions to autonomous local databases. Thus, the HDDBS
transaction manager cannot ensure that transactions submitted independently to local databases
do not violate global integrity constraints. Following the previous research commonly proposed
in the community [BST90], we assume that all local transactions do not modify the global data
items in GD. Note that this does not prevent local transactions from reading global data items, as
long as the execution of the local transaction maintains local integrity constraints. Based on this
assumption, local transactions maintain both local and global integrity constraints.
2.2 Flexible Transaction Model
From a user's point of view, a transaction is a sequence of actions performed on data items in
a database. In a HDDBS environment, a global transaction is a set of subtransactions, where
each subtransaction is a transaction accessing the data items at a single local site. The flexible
transaction model supports flexible execution control flow by specifying two types of dependencies
among the subtransactions of a global transaction: (1) execution ordering dependencies between
two subtransactions, and (2) alternative dependencies between two subsets of subtransactions. A
formal model has been offered in [ZNBB94]. Below, we shall provide a brief introduction of this
model.
Let T = {t_1, t_2, ..., t_n} be a repertoire of subtransactions and P(T) the collection of all subsets of T. Let t_i, t_j ∈ T and T_i, T_j ∈ P(T). We assume two types of control flow relations to be defined on T and on P(T), respectively: (1) (precedence) t_i OE t_j if t_i must finish its execution before t_j, and (2) (preference) T_i ⪰ T_j if T_i is preferred to T_j (i ≠ j). If T_i ⪰ T_j, we also say that T_j is an alternative to T_i. 1 Note that T_i and T_j may not be disjoint. Both precedence and preference relations are irreflexive and transitive. That is, they define partial order relations; in other words, no subtransaction precedes itself and no subset of T is an alternative to itself.
The precedence relation defines the correct parallel and sequential execution ordering dependencies
among the subtransactions. The semantics of the precedence relation refers to the execution
order of subtransactions. t_1 OE t_2 implies that t_1 finishes its execution before t_2 does. Note that t_2 may start before or after t_1 finishes. The preference relation defines the priority dependencies among alternate sets of subtransactions for selection in completing the execution of T. For instance, {t_i} ⪰ {t_j, t_k} implies that either t_j and t_k must abort when t_i commits or t_j and t_k should not be executed if t_i commits. In this situation, {t_i} is of higher priority than {t_j, t_k} to be chosen for execution.
A flexible transaction is defined as follows:
Definition 1 (Flexible transaction) A flexible transaction T is a set of related subtransactions {t_1, t_2, ..., t_n} on which the precedence (OE) and preference (⪰) relations are defined.
The execution of a flexible transaction may contain several alternatives. Let T_i be a subset of T, with a precedence relation OE defined on T_i, so that (T_i, OE) is a partial order of subtransactions. (T_i, OE) is a representative partial order, abbreviated as OE-rpo, if the execution of the subtransactions in T_i represents the committed execution of the entire flexible transaction T. Clearly, if (T_i, OE) is a OE-rpo, then there are no subsets T_i1 and T_i2 of T_i such that T_i1 ⪰ T_i2. The structure of a flexible transaction T can thus be depicted as a set of OE-rpos {(T_1, OE_1), ..., (T_k, OE_k)} of subtransactions, with ⋃_{i=1..k} T_i = T. T may contain more than one subtransaction
at a local site. Let (T, OE) be a OE-rpo of T. A partial order (T', OE') is a prefix of (T, OE) if:
• for all t', t'' ∈ T', t' OE' t'' if and only if t' OE t'';
• for each t ∈ T', all predecessors of t in T are in T'.
A partial order (T', OE') is the prefix of (T, OE) with respect to t ∈ T if it is a prefix of (T, OE) and T' contains only all predecessors of t in T. A partial order (T', OE') is the suffix of (T, OE) with respect to t ∈ T if OE' agrees with OE on T' and T' contains only t and all successors of t in T.
In general, the alternate relationship need not exist only between two individual subtransactions; one subtransaction
may be a semantic alternative of several subtransactions.
2 Note that when transaction becomes a traditional global transaction.
We now use prefixes and suffixes to show how a flexible transaction can switch from executing one OE-rpo to executing a lower-priority alternative. Intuitively, if two OE-rpos share some prefix and the subtransactions t_1, ..., t_k immediately following that prefix in the execution of the first abort, then the second can continue execution from the point where the shared prefix completed. In this case, the set {t_1, ..., t_k} forms a switching set, formally defined as follows:
Definition 2 (Switching set) Let t_1, ..., t_k be subtransactions in OE-rpo (T, OE) of a flexible transaction. {t_1, ..., t_k} forms a switching set of (T, OE) if
• there is a OE-rpo (T', OE') of T such that the prefix of (T, OE) preceding t_1, ..., t_k is a prefix of (T', OE'), and
• (T', OE') has lower priority than (T, OE).
A switching point is a subtransaction in a switching set which relates one OE-rpo to another OE-rpo.
Let p_1 = (T_1, OE_1) and p_2 = (T_2, OE_2) be two OE-rpos of flexible transaction T. We say that p_1 has higher priority than p_2 in T if there are T_1i ⊆ T_1 and T_2j ⊆ T_2 such that T_1i ⪰ T_2j. The preference relation defines the preferred order over alternatives. We state that two subsets T_j and T_k have the same priority if there is a T_i ⊂ T such that T_i ⪰ T_j and T_i ⪰ T_k, but neither T_j ⪰ T_k nor T_k ⪰ T_j.
The execution of a flexible transaction T at any moment must be uniquely determined. We say
that a flexible transaction T is unambiguous if the following conditions are satisfied:
• For any switching set in a OE-rpo (T, OE) of T, the switching set has no two alternatives with the same priority.
• None of the OE-rpos of T are in a priority cycle such that p_i1 ⪰ p_i2 ⪰ ... ⪰ p_ik ⪰ p_i1 for a permutation p_i1, ..., p_ik of OE-rpos of T.
Note that the set of all OE-rpos of a flexible transaction may not be clearly ranked, even if it is
unambiguous. The aborting of subtransactions determines which alternative OE-rpo will be chosen.
In the remainder of this paper, we assume that all flexible transactions are unambiguous. The
following example is given in [ZNBB94]:
Example 2 Consider a travel agent information system arranging a travel schedule for a customer.
Assume that a flexible transaction T has the following subtransactions:
t_1: pay the plane fare from account a_1;
t_2: pay the plane fare from account a_2;
t_3: reserve and pay for a non-refundable plane ticket;
t_4: rent a car from Avis;
t_5: reserve a limo seat to and from the hotel.
The following OE-rpos are defined on the above subtransactions:
is the switching set of p 1 and ft 4 g is the switching set of both p 1 and p 3 . With these
switching sets, we have g. Clearly, the set of OE-rpos in this
flexible transaction is unambiguous. Note that cannot be
ranked in any preferred order. 2
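To make the model concrete, one possible in-memory representation of a flexible transaction keeps each OE-rpo as a set of precedence pairs and records the preference relation between alternative subsets. The sketch below is purely illustrative (the class layout and the orderings shown are assumptions, loosely following Example 1), not a structure prescribed by the model:

    from dataclasses import dataclass, field

    @dataclass
    class FlexibleTransaction:
        subtransactions: set = field(default_factory=set)
        # Each OE-rpo: name -> set of (before, after) precedence pairs.
        rpos: dict = field(default_factory=dict)
        # Preference: list of (preferred subset, alternative subset) pairs.
        preference: list = field(default_factory=list)

    transfer = FlexibleTransaction(
        subtransactions={"t1", "t2", "t3"},
        rpos={"p1": {("t1", "t2")}, "p2": {("t1", "t3")}},
        preference=[(frozenset({"t1", "t2"}), frozenset({"t1", "t3"}))],
    )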
In each OE-rpo of subtransactions, the value dependencies among operations in different subtransactions define data flow among the subtransactions. Let (T, OE) be a OE-rpo and T have subtransactions t_j1, ..., t_jk. We say that t_jl (1 < l ≤ k) is value dependent on t_j1, ..., t_j(l-1) if the execution of one or more operations in t_jl is determined by the values read by t_j1, ..., t_j(l-1).
Each subtransaction is categorized as either retriable, compensatable, or pivot. We say that
a subtransaction t i is retriable if it is guaranteed to commit after a finite number of submissions
when executed from any consistent database state. The retriability of subtransactions is highly
determined by implicit or explicit integrity constraints. For instance, a bank account usually has
no upper limit, so a deposit action is retriable. However, it usually does have a lower limit, so a
withdrawal action is not retriable.
A subtransaction is compensatable if the effects of its execution can be semantically undone
after commitment by executing a compensating subtransaction at its local site. We assume that
a compensating subtransaction ct i for a subtransaction t i will commit successfully if persistently
retried. 3 ct i must also be independent of the transactions that execute between t i and ct i . Local
database autonomy requires that arbitrary local transactions be executable between the time t i is
committed and the time ct i is executed, and these local transactions must be able to both see and
overwrite the effects of t i during that time. For example, consider a HDDBS that has account a in
LS_1 and account b in LS_2, with the integrity constraints a ≥ 0 and b ≥ 0. Suppose a transaction T_1 transfers $100 from a to b. The withdrawal subtransaction t_1 at LS_1 is compensatable, while the deposit subtransaction t_2 at LS_2 is not. The compensation of t_2 may violate the integrity constraint b ≥ 0 if a local transaction which is executed between t_2 and its compensating subtransaction withdraws the deposited amount from b. Note that both t_1 and t_2 are compensatable in the traditional distributed database
environment, which ensures that the transactions that are executed between t 2 and its compensating
subtransaction ct 2 are commutative with ct 2 [KLS90, BR92].
A subtransaction t i is a pivot subtransaction if it is neither retriable nor compensatable. For
3 This requirement, termed persistence of compensation, has been discussed in the literature [GM83].
example, consider a subtransaction which reserves and pays for a non-refundable plane ticket.
Clearly, this subtransaction is not compensatable. This subtransaction is also not retriable, since
such a ticket might never be available.
The concept of semi-atomicity was introduced in [ZNBB94] for the commitment of flexible
transactions. The execution of a flexible transaction T is committable if the property of semi-
atomicity is preserved, which requires one of the following two conditions to be satisfied:
• All its subtransactions in one OE-rpo commit and all attempted subtransactions not in the committed OE-rpo are either aborted or have their effects undone.
• No effects of its subtransactions remain permanent in local databases.
We will now define the commit dependency relationships between any two subtransactions of
a flexible transaction that should be obeyed in the commitment of these subtransactions. We say that t_j is commit dependent on t_i if the commitment of t_i must precede that of t_j to preserve semi-atomicity. Clearly, if t_i OE t_j in (T_i, OE_i), then t_j is commit dependent on t_i. These de-
pendencies, which are determined by the execution control flow among subtransactions, are termed
e-commit dependencies. To ensure that the execution of a OE-rpo can terminate, the commitment of
compensatable subtransactions should always precede that of pivot subtransactions, which in turn
should precede the commitment of retriable subtransactions. These dependencies are termed t-
commit dependencies. Also, for those subtransactions which are retriable, value dependencies must
be considered in determining a commitment order. Each retriable subtransaction remains retriable
without resulting in any database inconsistency, as long as all other subtransactions that are value
dependent upon it have not committed. Such dependencies are termed v-commit dependencies.
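A commit order respecting these dependencies can be derived with a topological sort. The sketch below only illustrates that idea under an assumed pair representation of the dependencies; it is not the commitment protocol of [ZNBB94]:

    from graphlib import TopologicalSorter

    def commit_order(subtxns, commit_deps):
        # commit_deps: set of (t_i, t_j) pairs meaning "t_i must commit before t_j",
        # collecting the e-, t-, and v-commit dependencies of one OE-rpo.
        ts = TopologicalSorter({t: set() for t in subtxns})
        for before, after in commit_deps:
            ts.add(after, before)
        return list(ts.static_order())   # raises CycleError if the dependencies conflict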
We say that a flexible transaction is well-formed if it is committable. Well-formed flexible
transactions have been identified in [ZNBB94] for which the semi-atomicity can be maintained.
Well-formed global transactions have been identified in [MRKS92] for which the semantic atomicity
[GM83] can be maintained. We assume, in this paper, that the flexible transactions are well-formed
for the discussion of global concurrency control. We say that a database state is consistent if it
preserves database integrity constraints. As defined for traditional transactions, the execution of a
flexible transaction as a single unit should map one consistent HDDBS state to another. However,
for flexible transactions, this definition of consistency requires that the execution of subtransactions
in each OE-rpo must map one consistent HDDBS state to another.
In the above discussion, we have used banking and travel agency as application examples. It has
been recognized that the concept of flexible transactions can be extended to specify the activities
involving the coordinated execution of multiple tasks performed by different processing entities
93]. For instance, in manufacturing applications, flexible transactions are used to
specify and control the data flow between agile partner applications. Typical inter-task dependencies
include: (1) Ordering dependencies which define the parallel and sequential executions among
tasks, (2) Trigger dependencies which define the contingency executions among tasks, and (3) Real-time
dependencies which define real-time constraints on tasks; e.g., a chronological dependency is
defined by specifying the start time and the expected completion time of tasks. These dependencies
can be realized using the flexible transaction model. The extension of these dependencies has also been recognized in the concept of workflow, which has been used as a specification facility to separate control and data flows in a multi-system application from the rest of the application code.
3 Issues in Global Concurrency Control
In this section, we will discuss various inconsistent scenarios that may arise when compensation
or retrial are allowed. These observations are important input for the establishment of a suitable
global concurrency control correctness criterion.
3.1 Global Serializability
Global serializability [BS88, BGMS92] is an accepted correctness criterion for the execution of
(non-flexible) global and local transactions in the HDDBS environment. A global schedule S is
globally serializable if the committed projection from S of both global transactions in the HDDBS
environment and transactions that run independently at local sites is conflict-equivalent to some
serial execution of those transactions. 4 In the traditional transaction model, it has been shown that
a global schedule S is globally serializable if and only if all S_k (1 ≤ k ≤ m) are serializable and there exists a total order O on the global transactions in S such that, for each local site LS_k (1 ≤ k ≤ m), the
serialization order of global subtransactions in S k is consistent with O [GRS91, MRB
Note that each global transaction can have more than one subtransaction at a local site, as long
as their serialization order is consistent with O. That is, if global transaction G 2 precedes global
transaction G 3 and follows global transaction G 1 in the serialization order, then the serialization
order of all subtransactions of G 2 must precede that of all subtransactions of G 3 and follow that
of all subtransactions of G 1 at each local site.
Following the definition of semi-atomicity on flexible transactions, a committed flexible transaction
can be considered as a traditional global transaction that contains only the subtransactions in
its committed OE-rpo. Some subtransactions in the flexible transaction which do not belong to the
committed OE-rpo may have committed and their effects are compensated. In this discussion, we
call such subtransactions invalid subtransactions. Invalid subtransactions and their compensating
transactions are termed surplus transactions, as their effects are not visible in the HDDBS once
4 See [BHG87] for the definitions of committed projection and conflict equivalence.
the flexible transaction has committed.
Let S be the global schedule containing the concurrent executions of both the subtransactions (and compensating subtransactions) of flexible transactions T_1, ..., T_l and a set of local transactions. Let S_c be the projection from S of the committed local transactions and the subtransactions of the committed OE-rpos of T_1, ..., T_l. We extend global serializability to treat surplus transactions as local transactions:
Definition 3 (Global serializability) A global schedule S is globally serializable if the projection
of committed local, flexible, and surplus transactions in S is conflict-equivalent to some serial
execution of these transactions.
In the rest of this paper, for the sake of simplicity, we assume that each global transaction has
at most one subtransaction at each local site. However, all the theorems can be directly applied to
the general situation where multiple subtransactions are permitted in each global transaction. In
the case of flexible transaction, we consider that each OE-rpo of a flexible transaction has at most
one subtransaction at a local site. Note that T may still contain more than one subtransaction at
a local site, provided that they are in different OE-rpos.
We denote o ! o' if operation o is executed before operation o' in S. Let t_i1, ..., t_im be the subtransactions at local sites LS_1, ..., LS_m in the committed OE-rpo of flexible transaction T_i, and t_j1, ..., t_jm be the analogous subtransactions in the committed OE-rpo of flexible transaction T_j; t_ip and t_jp are both executed at local site LS_p. Let ! s be a serialization ordering on transactions, with t_ip ! s t_jp indicating that the execution of t_ip must be serialized before that of t_jp on the LDBS on which they executed. Applying the above global serializability theory to the execution of flexible transactions, we have the following theorem:
Theorem 1 Let T_1, ..., T_l be the well-formed flexible transactions in global schedule S. Assume that all LDBSs maintain serializability on the local transactions and subtransactions at their sites. The global serializability of S is preserved if, for S_c, there exists a permutation T_k1, ..., T_kl of T_1, ..., T_l such that, for any T_ki and T_kj with i < j, the subtransaction of T_ki is serialized before that of T_kj (t_kip ! s t_kjp) at every local site LS_p where both execute subtransactions of their committed OE-rpos.
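The condition of Theorem 1 can be checked mechanically: the per-site serialization orders of the committed OE-rpos must embed into a single total order. A sketch of such a check, assuming each local order is available as a list of flexible-transaction identifiers:

    from graphlib import TopologicalSorter, CycleError

    def global_order(site_orders):
        # site_orders: dict mapping a local site to the serialization order (a list
        # of flexible-transaction ids) of their subtransactions at that site.
        graph = {}
        for order in site_orders.values():
            for before, after in zip(order, order[1:]):
                graph.setdefault(after, set()).add(before)
                graph.setdefault(before, set())
        try:
            return list(TopologicalSorter(graph).static_order())
        except CycleError:
            return None   # the local orders cannot be embedded into one total order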
Several solutions have been proposed to enforce the serializable execution of global transactions,
including forced local conflicts [GRS91]. These approaches are readily applicable to flexible trans-
actions. However, globally serializable schedules may no longer preserve database consistency in
the execution of flexible and local transactions, because the global serializability criterion fails to
consider the constraints on the committed subtransactions which are not a part of the committed
OE-rpos, and on their compensating subtransactions. In fact, every commit protocol which uses compensation
during the execution of flexible transactions may face difficulties with the preservation
of the consistency of globally serializable schedules.
3.2 Problematic Situations
We will now consider some scenarios that may arise if compensation or retrial are allowed. Before
discussing these scenarios, we first introduce the concept of the serialization point, which is similar
to the existing notion of the serialization event [ED90] and the serialization function [MRB].
Definition 4 (Serialization point) Let t_ip and t_jp be two subtransactions executed on local site LS_p. Operation o_ip of t_ip is a serialization point of t_ip in global schedule S if, for any subtransaction t_jp, there exists an operation o_jp of t_jp such that t_ip ! s t_jp implies that o_ip is executed before o_jp in S.
The determination of serialization points depends on the concurrency control protocol of the
LDBS. For instance, if the local database system uses strict two-phase locking (pessimistic concurrency
control), then the serialization point can fall anywhere between the moment when the
subtransaction takes its last lock and its commitment [MRB]. If local conflicts are forced, each subtransaction updates a shared data item at the local site. The order of these updates forms the serialization order. A detailed discussion of this procedure can be found in [MRB]. Note
that, in general, any individual transaction may not have a serialization point in the schedule. This
is the case when serialization graph testing [BHG87] is used as the concurrency control protocol.
However, in enforcing globally serializable schedules, we must determine the serialization orders of
subtransactions at local sites in order to deal with local indirect conflicts [GRS91, MRB].
Thus, we assume that the serialization point of each subtransaction can be determined at the local
site. We also assume that, with the help of forced local conflicts, we can ensure that the serialization
point is reached after the transaction begins, and before it commits (the bounded serialization
point assumption).
First, let us consider the following example:
Example 3 Consider a HDDBS that has data items a, b at LS_p and data item c at LS_p+1. Let the integrity constraints be a > c and b > c. Two flexible transactions T_1 and T_2 at local site LS_p are executed as follows:
• t_1p, which does b := b - 1, executes its serialization point.
• t_2p, which does a := b, executes its serialization point, enforcing t_1p ! s t_2p. t_2p has read data item b that was written by t_1p.
• t_1p+1, which does c := c - 1, aborts; T_1 makes a global decision to abort.
• Compensating subtransaction ct_1p, which does b := b + 1, is executed. 2
In this example, all the effects of T_1 are eventually removed from the execution, including the effects of t_1p. Global serializability is preserved, because flexible transaction T_1 was aborted and correctly compensated for, even though its effects were read by flexible transaction T_2. However, T_2 proceeds based on the reading of the data item that was updated by t_1p and thus may be inconsistent. Consider an initial database state a = 5, b = 5, c = 4. The resulting database state after the execution of ct_1p would be a = 4, b = 5, c = 4, which is inconsistent (the constraint a > c is violated).
The concept of isolation of recovery [LKS91b, LKS91a] states that a global transaction should
be unaffected by both the aborted and the committed subtransactions of other global transactions.
In addition, the above example shows that, if a global transaction is affected by a committed sub-transaction
which must later be compensated, the task of constructing the compensating subtransaction
will be greatly complicated by the need to restore database consistency. Such compensating
subtransactions must be capable of undoing any effects that may have been seen by other global
transactions. For example, in the above example, ct 1p must restore not only data item b but also
data item a. However, it is also undesirable for ct 1p to have to check for reads by other global sub-
transactions. Note that the effects of the compensated subtransactions on local transactions need
not be considered if we assume that the execution of a subtransaction transfers the local database
from one consistent state to another. This leads us to the following observation:
Observation 1 A necessary condition for maintaining database consistency in an execution containing
concurrent flexible transactions in which compensating subtransactions undo only the effects
of their corresponding compensatable subtransactions is that, for each compensatable subtransaction
t ip of T i at a given local site LS p (1 ≤ p ≤ m), subtransactions that are subsequently serialized at the
local site must not read the data items updated by t ip until either T i makes a global decision to commit
the OE-rpo containing t ip or the compensating subtransaction for t ip has executed its serialization
point.
As a further complication, a compensating subtransaction ct ip may still unilaterally abort and
need to be retried. Some conflicting subtransactions may have to be aborted to ensure that they
are serialized after ct ip . To avoid such undesirable cascading aborts, we must delay the execution
of the serialization point of such conflicting subtransactions until one of the following conditions
holds for t ip :
- The flexible transaction containing t ip has made a decision to commit the OE-rpo which contains
t ip .
- ct ip has committed.
The following example is also problematic:
Example 4 Two flexible transactions T 1 and T 2 at local site LS p are executed as follows:
- T 1 makes a global decision to commit.
- t 1p executes its serialization point.
- t 2p executes its serialization point, enforcing t 1p →s t 2p .
- t 1p , which is retriable, unilaterally aborts.
- t 1p is resubmitted and re-executes its serialization point. Now we have t 2p →s t 1p , which
contradicts the order t 1p →s t 2p that was enforced.
At this point, if the serialization order is not consistent with those at other local sites, t 2p must be
aborted. 2
To avoid cascading aborts, we make the following observation:
Observation 2 A necessary condition for avoiding cascading aborts when a subtransaction is retried
is to ensure that a subtransaction does not execute its serialization point until all retriable
subtransactions that precede it in the serialization order of the execution and have not been aborted
have successfully committed.
If, in this situation, we do not avoid the cascading abort, we can also allow a situation where
compensation must be cascaded:
- T 1 makes a global decision to commit.
- t 1p executes its serialization point.
- t 2p executes its serialization point, enforcing t 1p →s t 2p . Then it commits.
- t 1p , which is retriable, unilaterally aborts.
- t 1p is resubmitted and re-executes its serialization point. Now we have t 2p →s t 1p , which
contradicts the order t 1p →s t 2p that was enforced.
In this case, T 2 must backtrack to compensate for t 2p , so that the correct serialization order can be
attained at LS p . Unfortunately, if t 2p is not compensatable, this may be impossible.
Whether or not the global transaction is flexible, using either compensation or retrial to regain
consistency leads to blocking. Furthermore, for flexible transactions, switching to an alternate
OE-rpo can also create problems. Let us consider an alternate in flexible transaction T 1 . Let
t 11 OE · · · OE t 1n be the preferred OE-rpo, and let t 11 OE t 1j be the second alternate OE-rpo. Note that
t 1j is at a different local site than any subtransaction in the preferred OE-rpo. We then have the
following example, where T 1 should be serialized before T 2 :
Example 5 The executions of two flexible transactions T 1 and T 2 are given as follows:
- The pivot subtransaction of T 1 , which is in the preferred OE-rpo, commits (therefore T 1 makes a global decision to commit).
- A subtransaction of T 1 in the preferred OE-rpo executes its serialization point.
- t 2j executes its serialization point.
- The subtransaction of T 1 in the preferred OE-rpo unilaterally aborts.
- T 1 chooses an alternate OE-rpo and submits t 1j . It has yet to execute its serialization point, so
we now have t 2j →s t 1j , which contradicts the serialization order that was enforced.
At this point, t 2j must be aborted to maintain global serializability. 2
Again, if the cascading abort is not avoided, we can also allow a cascading compensation
situation:
- t 1p , which is compensatable, commits.
- t 2p , which is pivot, commits (and T 2 makes a global decision to commit).
- T 1 backtracks, issuing compensating subtransaction ct 1p .
- T 1 chooses a different OE-rpo, issuing subtransaction t' 1p .
We now have t 2p →s t' 1p , which contradicts the serialization order t 1p →s t 2p . Furthermore, since
t 2p is not compensatable, the only way to regain global serializability is to abort the entire flexible
transaction T 1 .
We observe and later prove that if the cascading abort is avoided, then the cascading compensation
cannot occur. To avoid cascading aborts in this situation, we have the following observation:
Observation 3 A necessary condition for avoiding cascading aborts when an alternate OE-rpo is
attempted is to ensure that, at any local site, an LDBS does not execute the serialization point
of a subtransaction until, for all uncompleted flexible transactions that precede it in the global
serialization order, no alternate subtransaction can possibly be initiated.
Observations 1, 2 and 3 above indicate that some blocking of the execution of subtransactions
which reach their serialization point early may be unavoidable with a concurrency control algorithm
designed to avoid cascading aborts. This blocking will result in the delay of the commitment
operations of these subtransactions. Observations 1 and 2 relate directly to delays which are
caused by compensation or retrial and which cannot therefore be avoided by any global transaction
model that uses these techniques to regain consistency after a subtransaction aborts.
Observation 3 concerns delays that arise only with flexible transactions. There are conflicting
considerations here; the more flexible (and therefore more failure-resilient) global transactions will
be prone to greater delays. Global transactions with less flexibility, which are less resilient to failure,
will have fewer delays. Consequently, while the flexible transaction approach can indeed extend
the scope of global transactions, it does cause more blocking than does the traditional transaction
model.
The observations we have made concerning concurrency control will play a dominant role in the
design of concurrency control algorithms for maintaining global serializability on the execution of
flexible and local transactions. Following these observations, we see that the execution of a flexible
transaction may be greatly affected by the concurrent execution of other flexible transactions.
4 A Global Scheduling Criterion
We will now propose a new concurrency control criterion for the execution of flexible and local
transactions. This criterion, termed F-serializability, places restrictions on global serializability.
Thus, the set of F-serializable schedules is a subset of globally serializable schedules. Following
Observations 1-3, we see that only Observation 1 has an effect on the definition of such a criterion,
while Observations 2 and 3 will impact on the design of a concurrency control protocol.
4.1 Serializability with Flexible Transactions
Clearly, the database inconsistency that may be caused by compensation can be prevented in a
globally serializable execution simply by requiring that, for each compensatable subtransaction
t i of T i at a given local site LS p , conflicting subtransactions that are subsequently
serialized at the local site do not execute their serialization points until either T i makes a global
decision to commit the OE-rpo containing t i or the compensating subtransaction for t i has executed
its serialization point.
A less restrictive approach is also possible. By definition, a compensating subtransaction ct i
should be able to compensate the effect of t i regardless of what transactions execute between t i
and ct i . However, such transactions may propagate the effect of t i to other transactions and such
propagation cannot be statically considered in ct i . For flexible transaction T j following T i in the
global serialization order, if subtransaction t j of T j accesses (reads or writes) the data items not
written by t i of T i , t j will definitely not propagate the effect of t i and can then be serialized between
t i and ct i in the same manner as any local transaction. Only when t j accesses the data items written
by t i may database inconsistency result. Let AC(t) denote the set of data items that t accesses and
commits, RC(t) denote the set of data items that t reads and commits, and WC(t) denote the set
of data items that t writes and commits. We define a compensation-interference free property on
global schedules as follows:
Definition 5 (Compensation-interference free) A global schedule S is compensation-interference
free if, for every subtransaction t j which is serialized between a subtransaction t i and its compensating
transaction ct i in S, WC(t i ) ∩ AC(t j ) = ∅.
We now propose a new global concurrency control criterion as follows:
Definition 6 (F-serializability) Let S be a global schedule of a set of well-formed flexible transactions
and local transactions. S is F-serializable if it is globally serializable and compensation-
interference free.
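As a concrete reading of Definitions 5 and 6, the following sketch (Python; the schedule representation, the tuple layout, and the example data are ours, not the paper's) checks the compensation-interference free property for a serialized list of (sub)transactions carrying WC and AC sets:

# Each entry: (name, kind, WC, AC); kind is "sub", "comp" (a ct_i), or "local".
# The 'compensates' map names the subtransaction that each ct_i compensates.
def compensation_interference_free(schedule, compensates):
    ok = True
    for i, (ti, kind_i, wc_i, _) in enumerate(schedule):
        if kind_i != "sub":
            continue
        # locate ct_i, if any, later in the serialization order
        ct_pos = next((k for k, e in enumerate(schedule)
                       if e[1] == "comp" and compensates[e[0]] == ti), None)
        if ct_pos is None or ct_pos < i:
            continue
        for j in range(i + 1, ct_pos):       # every t_j serialized between t_i and ct_i
            _, _, _, ac_j = schedule[j]
            if wc_i & ac_j:                   # WC(t_i) ∩ AC(t_j) must be empty
                ok = False
    return ok

# surplus pair t1/ct1; t2 only accesses items not written by t1 -> allowed
S = [("t1", "sub", {"b"}, {"b"}),
     ("t2", "sub", {"a"}, {"a", "x"}),
     ("ct1", "comp", {"b"}, {"b"})]
print(compensation_interference_free(S, {"ct1": "t1"}))   # True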
Note that, in Definition 6, the execution of a well-formed flexible transaction may result in
both a committed flexible transaction and some surplus transactions. Comparing the definition of
F-serializable schedules given above with that of global serializable schedules given in Definition
3, we can easily see that the set of F-serializable schedules is a subset of globally serializable
schedules. However, if we had followed the traditional definition of global serializability in which
all subtransactions and their compensating subtransactions of a flexible transaction at a local site
are treated as a logically atomic subtransaction, then the set of F-serializable schedules would be a
superset of globally serializable schedules. For example, consider a committed flexible transaction
T 1 which has generated a surplus pair of subtransaction t 1 and its compensating subtransaction
ct 1 . Let another flexible transaction T 2 contain a subtransaction t 2 that accesses data items such
that RC(t 1 ) ∩ WC(t 2 ) ≠ ∅ while WC(t 1 ) ∩ AC(t 2 ) = ∅. Then
T 2 can be F-serialized between t 1 and ct 1 , but cannot be globally serialized
between t 1 and ct 1 . In addition, since the surplus pair of the subtransactions belong to a different
representative partial order from the committed one, these subtransactions can be treated as a
separate global transaction.
Theorem 2 given below demonstrates that F-serializability ensures global database consistency.
We first show that the compensation-interference free property in an F-serializable global schedule
is inherited in its conflict equivalent schedule.
Lemma 1 Given an F-serializable global schedule S, any schedule S 0 that is conflict-equivalent to
S is compensation-interference free.
Proof: The proof proceeds by contradiction. Suppose we do have a schedule S 0 that is conflict-
equivalent to S and is not compensation-interference free. There then exists a subtransaction t j
such that it is serialized between a subtransaction t i and its compensating transaction ct i in S 0 ,
and t j accesses some data item d written by t i . Since S 0 is conflict equivalent to S, t j must also
access d written by t i in S and is serialized between t i and ct i in S. Consequently, S is not
compensation-interference free, contradicting the given condition. 2
Without loss of generality, we assume that, in the below theorem, all local, flexible, or surplus
transactions in S are committed. Thus, S is identical to S c .
Theorem 2 An F-serializable schedule S preserves global database consistency.
Proof: Let S be an F-serializable schedule that transfers a consistent database state DS 0 to a
new database state DS 1 . Let S 0 = T 1 T 2 · · · T n be its equivalent global serial schedule, where T i
(1 ≤ i ≤ n) is a committed local, flexible, invalid or compensating (sub)transaction. By Lemma 1
we know that S 0 is compensation-interference free. We now demonstrate that DS 1 is consistent.
We first prove that every local, flexible, or invalid (sub)transaction reads a consistent database
state. The proof proceeds by induction on the position of each transaction in S 0 .
Basis: Obviously, T 1 can only be a local, flexible or invalid (sub)transaction. Since T 1 reads
from DS 0 , it reads a consistent database state.
Induction: Assume that, for all transactions T i with 1 ≤ i < k, if T i is not a compensating
subtransaction, then T i reads a consistent state. Consider T k . There are two cases:
Case 1: T k is a local transaction.
Since all transactions preserve local database consistency, T k thus reads a locally
consistent database state.
Case 2: T k is an invalid subtransaction or a committed flexible transaction.
Let D be the set of all data items existing in the global database. Let D 0 be D minus the set
of data items updated by those invalid subtransactions appearing in T 1 , ..., T k−1 whose
compensating subtransactions appear after T k . Since S 0 is compensation-interference free,
T k reads only the state of D 0 . Thus, T k reads only a consistent database state.
By the semantics of compensation, the partial effects of invalid subtransactions in S 0 are semantically
compensated by their compensating subtransactions. Since no effects of invalid subtransactions
are seen by other transactions before they are compensated, any inconsistencies caused by
these invalid subtransactions are restored by their compensating subtransactions. Let S 00 be S 0
restricted to those transactions that are neither invalid subtransactions nor their compensating
subtransactions. Thus, S 00 consists only of the serial execution of atomic local and flexible transactions.
Since each transaction in S 00 sees a consistent database state, then S 00 preserves the global database
consistency. Therefore, DS 1 is consistent. 2
4.2 Avoiding Cascading Aborts and Compensations
Lemma 2 Let F-serializability be maintained in global schedule S. The following rules are necessary
and sufficient for a flexible transaction scheduler to follow in order to avoid cascading aborts for
serialization reasons:
- No subtransaction of flexible transaction T j can execute its serialization point until all retriable
subtransactions of flexible transactions T i such that T i is serialized before T j have committed.
- No subtransaction of flexible transaction T j can execute its serialization point until all alternative
subtransactions of flexible transactions T i such that T i is serialized before T j have committed or
can no longer participate in T i 's committed OE-rpo.
Proof: The necessary condition was shown in Observations 2 and 3.
The sufficient condition can be shown by the observation that cascading aborts really occur when
some subtransaction t j of some flexible transaction T j is serialized before a subtransaction t i of some
flexible transaction T i on the same local database, with T i preceding T j in the global serialization
order. The above conditions ensure that
the serialization point of a subtransaction, which fixes its place in the local database's serialization
order, is not made until all subtransactions that could precede it in the global serialization order
are either committed or decided against. Since no earlier subtransaction can attempt to execute a
new subtransaction on the local database, no cascading abort can occur. 2
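The two rules of Lemma 2 amount to a simple admission test for a subtransaction's serialization point. The following sketch (Python; the status encoding and the data structures are ours) checks the rules against the flexible transactions that precede T j in the global serialization order:

def may_execute_serialization_point(preceding):
    # preceding: one dict per flexible transaction T_i serialized before T_j;
    # values are lists of subtransaction statuses.
    for Ti in preceding:
        # Rule 1: all retriable subtransactions of T_i must have committed.
        if any(s != "committed" for s in Ti["retriable"]):
            return False
        # Rule 2: all alternative subtransactions of T_i must have committed
        # or be unable to participate in T_i's committed OE-rpo ("excluded").
        if any(s not in ("committed", "excluded") for s in Ti["alternatives"]):
            return False
    return True

print(may_execute_serialization_point(
    [{"retriable": ["committed"], "alternatives": ["committed", "excluded"]}]))  # True
print(may_execute_serialization_point(
    [{"retriable": ["active"], "alternatives": []}]))                            # False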
Recall from Example 5 that cascading compensations can occur when a subtransaction t 2 reads
from some uncommitted subtransaction t 1 and then commits. If t 1 later aborts, t 2 must be compensated
for. We show the following theorem with respect to cascading compensations:
Theorem 3 A flexible transaction scheduler avoids cascading compensations if it avoids cascading
aborts.
Proof: Assume that the scheduler avoids cascading aborts. This means that it never needs to force
the abort of an uncommitted subtransaction t because of the violation of serialization. By Lemma
2, this is guaranteed by delaying the execution of the serialization point of t until all subtransactions
which must be serialized previously have committed. Therefore, t cannot have committed before
these subtransactions commit. Thus, t need never be compensated for. So cascading compensation
is also avoided. 2
5 A Scheduling Protocol
In this section, we present a GTM scheduling protocol that ensures F-serializability on the execution
of local and flexible transactions, and avoids cascading aborts. This protocol is based on the
assumption that if the concurrency control protocol of a local database does not allow the HDDBS
to determine the serialization point for each subtransaction, a ticket scheme similar to [GRS91] can
be implemented on the local database. Thus, the serialization point for a subtransaction is always
reached between the time the subtransaction begins and the time it commits.
For the GTM scheduling protocol, we propose an execution graph testing method to avoid the
high overhead of keeping track of serialization points, to ensure F-serializability, and to avoid
cascading aborts. For scheduling purposes, we maintain a stored subtransaction execution graph (SSEG)
among subtransactions to be scheduled. The SSEG is defined as follows:
Definition 7 (Stored Subtransaction Execution Graph) The Stored Subtransaction Execution
Graph (SSEG) of a set of flexible transactions in global schedule S is a directed graph whose
nodes are the global subtransactions and compensating subtransactions for those flexible transactions,
and whose edges t j → t i indicate that t i must serialize before t j due to preference, precedence, or
conflict.
Global subtransaction nodes are labeled t m
ip for flexible transaction T i running on local site p.
If more than one global subtransaction is defined for T i on LS p , then the nodes can be ordered
by the order of their OE-rpos, and m indicates this node's position in that order. If t m
ip is
compensatable, its compensating subtransaction's node is ct m
ip . We begin with a few definitions.
A flexible transaction commits once its pivot subtransaction commits. We say that a flexible
transaction robustly terminates once all subtransactions in the committed OE-rpo have committed
and all compensating subtransactions for committed subtransactions not in the committed OE-rpo
have also committed.
The GTM scheduling protocol assumes that each global subtransaction and compensating sub-transaction
predeclares its read- and write- sets. It includes node and edge insertion and deletion
rules, and an operation submission rule. All nodes and edges associated with a flexible transaction
are inserted as a unit. If some edge insertion fails for flexible transaction T i , no edges may be
inserted for subsequent flexible transaction T j until either the insertion succeeds or all edges for T i
have been deleted. Nodes and edges for flexible transaction T i are inserted into the SSEG according
to the following rules:
Node Insertion Rule: Insert a node for each subtransaction defined for T i . For each
compensatable subtransaction t m ip , insert a node ct m ip .
Edge Insertion Rule: For subtransaction t m ip , where edge insertion does not cause a
cycle:
1. For each previously-scheduled t n hp , insert edge t m ip → t n hp .
2. For each previously-scheduled ct n hp whose compensated subtransaction conflicts with
t m ip (that is, the compensation-interference free condition would not hold), insert edge
t m ip → ct n hp .
3. If t m ip is compensatable, insert edge ct m ip → t m ip .
4. If t m ip is (e-, v-, or t-) commit-dependent on t n iq , insert edge t m ip → t n iq .
5. For all n < m, insert edge t m ip → t n ip . If t n ip is compensatable, insert edge
t m ip → ct n ip .
The first two edge insertion cases ensure F-serializability. The third rule ensures that in a
surplus pair the invalid subtransaction precedes its compensating transaction. The rest of the cases
ensure that, for all flexible transactions, the resulting schedule is commit-dependency preserving
and all alternatives are attempted in order.
Nodes and edges are deleted from the SSEG according to the following rules:
Node Deletion rule:
1. Upon completion of a flexible transaction backtrack, delete all nodes representing
subtransactions in the current switching set or its successors, as well as the nodes
representing their compensating subtransactions.
2. Upon commitment of a subtransaction or compensating subtransaction, delete its
node.
3. Upon commitment of a pivot or a retriable subtransaction, delete all nodes representing
its alternatives and their successors in the flexible transaction, as well as
the nodes representing the compensating subtransactions of these deleted nodes.
4. Upon robust termination of a flexible transaction, delete its remaining nodes.
Edge Deletion Rule: Delete all edges incident on deleted nodes.
The operations of a global subtransaction of T i are submitted to the local databases according
to the following rule.
Operation Submission Rule: Submit operations of a subtransaction (including begin
and commit) to its local database only if its node in the SSEG has no outgoing edges.
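A minimal sketch of the bookkeeping implied by these rules follows (Python; the class name, the graph representation, and the example labels are ours, and only the acyclicity check, edge deletion, and the operation submission test are shown):

class SSEG:
    def __init__(self):
        self.out = {}                       # node -> set of nodes it points to
    def add_node(self, v):
        self.out.setdefault(v, set())
    def _reaches(self, src, dst):
        seen, stack = set(), [src]
        while stack:
            v = stack.pop()
            if v == dst:
                return True
            if v in seen:
                continue
            seen.add(v)
            stack.extend(self.out.get(v, ()))
        return False
    def add_edge(self, frm, to):
        # Edge frm -> to: 'to' must serialize before 'frm'.
        if self._reaches(to, frm):          # insertion would close a cycle: reject
            return False
        self.add_node(frm); self.add_node(to)
        self.out[frm].add(to)
        return True
    def delete_node(self, v):
        # Edge Deletion Rule: drop edges incident on a deleted node.
        self.out.pop(v, None)
        for dsts in self.out.values():
            dsts.discard(v)
    def may_submit(self, v):
        # Operation Submission Rule: submit only if no outgoing edges.
        return not self.out.get(v)

g = SSEG()
g.add_node("t_ip"); g.add_node("t_jp")
g.add_edge("t_jp", "t_ip")                  # t_jp scheduled after t_ip at LS_p
print(g.may_submit("t_ip"), g.may_submit("t_jp"))   # True False
g.delete_node("t_ip")                       # t_ip commits, so its node is deleted
print(g.may_submit("t_jp"))                 # True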
The SSEG algorithm is defined based on the above rules. Note that the implementation of
the Operation Submission Rule can vary depending on what concurrency control mechanism is
used at each local site. If each local DBMS uses the strict two-phase locking as its concurrency
control mechanism, then the serialization point of a subtransaction can be controlled by the GTM
As a result, some operations of a subtransaction t i of a flexible transaction may be
submitted before other subtransactions which are serialized before t i reach their serialization points.
However, for other concurrency control mechanisms, we generally cannot have such gain from the
local DBMSs.
We now show that the SSEG algorithm maintains global consistency. We begin with a basic
lemma on the restraints the SSEG places on the execution, then apply that to the scheduling
algorithm.
Lemma 3 If there is an edge t jq → t ip in the SSEG, then if both t ip and t jq execute and commit,
t ip must serialize before t jq .
Proof: By the operation submission rule, no operation of t jq (including begin and commit) can be
executed until the node t jq has no outgoing edges. Therefore, t jq cannot begin (and consequently
by the bounded serialization point assumption cannot execute its serialization point) until the edge
deleted. By the edge deletion rule, the edge is only deleted once the node t ip is deleted.
By the node deletion rule, the node t ip is only deleted once the subtransaction commits. By the
bounded serialization point assumption, t ip must have executed its serialization point before it
commits. Therefore, t ip must have executed its serialization point before t jq could possibly have
executed its serialization point, so the two subtransactions must serialize in the order that t ip
precedes t jq . 2
Note also that if there is an edge t jq → t ip in the SSEG but one of them does not commit (it
either fails to execute or aborts), then at most one of them is present in the serialization order, so
how they are actually executed is unimportant.
Theorem 4 Consider two flexible transactions T i and T j . T i is F-serialized before T j if the nodes
and edges of T i are inserted before those of T j .
Proof: Since all nodes are inserted for T i before any nodes are inserted for T j , we know by the
edge insertion rule that all edges between subtransactions of T i and subtransactions of T j must be
directed from some subtransaction of T j to some subtransaction of T i in the same local database.
Consider some local site LS p at which T i and T j conflict. By the first edge insertion rule, there is
an edge t jp → t ip in the SSEG. By Lemma 3 this means that if both subtransactions execute and
commit, t ip serializes before t jp at LS p . If one of the two subtransactions does not commit, then
the two committed OE-rpos of the flexible transactions may not both have subtransactions at LS p .
In such a case, LS p will not have any effect on the serialization order of T i and T j .
If t ip is executed and is later compensated for, and t jp is also executed, we have the following
two cases:
Case 1: The compensation-interference free condition does not hold between the execution of t ip
and t jp . By the second edge insertion rule, there is an edge t jp → ct ip in the SSEG, so the operations
of t jp could not be submitted until this edge is deleted. By Lemma 3, if both t jp and ct ip execute
and commit, this means that ct ip must serialize before t jp .
Case 2: The compensation-interference free condition does hold. There is then no edge t jp →
ct ip , and t jp may be interleaved between t ip and ct ip . However, the resulting schedule is still
compensation-interference free.
Consequently, T i is F-serialized before T j . 2
Following Theorem 4, we can see that the SSEG algorithm maintains global consistency. We
now also show that the SSEG algorithm has the additional desirable property of avoiding cascading
aborts and cascading compensations. We know by Theorem 3 that if it avoids cascading aborts,
then it avoids cascading compensations. Therefore, we show the following:
Theorem 5 The SSEG protocol for scheduling flexible transactions avoids cascading aborts.
Proof: By Lemma 3, we know that if, for subtransactions t i and t j , there is an edge t j → t i
in the SSEG, then t i must serialize before t j . In fact, similar reasoning shows us that if such an
edge exists in the SSEG, then t i must either commit, abort, or be removed from consideration
before t j can begin. Thus, for this proof, we merely need to show that edges are inserted into the
SSEG that are sufficient to prevent a concurrent execution that allows for a cascading abort.
The first condition for avoiding cascading aborts is that no subtransaction of flexible transaction
T j can execute its serialization point until all retriable subtransactions of any T i such that T i is
serialized before T j have committed. The first case in the edge insertion rule enforces this by
inserting edges t m jp → t n ip for all previously-scheduled t n ip .
The second condition for avoiding cascading aborts is that no subtransaction of flexible transaction
T j can execute its serialization point until all alternative subtransactions of any flexible
transaction T i such that T i is serialized before T j have committed or can no longer participate in T i 's
committed OE-rpo. The first case in the edge insertion rule also enforces this.
Since the SSEG protocol ensures that both conditions necessary for avoiding cascading aborts
are enforced, the theorem holds. 2
Thus, we have shown that the SSEG algorithm maintains global consistency. We have also
shown that the SSEG algorithm has the additional desirable property of avoiding cascading aborts
and cascading compensations. Based on the rules for insertion and deletion of nodes and edges in
SSEG as well as the operation submission rule, the SSEG algorithm can be efficiently implemented
in the HDDBS environment.
6 Conclusions
This paper has proposed a new correctness criterion on the execution of local and flexible transactions
in the HDDBS environment. We have advanced a theory which facilitates the maintenance of
F-serializability, a concurrency control criterion that is stricter than global serializability in that it
prevents the flexible transactions which are serialized between a flexible transaction and its compensating
subtransactions from affecting any data items that have been updated by that flexible transaction.
Consequently, no effect of a compensatable subtransaction is spread to other flexible transactions
before it is compensated. In order to prevent cascading aborts, the effects of retrial and alternatives
on concurrency control must also be considered. These factors generate unavoidable blocking
in the execution of flexible transactions. Thus, a trade-off between the flexibility of specifying global
transactions and high concurrency in the execution of flexible transactions remains.
--R
Using Flexible Transactions to Support Multi-System Telecommunication Applications
Merging Application-centric and Data-centric Approaches to Support Transaction-oriented Multi-system Workflows
Overview of Multidatabase Trans-action Management
Concurrency Control and Recovery in Databases Systems.
Multidatabase Update Issues.
Reliable Transaction Management in a Multidatabase System.
Scheduling with Compensation in Multidatabase Systems.
A transactional model for long-running activities
A paradigm for concurrency control in heterogeneous distributed database systems.
A Multidatabase Transaction Model for InterBase.
Using Semantic Knowledge for Transaction Processing in a Distributed Database.
Node Autonomy in Distributed Systems.
On Serializability of Multidatabase Transactions Through Forced Local Conflicts.
A Formal Approach to Recovery by Compensating Transactions.
A multidatabase interoperability.
A Theory of Relaxed Atomicity.
An Optimistic Commit Protocol for Distributed Transaction Management.
Atomic commitment for integrated database systems.
The Concurrency Control Problem in Multidatabases: Characteristics and Solutions.
A transaction model for multidatabase systems.
Superdatabases for Composition of Heterogeneous Databases.
On Transaction Workflows.
Database systems: Achievements and opportunities.
Transaction Concepts in Autonomous Database Environments.
Prepare and commit certification for decentralized trans-action management in rigorous heterogeneous multidatabases
A theory of global concurrency control in multidatabase systems.
Ensuring Relaxed Atomicity for Flexible Transactions in Multidatabase Systems.
--TR
--CTR
Heiko Schuldt , Gustavo Alonso , Catriel Beeri , Hans-Jrg Schek, Atomicity and isolation for transactional processes, ACM Transactions on Database Systems (TODS), v.27 n.1, p.63-116, March 2002 | flexible transactions;concurrency control;heterogeneous and autonomous database;serializability;transaction management |
628151 | Segmented Information Dispersal (SID) Data Layouts for Digital Video Servers. | AbstractWe present a novel data organization for disk arraysSegmented Information Dispersal (SID). SID provides protection against disk failures while ensuring that the reconstruction of the missing data requires only relatively small contiguous accesses to the available disks. SID has a number of properties that make it an attractive solution for fault-tolerant video servers. Under fault-free conditions, SID performs as well as RAID 5 and organizations based on balanced incomplete block designs (BIBD). Under failure, SID performs much better than RAID 5 since it significantly reduces the size of the disk accesses performed by the reconstruction process. SID also performs much better than BIBD by ensuring the contiguity of the reconstruction accesses. Contiguity is a very significant factor for video retrieval workloads, as we demonstrate in this paper. We present SID data organizations with a concise representation which enables the reconstruction process to efficiently locate the needed video and check data. | Introduction
Digital video server systems must provide timely delivery of video stream data to an ensemble of users even in
degraded modes in which one or more system disks are not operational. One design problem is the following:
for a given set of disks, which video data layout can provide this service while avoiding buffer starvation and
overflow as well as providing fault-tolerance? We present a novel data layout scheme to solve this problem.
These layouts can greatly reduce the added workload served by the operational disks when the disk array
experiences failures.
The most widely considered approaches for this problem are based on RAID 5 or RAID 3 data layouts [21].
The staggered striping data organization of Berson, Ghandeharizadeh, Muntz and Ju provides effective disk
bandwidth utilization for both small and large workloads [4]. The video servers studied by Berson, Golubchik
and Muntz [3] employ a RAID 5 data layout and utilize a modified and expanded read schedule in degraded
mode requiring additional buffering. There can be transition difficulties in which data is not delivered on
time. A method proposed by Vin, Shenoy and Rao [28] does not rely on RAID 5 parity redundancy but
on redundancy properties of the video data itself. Streaming RAID of Tobagi, Pang, Baird, and Gang is
one of the first commercial video servers [26]. There has been considerable work on disk array declustering
as well; much of this centers on studies and application of balanced incomplete block designs BIBD [11]
to transaction processing with workload characteristics vastly different from those of video streams. This
includes the work of Holland and Gibson [14], Ng and Mattson [18], Reddy, Chandy and Banerjee [22],
Alvarez, Burkhard and Cristian [1], Alvarez, Burkhard, Stockmeyer, and Cristian [2] as well as Schwarz,
Steinberg, and Burkhard [25]. Disk array declustering for video servers based on BIBD has been considered
by Cohen [9] and independently by Ozden, Rastogi, Shenoy and Silberschatz [20]. The very recent overlay
striping data layout of Triantafillou and Faloutsos [27] maintains high throughput across a wide range of
workloads by "dynamically" selecting an appropriate stripe size.
A considerable advantage of our data layout scheme is that it achieves a large number of concurrent
streams while requiring only small buffering per stream. At the same time, the layouts are simple and obtain
load balancing across the system in spite of vastly differing preferences for various "movies." Moreover, we
obtain reasonable performance in degraded operation with one or more disk failures without adverse effects
on the fault-free performance. The cost of this degraded performance improvement is additional storage
space.
The paper is an elaboration of the abstract presented within the ACM Multimedia Conference, November
1996 [7]. The paper is organized as follows: Section 2 contains an example of various candidate data layout
schemes for video servers, Section 3 presents our segmented information dispersal data layout, Section 4 contains
our analysis of the performance results where we determine the buffer space required per video stream
for RAID 3, RAID 5, and SID, the effect of varying the dispersal factor, and the impact of discontiguous
data layouts. We present several video server designs in Section 5. Our conclusions are given within the
final section.
2 Alternatives Overview
To familiarize the reader with our approach and demonstrate its simplicity, we describe the architecture of
a simple dedicated video server consisting of five disks. The disk array contains only video and check data;
system files and metadata are stored elsewhere. Movies are divided into equal sized portions referred to as
slices. We use this terminology to emphasize the difference between our approach, in which the size of each
redundant data fragment is typically smaller than the size of a slice, and the RAID 5 approach in which the
sizes of slices and redundant data fragments are exactly the same. Each slice is stored contiguously (except
for the RAID 3 organization.) Our approach is directly applicable to constant bit rate and variable bit rate
compression schemes such as MPEG-1 [16] and MPEG-2 [13, 15]. For MPEG encoded movies, a slice size is
typically a few hundred kilobytes.
The videos are stored using disk striping, a technique in which consecutive slices are allocated to the
disks in a round robin fashion except for RAID 3. Two immediate benefits are that this approach allows
presentation of multiple concurrent streams and that the approach provides load balancing across the disks
without knowledge of access frequencies for the stored videos. The RAID 3 data layout provides fine-grain
striping while the other data layouts provide coarse-grain striping. Fine-grain striping works well for very
lightly loaded systems but coarse-grain striping is much better for heavily loaded systems [6, 10, 27].
Our scheme provides for the inevitable disk failure. We present the single failure resilient version here but
multiple failures can be accommodated [9]. For all the schemes presented here, the redundancy calculations
utilize the ubiquitous and efficient exclusive-or operation.
We present four distinct data layouts; the first is an array of data disks often referred to as "just a bunch
of disks" (JBOD), the second is RAID 3, the third is RAID 5, and finally an example of our scheme segmented
information dispersal (SID). Each layout has some advantages and disadvantages which we mention as well.
For "larger" video servers, one approach utilizes several instances of these data organizations; we will consider
such examples in the design section of the paper. We assume that all video materials will be stored within
the server. Typically, our servers will be able to service more streams than the number of full length videos
stored within them. We assume the server contains the most frequently requested video materials and many
clients will be independently requesting access to the same video. We return to these points in the final
section of the paper.
The JBOD data organization is shown in Figure 1. The notation Sx:y designates slice y of movie x. The
round-robin assignment of slices to disks achieves load balancing. The scheme utilizes 100 percent of the
storage capacity of the five disks for video data. However, a disk failure results in a non-delivered slice every
fifth delivery for all movies; it essentially results in complete loss of service. The shaded slice indicates a
typical fault-free access for a single video stream.
The RAID 3 data organization [21] is shown in Figure 2. Each slice is decomposed into four fragments
each residing on a distinct disk. The notation Sx:y:z designates fragment z of slice y of movie x, and P x:y
denotes the check data of the fragments of slice y of movie x (e.g.
The data within the shaded rectangles is read in parallel by all disks. The use of slice fragments also
achieves load balancing. The scheme utilizes 80 percent of its storage capacity for video data. The array
can accommodate a single disk failure. The shaded slice fragments indicate the typical fault-free access for
a single video stream. No degradation of performance occurs since the parity can be computed fast enough.
Figure 1: JBOD with slice striping
Note that we will always access the parity data (even though it is not shaded) in parallel with the video slice
data. However, this organization suffers from poor fault-free performance due to the dedicated parity disk
which is not utilized under the fault-free mode and the cumulative effects of fine-grained striping.
Figure 2: RAID 3 Data Layout
The RAID 5 data organization [21] is shown in Figure 3. Individual slices are stored contiguously on
single disks; consecutive slices are stored in round robin fashion. Sx:y denotes slice y of movie x. P x:z
denotes the parity of the slices y such that z = ⌊y/4⌋ for movie x; that is, for the slices that appear in the
same row. The shaded slice indicates the typical fault-free access for a single video stream. The RAID 5
data layout obtains load balancing and the scheme utilizes 80 percent of its storage capacity for video data.
The array can accommodate a single disk failure. This organization has good fault-free performance, but it
suffers from poor performance under failure since the workload on the surviving disks doubles. The doubling
arises by assuming only that the read load is evenly divided among the disks. Each surviving disk will
have to read one additional slice for each video stream that was to be serviced by the failed disk. Here
P x:z resides on disk (4 − z) mod 5.
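For the five-disk RAID 5 layout just described, the slice and parity placement can be written down directly (Python sketch; the function names and the generalization from 5 to n disks are ours and follow the placement rules quoted above):

def raid5_slice_disk(y, n=5):
    return y % n                        # slice y of a movie resides on disk y mod n
def raid5_parity_group(y, n=5):
    return y // (n - 1)                 # z = floor(y / 4) for n = 5
def raid5_parity_disk(z, n=5):
    return (n - 1 - z) % n              # P x.z resides on disk (4 - z) mod 5

print(raid5_slice_disk(3), raid5_parity_group(3), raid5_parity_disk(0))   # 3 0 4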
Figure 3: RAID 5 Data Layout
The SID data organization is shown in Figure 4. Individual slices are stored contiguously on single disks;
consecutive slices are stored in a round robin fashion. Each slice has an associated redundant/check data
fragment. In this configuration, the redundant data is one-half the size of the slice. (In this paragraph, any
attribute of SID that depends on the configuration will by associated with the phrase "in this configuration.")
Each slice is logically partitioned into two equal sized data fragments in this configuration. The notation
F x:y:z designates data fragment y of slice z of movie x and P x:z denotes the check fragment stored on the
disk with data fragments F x:0:z and F x:1:z. The slice Sx:z from Figure 3 is exactly the data fragments
F x:0:z and F x:1:z juxtaposed. In this configuration, if d = z mod 5 (the disk on which the slice is stored)
and r = ⌊z/5⌋ (the "row number" within the data layout), then
P x:z = F x:0:(((d + 1) mod 5) + 5r) ⊕ F x:1:(((d + 4) mod 5) + 5r);
thus, for example, P1.2 = F1.0.3 ⊕ F1.1.1.
The shaded fragments are contiguous and represent a typical slice access under fault-free operation. In this
configuration, the layout utilizes 66.66 percent of its storage capacity for video data; other SID layouts can
have larger or smaller storage utilization. Now for example, if disk 3 fails, we can reconstruct F1.0.3 and
F1.1.3 by accessing fragments P1.2, F1.1.1, P1.4, and F1.0.0. Since each redundant fragment is only one-half
the size of a slice, the incremental workload will be considerably smaller than for a RAID 5 layout.
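The check-fragment placement and a failed-fragment rebuild for this (5,2) configuration can be sketched as follows (Python; fragments are simulated with short byte strings and all helper names are ours):

n, C = 5, (1, 4)                       # five disks, offsets {1, 4}

def covered(z):
    # slices whose fragments 0 and 1 are protected by check fragment P.z
    d, r = z % n, z // n
    return [((d + c) % n) + n * r for c in C]

def parity(fragments, z):
    a, b = covered(z)
    return bytes(x ^ y for x, y in zip(fragments[(0, a)], fragments[(1, b)]))

# fragments[(i, z)] is data fragment i of slice z (all of equal size k)
frags = {(i, z): bytes([16 * i + z] * 4) for i in (0, 1) for z in range(10)}
P = {z: parity(frags, z) for z in range(10)}

# Disk 3 fails: rebuild F0.3 from P.2 and F1.1, and F1.3 from P.4 and F0.0
f03 = bytes(x ^ y for x, y in zip(P[2], frags[(1, 1)]))
f13 = bytes(x ^ y for x, y in zip(P[4], frags[(0, 0)]))
print(f03 == frags[(0, 3)], f13 == frags[(1, 3)])   # True True

The rebuilt fragments match the originals, mirroring the P1.2, F1.1.1, P1.4, F1.0.0 accesses listed above.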
Figure 4: SID
One possible read scheduling is now described. Time during video presentation is divided into
equal-duration reading cycles when exactly one slice is read for each stream from a disk. (This applies
to constant bit rate compression; for variable bit rate compression, at most one slice is read for each
stream during a reading cycle, as discussed in Section 4.) In the case of RAID 3
organizations, for each slice, the data is obtained by reading from all disks but one. Consequently, the
remainder of this discussion does not apply to RAID 3. The streams being serviced by the video server
are partitioned into groups which we call cohorts. Each cohort has an associated service list of slices (and
fragments under degraded mode of operation) to be read during the reading cycle. Since video slice data
is allocated in a round robin fashion, cohorts logically cycle through the disks moving from one disk to the
next at the start of each reading cycle.
Each cohort has a maximum number of streams it can contain; this number will be limited by the buffer
space and by the capability of the disk drives composing our video server. When the number of streams in
a cohort is lower than this maximum number, the cohort contains one or more free slots. When the display
of a new stream needs to be initiated, the system waits until a cohort with a free slot is about to be served
by the disk where the first slice of the requested movie resides; the new stream is then incorporated into this
cohort service list. When a stream ends, it is dropped from its cohort; this results in a free slot which can
be used to initiate another new stream. Figure 5 shows an example of reading cycles along with their cohort
service lists in a RAID 5 or SID video server with five disks. In this example, a cohort may contain up to
four streams; there are currently three free slots.
disk0 service list: S1.0, S2.10, S5.5, S1.25
disk1 service list: S3.6, S4.11, S2.1
disk2 service list: S2.7, S6.12, S1.2
disk3 service list: S7.3, S3.13, S5.8, S4.3
disk4 service list: S3.9, S4.4, S8.14
reading cycle t
disk0 service list: S3.10, S4.5, S8.15
disk1 service list: S1.1, S2.11, S5.6, S1.26
disk2 service list: S3.7, S4.12, S2.2
disk3 service list: S2.8, S6.13, S1.3
disk4 service list: S7.4, S3.14, S5.9, S4.4
reading cycle t+1
Figure
5: SID and RAID 5 fault-free reading cycles
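The fault-free cohort rotation that produces service lists like those of Figure 5 can be sketched as follows (Python; streams are represented as (movie, slice) pairs, a simplification that is ours):

def next_cycle(service_lists):
    # each cohort moves to the next disk; each stream advances by one slice
    n = len(service_lists)
    new = [None] * n
    for disk, streams in enumerate(service_lists):
        new[(disk + 1) % n] = [(movie, slot + 1) for movie, slot in streams]
    return new

cycle_t = [[(1, 0), (2, 10), (5, 5), (1, 25)],   # disk0: S1.0, S2.10, S5.5, S1.25
           [(3, 6), (4, 11), (2, 1)],            # disk1
           [(2, 7), (6, 12), (1, 2)],            # disk2
           [(7, 3), (3, 13), (5, 8), (4, 3)],    # disk3
           [(3, 9), (4, 4), (8, 14)]]            # disk4
print(next_cycle(cycle_t)[1])
# [(1, 1), (2, 11), (5, 6), (1, 26)] -> disk1 at reading cycle t+1, as in Figure 5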
We conclude our discussion by noting again the load reduction exemplified by our SID example above.
The load reduction occurs by reducing the size of the reconstruction reads; this reduces the transfer time,
which is a dominant component of the buffer replenishment latency. Suppose that disk 3 is inoperative
during reading cycle t + 1. The SID reconstruction equations for the disk 3 slice fragments are
F x:0:(3 + 5j) = P x:(2 + 5j) ⊕ F x:1:(1 + 5j)
F x:1:(3 + 5j) = P x:(4 + 5j) ⊕ F x:0:(5j)
where x designates the movie and j ≥ 0 the slice row number within the data layout. The revised SID
service lists are illustrated within Figure 6; the fragments to the right of the semicolon in reading cycle t + 1
are one-half the size of the slices. The RAID 5 reconstruction invariants are expressed as
P x:j = S x:(4j) ⊕ S x:(4j + 1) ⊕ S x:(4j + 2) ⊕ S x:(4j + 3)
where x designates the movie and j ≥ 0 the data layout row number. The revised RAID 5 service lists are
illustrated within Figure 7. Under failure, SID and RAID 5 behave quite differently; within RAID 5 the
reading cycle would be longer since all objects accessed are the same size while SID's additional accesses are
all much smaller.
disk0 service list: S1.0, S2.10, S5.5, S1.25
disk1 service list: S3.6, S4.11, S2.1
disk2 service list: S2.7, S6.12, S1.2
disk3 service list: S7.3, S3.13, S5.8, S4.3
disk4 service list: S3.9, S4.4, S8.14
reading cycle t
disk0 service list: S3.10, S4.5, S8.15; F2.0.5, F6.0.10, F1.0.0
disk1 service list: S1.1, S2.11, S5.6, S1.26; F2.1.6, F6.1.11, F1.1.1
disk2 service list: S3.7, S4.12, S2.2; P2.7, P6.12, P1.2
disk3 service list:
disk4 service list: S7.4, S3.14, S5.9, S4.4; P2.9, P6.14, P1.4
reading cycle t+1
Figure
Reading cycles with disk 3 inoperative
disk0 service list: S1.0, S2.10, S5.5, S1.25
disk1 service list: S3.6, S4.11, S2.1
disk2 service list: S2.7, S6.12, S1.2
disk3 service list: S7.3, S3.13, S5.8, S4.3
disk4 service list: S3.9, S4.4, S8.14
reading cycle t
disk0 service list: S3.10, S4.5, S8.15; S2.10, S6.15, S1.0
disk1 service list: S1.1, S2.11, S5.6, S1.26; S2.11, P6.3, S1.1
disk2 service list: S3.7, S4.12, S2.2; P2.2, S6.12, S1.2
disk3 service list:
disk4 service list: S7.4, S3.14, S5.9, S4.4; S2.9, S6.14, P1.0
reading cycle t+1
Figure
7: RAID 5 reading cycles with disk 3 inoperative
Within SID, the slices S2.8, S6.13, and S1.3 are calculated by noting that Sx:(3+5j) consists of F x:0:(3+5j)
and F x:1:(3+5j). These calculations are performed at the conclusion of reading cycle t + 1.
SID calculations:
F1.0.3 = P1.2 ⊕ F1.1.1, F1.1.3 = P1.4 ⊕ F1.0.0
F2.0.8 = P2.7 ⊕ F2.1.6, F2.1.8 = P2.9 ⊕ F2.0.5
F6.0.13 = P6.12 ⊕ F6.1.11, F6.1.13 = P6.14 ⊕ F6.0.10
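The extra entries that each surviving disk picks up in Figure 6 can be generated from the same offsets (Python sketch for the two-fragment case; the function and label names are ours):

n, C = 5, (1, 4)
def slice_disk(z):
    return z % n

def degraded_reads(z_lost):
    # reads needed to rebuild both fragments of the slice on the failed disk
    d, r = z_lost % n, z_lost // n
    reads = []
    for i, c in enumerate(C):                     # fragment i lost from slice z_lost
        zp = ((d - c) % n) + n * r                # slice whose check fragment covers it
        reads.append(("P", zp, slice_disk(zp)))   # fetch P.zp
        other = 1 - i
        zo = ((d - c + C[other]) % n) + n * r     # companion data fragment
        reads.append((f"F{other}", zo, slice_disk(zo)))
    return reads

print(degraded_reads(3))
# [('P', 2, 2), ('F1', 1, 1), ('P', 4, 4), ('F0', 0, 0)] -> P1.2, F1.1.1, P1.4, F1.0.0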
3 Segmented Information Dispersal (SID)
The SID data organization provides a middle ground between the two well-known extremes RAID level 1
and RAID level 5 as well as capturing the goal outlined at the end of the previous section. The (n,q)-SID
design has parameters n and q designating the number of disks within the array and the dispersal factor
respectively. We give a functional description of the scheme here; a formal description is given in [8]. Every
slice contains k \Delta q data bytes, composed of q juxtaposed fragments each of size k bytes; a slice of data is
stored contiguously on a single disk. The dispersal factor q designates the number of juxtaposed fragments
constituting a slice. The parameter k is not central to our discussion other than it determines the slice size.
Every slice of data has an associated check fragment consisting of k bytes that is stored on the same disk as
the slice. The check fragment value is the exclusive-or of q data fragments each residing on one of q other
disks. The fragments composing the check fragments provide single disk failure tolerance. That is, any slice
of data can be reconstructed by accessing the surviving disks and obtaining at most one fragment (k bytes)
from each. SID requires the check fragments as well as data fragments to have size k bytes. However, the
typical data access, in fault-free operation, transfers a slice (k · q bytes of data) residing on a single disk. The
redundancy ratio, measuring the ratio of the size of the check data to the size of the check plus user data, is
1/(q + 1) for the SID schemes; in RAID 1, it is 1/2 and in RAID 5, it is 1/n. In our overview examples, the
(5,2)-SID of Figure 4 has a redundancy ratio of 1/3 and the 5 disk RAID 5 in Figure 3 has a redundancy
ratio of 1/5.
An approach to obtaining suitable SID data layouts is presented immediately after discussing the relationship
between n and q. Suppose a slice of k · q bytes must be recovered. Each of its q fragments resides
within one of q check fragments. Furthermore, each of these check fragments requires q − 1 additional data fragments
so we can obtain the desired fragment. Thus, we must access q · q = q 2 fragments, each of size
k. If each of these fragments resides on a distinct disk and we include the failed disk, we obtain
n ≥ q 2 + 1.    (1)
Thus, we see that q cannot be any larger than √(n − 1) within the SID scheme. We will refer to this inequality
as the necessary condition. As a technical aside, we note that there are at most four designs in which equality
holds; i.e. n equals q 2 + 1. These are SID designs for q equal to 2, 3, 7, and possibly 57. Our previous SID
example is the q equals 2 case. A significant consequence of q being less than √(n − 1) is that during degraded
operation the reconstruction work is evenly distributed over only q 2 disks rather than the surviving n − 1.
Consequently, we must forgo our desire to evenly distribute the reconstruction workload over all surviving
disks other than in the three (or possibly four) exceptional cases. The non-existence of (q 2 + 1, q)-SID designs
is discussed in detail within [8]; this follows from known graph theoretic results regarding the non-existence
of q-regular graphs with q 2 + 1 nodes and girth 5, referred to as strongly-regular graphs, except for q equal
to 2, 3, 7, and possibly 57.
One approach to obtaining SID designs is exhaustive search together with backtracking; this will obtain
any and all such designs but it is an enormous computational task. The technique we present now, referred
to as separated difference sets, provides (n,q)-SID designs in which q is reasonably large but not necessarily
maximal.
A separated difference set with parameters n and q, denoted (n,q)-SDS, is a set C of q positive
integers less than n satisfying the following condition. Let D be the set of all differences between ordered
pairs of distinct elements of C, that is, D = { (c − c') mod n : c, c' ∈ C, c ≠ c' }; then C ∪ D must
contain exactly q + q × (q − 1) = q 2 elements.
The name designates the fact that each of the q × (q − 1) differences must be distinct and they must
also differ from the q elements of C. Separated difference sets are very straightforward to construct even by
exhaustive search. As an example, we consider a (5,2)-SDS in which C is {1, 4}; then the difference set D is
{2, 3} and we see that C ∪ D contains the necessary four elements. As a slightly larger example, we consider
an (n,3)-SDS design; here C could be {1, 4, 10}; for n = 11 the difference set D is {2, 3, 5, 6, 8, 9}
and the union of the two contains the required nine elements provided n is at least 11.
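Such a search is indeed short to write down (Python sketch; written by us, and it may return a different valid set than the {1, 4, 10} used in the text):

from itertools import combinations

def is_sds(C, n):
    D = {(a - b) % n for a in C for b in C if a != b}
    # all q(q-1) ordered differences distinct, and disjoint from the q offsets
    return len(D) == len(C) * (len(C) - 1) and not (D & set(C))

def find_sds(n, q):
    for C in combinations(range(1, n), q):
        if is_sds(C, n):
            return C
    return None

print(is_sds({1, 4}, 5), find_sds(11, 3))
# True (1, 3, 8) -- a different but equally valid (11,3)-SDS than {1, 4, 10}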
An (n, q)-SDS immediately provides an (n, q)-SID design. The elements of the C set designate offsets for
the parity calculations. There are n disks, labeled 0, 1, ..., n − 1, each containing slice and check data. Each
slice S z consists of q juxtaposed fragments denoted F 0,z , F 1,z , ..., F q−1,z ; the associated parity fragment
P z contains the exclusive-or of the following data fragments:
F 0,((d+c 1 ) mod n)+nr , F 1,((d+c 2 ) mod n)+nr , ..., F q−1,((d+c q ) mod n)+nr ,    (2)
where d = z mod n (the disk on which slice S z resides), r = ⌊z/n⌋ (the "row number" of slice S z in the
layout), and C = {c 1 , c 2 , ..., c q }. This notation is the same as in Section 2 except the movie number is omitted. The elements of the
C set determine only the offsets for parity. The selection of fragments is arbitrary provided all are covered
within the design. The explicit choices above cover all the data fragments in the SID design. Our example in
Figure 4 is based on the (5,2)-SDS in which C is {1, 4}. A slightly larger example is presented at the end of
this section.
We now verify that the data layouts constructed above are viable SID designs. We show that the
reconstruction of a slice S z will entail fetching q 2 fragments on q 2 distinct disks within our SDS based SID
design. We have noted, in determining inequality 1 above, that q 2 fragments must be accessed. The following
set of parity fragments contains stripe data for S z :
P ((d−c 1 ) mod n)+nr , P ((d−c 2 ) mod n)+nr , ..., P ((d−c q ) mod n)+nr ,    (3)
where, as before, d = z mod n and r = ⌊z/n⌋. The parity fragment P ((d−c k ) mod n)+nr is computed using the
following data fragments:
F j−1,((d−c k +c j ) mod n)+nr for 1 ≤ j ≤ q.    (4)
First we show that the set of disks containing the data fragments within P ((d−c k ) mod n)+nr which reside on
disks (d − c k + c j ) mod n with j ≠ k are distinct from the disks containing stripe data for S z . Otherwise,
we have (d − c i ) mod n = (d − c k + c j ) mod n for some i, j, k with j ≠ k. The expression (d − c i ) mod n denotes a parity disk
listed in (3) and the expression (d − c k + c j ) mod n denotes a disk, listed in (4), containing the fragment data
needed by the parity disk (d − c k ) mod n. However, this cannot happen within an SDS construction since we
would have a difference c k − c j equivalent to an offset c i and this would diminish the cardinality of C ∪ D.
The other possibility is that the sets of disks containing the fragments within P ((d−c i ) mod n)+nr and
P ((d−c j ) mod n)+nr (i ≠ j) include disk d as well as at least one other disk in common. Then we would have
(d − c i + c k ) mod n = (d − c j + c l ) mod n with (i, k) ≠ (j, l). Each expression denotes a disk that
contains a data fragment for the i th or j th parity disk. However, this cannot happen within an SDS construction
since we would have a pair of equivalent differences which would diminish the cardinality of C ∪ D.
Thus, our SDS based SID data layouts operate as claimed.
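The distinct-disk property argued above can also be checked mechanically for the (11,3)-SDS {1, 4, 10} (Python sketch; the representation is ours):

n, C = 11, (1, 4, 10)

def parity_members(z):
    # data fragments covered by check fragment P.z
    d, r = z % n, z // n
    return [(i, ((d + c) % n) + n * r) for i, c in enumerate(C)]

def reconstruction_disks(failed_disk):
    disks = set()
    for i, c in enumerate(C):                 # fragment i of the lost slice (row 0)
        zp = (failed_disk - c) % n            # slice whose check fragment covers it
        disks.add(zp % n)                     # the parity disk
        for k, zk in parity_members(zp):
            if k != i:                        # companion data fragments
                disks.add(zk % n)
    return sorted(disks)

print(reconstruction_disks(3))
# [0, 1, 2, 4, 5, 6, 8, 9, 10] -> q*q = 9 distinct disks; disk 7 stays idle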
Table 1 presents SDS based SID designs for 5 ≤ n ≤ 100. For each value of n, the table shows the
obtained q and the set of SDS offsets. These SDS configurations were obtained via computer search. For
certain values of n, a higher dispersal factor appears in parentheses. For these cases, a solution with a higher
dispersal factor is known [8], but it cannot be specified using the SDS representation.
We conclude the section with a slightly larger example design: a (11,3)-SDS based SID design. In this
data layout, 25% of the disk space is devoted to redundant data. However, the performance, under single
disk failure, is much better than that of a 4 disk RAID level 5 data layout also with 25% of the disk space
devoted to redundant data; we consider server performance in detail in the next section. Figure 8 presents
the parity calculations for the first slice "row" of the data layout. The next rows follow the same pattern
obtained by using the SDS {1, 4, 10} (e.g., P 11 = F 0,12 ⊕ F 1,15 ⊕ F 2,21 ).
Figure 8: (11,3)-SDS based SID data parity calculation scheme.
The reconstruction calculations for a failed disk access fragments on nine of the ten surviving disks. Figure 9
presents the calculation scheme for failed disk 3; one fragment is obtained from disks 0,1,2,4,5,6,8,9, and 10.
F 0,3 = P 2 ⊕ F 1,6 ⊕ F 2,1
F 1,3 = P 10 ⊕ F 0,0 ⊕ F 2,9
F 2,3 = P 4 ⊕ F 0,5 ⊕ F 1,8
Figure 9: (11,3)-SDS based SID reconstruction calculation scheme for disk 3.
Table 1: SID data layout designs for 5 ≤ n ≤ 100
4 Performance Results
We conducted a study which compares the performance of several possible data organizations. Our results
concern the buffer size per stream requirements for various data layout and cohort size configurations. Larger
reading cycle times imply larger buffer sizes. We are trying to maximize the number of users for a given
buffer size. The models are now presented along with the results.
Our reading cycle is flexible and we utilize the following seek optimization implementation referred to as
the "nearest rule" [5] to minimize the expected actuator movement:
1. Sort the slices according to the cylinder location.
2. Determine the distance between the current position of the actuator and first and last slices of the
sorted list. Move actuator to the cylinder of the slice at the closest end of the service list.
3. Access the slices according to the sorted list.
4. Seek the closest extreme (inner-most or outer-most) cylinder of the disk.
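A direct transcription of the four steps (Python sketch; the tie-breaking and parking details are our reading of the rule):

def nearest_rule_order(cylinders, actuator, dmax):
    req = sorted(cylinders)                               # step 1: sort by cylinder
    if not req:
        return [], actuator
    # step 2: start from whichever end of the sorted list is closer to the actuator
    if abs(actuator - req[0]) > abs(actuator - req[-1]):
        req.reverse()
    order = req                                           # step 3: serve in that order
    # step 4: park at the closest extreme cylinder
    park = 0 if order[-1] <= dmax - order[-1] else dmax
    return order, park

print(nearest_rule_order([500, 40, 320], actuator=0, dmax=999))
# ([40, 320, 500], 999)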
We employ the parameters provided by disk manufacturers to determine disk seek times. Seek time
models have two distinct domains [12, 23]: one is the square root portion and the other is the affine portion.
The boundary between the two portions is designated by b. Let the maximum possible seek distance be
denoted by dmax (i.e. the number of cylinders minus 1). We estimate the time required to seek d cylinders
as follows:
seek(d) = a 1 + a 2 √d if d ≤ b, and seek(d) = a 3 + a 4 d if d > b.
Since we know the track-to-track, maximum and average seek times, denoted t min , t max and t ave respectively,
let X be a random variable which measures the lengths of seeks in a random seek workload for
the disk, and let dmax + 1 be the number of cylinders. The probability of a length d seek is determined
as follows:
P[X = d] = 2 (dmax + 1 − d) / (dmax (dmax + 1)), for 1 ≤ d ≤ dmax,
which is calculated assuming all pairs of cylinders are equally likely. We can now find the coefficients
a 1 , a 2 , a 3 and a 4 by solving the following system of four linear equations:
a 1 + a 2 = t min ,
a 3 + a 4 dmax = t max ,
Σ d P[X = d] seek(d) = t ave ,
a 1 + a 2 √b = a 3 + a 4 b.
The last equation arises at the boundary between the two regions within the seek time model. Of course,
if the disk manufacturer provides these model parameters, these calculations are unnecessary. Typically,
manufacturers provide only t max , t min and t ave values. Table 2 summarizes our disk parameter nomenclature
as well as providing typical values.
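Assuming the two-piece model and the four conditions reconstructed above, the coefficients can be fitted with a small linear solve (Python sketch without external libraries; the parameter values are illustrative and are not taken from Table 2):

def solve4(A, y):
    # Gauss-Jordan elimination with partial pivoting for a 4x4 system
    A = [row[:] + [rhs] for row, rhs in zip(A, y)]
    for c in range(4):
        p = max(range(c, 4), key=lambda r: abs(A[r][c]))
        A[c], A[p] = A[p], A[c]
        for r in range(4):
            if r != c:
                f = A[r][c] / A[c][c]
                A[r] = [a - f * b for a, b in zip(A[r], A[c])]
    return [A[i][4] / A[i][i] for i in range(4)]

dmax, b = 10000, 400                         # illustrative cylinder count and boundary
t_min, t_max, t_ave = 1.0, 18.0, 9.0         # ms, illustrative
p = lambda d: 2 * (dmax + 1 - d) / (dmax * (dmax + 1))   # P[X = d]
pb = sum(p(d) for d in range(1, b + 1))                  # P[X <= b]
sq = sum(p(d) * d ** 0.5 for d in range(1, b + 1))       # E[sqrt(X); X <= b]
li = sum(p(d) * d for d in range(b + 1, dmax + 1))       # E[X; X > b]

A = [[1, 1, 0, 0],                           # a1 + a2*sqrt(1)        = t_min
     [0, 0, 1, dmax],                        # a3 + a4*dmax           = t_max
     [pb, sq, 1 - pb, li],                   # E[seek(X)]             = t_ave
     [1, b ** 0.5, -1, -b]]                  # continuity at d = b
a1, a2, a3, a4 = solve4(A, [t_min, t_max, t_ave, 0.0])
seek = lambda d: a1 + a2 * d ** 0.5 if d <= b else a3 + a4 * d
print(round(seek(1), 3), round(seek(dmax), 3))           # reproduces t_min and t_max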
Since we will be reading several slices in one sweep across the cylinders, we must determine the maximum
seek latency per disk. Assuming that we read m data objects in the sweep, and that we use our seek
optimization, this results in m+1 seeks per reading cycle. It is easy to show that the worst case seek latency
occurs when the actuator starts at one extreme cylinder, makes m+1 equidistant stops, and finally ends at the
other extreme. Then the maximum total seek latency S(m), in seconds, in a reading cycle with m streams per cohort is

  S(m) = (m + 1) · t_seek(d_max / (m + 1)) / 1000   (for m + 1 ≤ d_max).
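A small helper that evaluates this worst-case expression, building on the seek_time function in the previous sketch; it implements only the reconstructed main case, not the original paper's exact definition.

```python
def max_seek_latency(m, coeffs, b, d_max):
    """Worst-case total seek latency S(m), in seconds, for a reading cycle
    with m slices per disk: m+1 equidistant seeks spanning the whole disk."""
    per_seek = d_max / (m + 1)                   # equidistant stops
    return (m + 1) * seek_time(per_seek, coeffs, b) / 1000.0
```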
Finally, we are left to determine the video slice size. We calculate the worst-case time, denoted T(m, β),
required for reading m data objects of size β. Within an n-disk RAID 5 or SID organization, each data object
is a slice of size β; in a RAID 3 organization, each data object is of size β/(n − 1). Accordingly,
the number of streams supported within the RAID 5 or SID organization is n·m, and within the RAID
3 organization the number is exactly m. We determine T by summing the worst-case seek, transfer, and
rotational latencies; consequently, these expressions all have the same form. There are three new terms within
these expressions: t_r denotes the worst-case disk rotational latency in milliseconds, r_t denotes the minimum
transfer rate in kilobytes per second, and w_min denotes the minimum track size in kilobytes (KB).
Table 3 summarizes these definitions.
• RAID 5 or SID without failure:
  T(m, β) = S(m) + m · ( t_r/1000 + β/r_t + ⌈β/w_min⌉ · t_min/1000 )
• RAID 5 with failure:
  T(m, β) = S(2m) + 2m · ( t_r/1000 + β/r_t + ⌈β/w_min⌉ · t_min/1000 )
• SID with failure:
  T(m, β) = S(2m) + m · ( t_r/1000 + β/r_t + ⌈β/w_min⌉ · t_min/1000 ) + m · ( t_r/1000 + (β/q)/r_t + ⌈β/(q·w_min)⌉ · t_min/1000 )
• RAID 3 with or without failure:
  T(m, β) = S(m) + m · ( t_r/1000 + (β/D)/r_t + ⌈β/(D·w_min)⌉ · t_min/1000 )
The S term denotes the worst-case seek latency, the term t_r/1000 denotes the worst-case rotational
latency, the term β/r_t denotes the worst-case transfer time, and ⌈β/w_min⌉·t_min/1000 denotes the
time to do track-to-track moves within a slice. The last three terms are all multiplied by m within the
RAID 5 or SID without-failure expression. Within the RAID 5 with-failure expression, these terms
are multiplied by 2m, since each disk will be providing exactly twice as many slices. The SID with-failure
expression has two terms beyond the seek cost: one is identical to SID without failure, and the second
has fragment-size β/q rather than β-sized transfers. Finally, the RAID 3 expression (with or without failure) is similar
to the RAID 5 without-failure expression except for the stripe width of D+1 and the size β/D of each accessed object.
The RAID 3 organization differs from the others in that each disk contributes to each stream; all seeks and
rotational latencies contribute to the length of the reading cycle. This feature makes RAID 3 a less attractive
data organization.
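As one example, the fault-free RAID 5/SID expression can be evaluated as follows; this is a sketch that reuses max_seek_latency from the earlier block, with parameter names following Tables 2 and 3.

```python
import math

def cycle_time_raid5_sid(m, beta, coeffs, b, d_max, t_r, r_t, w_min, t_min):
    """Worst-case time T(m, beta), in seconds, to read m slices of size beta (KB)
    in a fault-free RAID 5 or SID organization: the worst-case seek latency plus,
    per slice, the rotational latency, the transfer time, and the track-to-track
    moves within the slice."""
    per_slice = (t_r / 1000.0                                  # rotation (ms -> s)
                 + beta / r_t                                  # transfer (KB / (KB/s))
                 + math.ceil(beta / w_min) * t_min / 1000.0)   # head switches
    return max_seek_latency(m, coeffs, b, d_max) + m * per_slice
```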
To avoid starvation, the slice size β must be large enough that the time to display a slice, β/r_c, is no smaller
than the time to access the next slice; here r_c denotes the display consumption rate in kilobytes per second.
For SID and RAID 5, we must determine, for a given number of streams m, the smallest β such that β/r_c ≥ T(m, β).
Similarly, for RAID 3, we must determine the smallest β such that β/r_c ≥ T(m, β), with T given by the RAID 3 expression above.
We shall refer to each of these inequalities as the continuity condition. Once we obtain a suitable value of
β, we know the buffer size per stream: 2β. This selection of β minimizes the buffering requirements.
The reading cycle length c_t is β/r_c seconds. By allocating a buffer of 2β kilobytes for each stream, we can
guarantee that neither buffer starvation nor overflow occurs. From the requirement for non-starvation (β
large enough) and the equations listed above, we can obtain lower bounds on β for the various cases. For
example, for RAID 5 or SID without failure we get

  β ≥ ( S(m) + m·t_r/1000 ) / ( 1/r_c − m/r_t − m·t_min/(1000·w_min) ).

The search for β begins at this lower bound; the value of β is doubled until β ≥ T(m, β)·r_c. Then we do a
binary search to obtain the smallest acceptable β.
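A sketch of this search procedure; here cycle_time stands for one of the T(m, β) functions above and beta_lower for the analytic lower bound, both assumptions of this illustration.

```python
def smallest_slice_size(m, r_c, cycle_time, beta_lower, tol=0.1):
    """Approximately the smallest slice size beta (KB) satisfying the continuity
    condition beta / r_c >= cycle_time(m, beta): double beta from the analytic
    lower bound until the condition holds, then binary-search."""
    hi = max(beta_lower, 1.0)
    while hi / r_c < cycle_time(m, hi):          # doubling phase
        hi *= 2.0
    lo = hi / 2.0
    while hi - lo > tol:                         # binary search phase
        mid = (lo + hi) / 2.0
        if mid / r_c >= cycle_time(m, mid):
            hi = mid
        else:
            lo = mid
    return hi                                    # hi always satisfies the condition
```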
The model described above applies to constant-bit-rate (CBR) compression, where the consumption rate
r_c is fixed. Variable-bit-rate (VBR) compression can be accommodated as well by making some adjustments.
We set the consumption rate used in our model to the maximum consumption rate, and compute the slice
size β as described above. Movies are stored on the disks exactly as in the CBR case (i.e., a round-robin
distribution of equal-size slices). This results in a variable display time for slices. To address this issue,
during each reading cycle at most one slice of size β is read for each stream. If the buffer for a stream
already contains more than β bytes, no slice is read; otherwise, a slice is read. Recall that the buffer size
per stream is 2β. Clearly, this policy does not suffer from buffer overflow. Buffer starvation does not occur
even if no slice is read, since this happens only if the buffer for the stream already contains β bytes, which is
enough data to display until the next replenishment.
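The per-cycle read decision for VBR streams can be summarized as follows (a trivial sketch of the policy just described):

```python
def vbr_read_decision(buffered_bytes, beta):
    """Read one slice of size beta this cycle only if the stream's buffer holds
    at most beta bytes; with a 2*beta buffer per stream this never overflows."""
    return buffered_bytes <= beta
```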
Our performance study determines the buffer requirements per video stream for SID as well as RAID 3
and 5. The following figures give the results of our calculations for some typical SID as well as RAID 3 and
5 configurations. The disk parameters were based on the performance characteristics of Quantum Atlas II,
Seagate Cheetah, and IBM UltraStar disks. Table 2 contains our parameter values.
  Symbol   Description                                                Value
  r_t      Minimum transfer rate per disk (KB/s)                      15728
  t_r      Worst-case rotational latency (ms)
  d_max    Maximum seek distance (cylinders)                          6926
  t_min    Track-to-track seek time (ms)
  t_ave    Average seek time (ms)                                     5.4
  d_c      Disk capacity (GB)                                         9
  b        Boundary between the square-root and linear portions of the seek time model

Table 2: Disk Model Parameters
  Symbol    Definition
  D         Number of disks in a parity group (for RAID 3 and RAID 5)
  r_c       Consumption rate per stream (KB/s)
  c_t       Length of a reading cycle (sec)
  S(m)      Maximum total seek latency when reading m slices (sec)
  T(m, β)   Time required to read m slices of size β (sec)
  q         SID dispersal factor

Table 3: Video Server Model Parameters
Figure 10 shows how the buffering requirement per stream varies with the total number of concurrent
streams for a disk array with 12 disks and a video consumption rate of 4 Mbits/sec. The figure presents
the performance of three data organizations: RAID 3, RAID 5, and SID. The redundancy ratio for all three
organizations is 1/4 (i.e., the RAID 3 and RAID 5 layouts consist of three parity groups of size four, and
the SID layout has a dispersal factor of 3). Striping of size β (slice-sized) is utilized in RAID 5 and SID;
striping within RAID 3 has size β/D. Accordingly, for SID and RAID 5, each point is for a multiple of
12 streams; for RAID 3, each point is for a multiple of 3 streams. The poor performance of the RAID 3
organization (both with and without failure) and of the RAID 5 organization under failure is evident; we
return to this comparison in the next section. In both Figures 10 and 11, the perceived discontinuities for
RAID 3 arise because the points are so close together; with the wider spacing (more streams per ensemble)
the "discontinuities" are less pronounced.
Figure 10: SID vs. RAID 5 and RAID 3 (buffer size per stream vs. total number of streams; curves: RAID 5/SID, RAID 5 with failure, SID with failure): three parity groups of size four; redundancy is 1/4; video consumption is 4 Mbits/sec.
We note that both RAID 3 and RAID 5 provide a higher degree of fault-tolerance in this configuration
since they protect against one disk failure per stripe while SID protects against one failure in the entire disk
array. For a fixed level of fault-tolerance, SID requires a higher redundancy rate than RAID 3 or RAID 5,
but it provides a significantly higher level of performance under failure. The same level of fault-tolerance as
that of RAID 3 or RAID 5 can be achieved by dividing the disk array into SID groups in the same manner
that RAID 3 and RAID 5 arrays are divided into parity groups. Since a large number of disks is desirable
for performance reasons regardless of the data organization used, and since current and future disks offer
a very high capacity, trading some amount of storage space for a significantly higher level of performance
under disk failure is very acceptable.
Figure 11 shows the performance of a disk array with 90 disks. The redundancy rate for these data
organizations is 1/9 (i.e., the RAID 3 and RAID 5 layouts consist of ten parity groups of size nine, and the
SID layout has a dispersal factor of 8). The impact of using SID in this case is considerably larger than in
the previous case because of the higher dispersal factor. For RAID 5 and SID, each point is a multiple of 90
streams, and for RAID 3, each point is a multiple of ten streams. In this situation, if we limit the buffer size
per stream to 10 MB, the SID data organization supports up to 1980 streams under failure, while RAID 5
only supports at most 1080 streams.
The impact of the dispersal factor is illustrated in Figure 12. We see, as expected, that performance
improves as the dispersal factor increases, but diminishing returns are obtained once the dispersal factor is
high enough to make latencies other than the transfer time dominate. In our disk model, the RAID 5 data
organization services at most 12 streams per disk under failure.
An important advantage of SID when compared to BIBD layouts is the guarantee of contiguous reconstruction
reads. Figure 13 shows the impact of discontiguity on the performance. The different curves
correspond to different numbers of accesses performed during the reconstruction of a video slice. With SID,
only one access is performed, but we studied the performance for larger numbers of accesses in order to
determine the impact of the discontiguity exhibited by BIBD layouts. We see that even a small amount of
discontiguity seriously affects the performance. Once the number of accesses per reconstruction data object
Figure 11: SID vs. RAID 5 and RAID 3 (buffer size per stream vs. total number of streams; curves: RAID 5/SID, RAID 5 with failure, SID with failure): ten parity groups of size nine; redundancy is 1/9; video consumption is 4 Mbits/sec.
increases beyond 4, the performance becomes even worse than that of RAID 5 (which requires reading larger,
but contiguous, pieces of data).
5 Video Server Design
A video server should support a pre-specified number of concurrent, independent streams at low cost. We
present cost-optimal SID-based designs for a 1000-stream video server for two generations of component
technology: one design is based on circa-1996 disk drive technology and the other on circa-1998 technology.
Thus, SID-based video server designs can successfully accommodate a range of disk technologies.
A similar RAID 5 based design was presented by Ozden et al. [19] using circa-1996 component technologies
and costs. We begin by reviewing their results, in which they obtain a cost-optimal RAID 5 video server for
1000 streams containing 44 disks, with each disk servicing up to 23 clients. We calculate, using a similar
analysis, that the reading cycle c_t is approximately 844 ms and the buffer space per stream per reading
cycle, β, is 159 KB. Our analysis uses a more realistic (and larger) assessment of the total seek time per
reading cycle; we determine that each of 46 disks will be able to service up to 22 clients per reading
cycle. Consequently, our values for c_t and β differ slightly from theirs. This fault-free configuration costs
$81553 with a component cost of $1500 per 2 GB disk and $40 per MB of RAM. This RAID 5 design cannot
accommodate the inevitable disk failures gracefully; with a single disk failure, this design will fail to service
the 1000 clients in a timely fashion. Our calculations show that, by increasing the buffer space per stream, the
cost-optimal failure-tolerant design contains 84 disks; this configuration costs $146415. The greater cost
provides additional reliability; our cost assessment is independent of the RAID stripe width. With smaller
stripe widths, we gain additional reliability due to the additional redundant data stored, but at the same
time lose client data storage capacity. Ozden et al. [20] have considered single-failure-tolerant architectures
for continuous media servers which employ BIBD declustered data layouts. They conclude their article with
results for a server in which the stripe width varies over 2, 4, 8, 16, and 32 and the buffer
space size is either 256 KB or 2 GB. They show that with 2 GB of buffer space, their server supports less
Figure 12: Dispersal factor impact (buffer size per stream vs. streams per disk; curves: RAID 5/SID, RAID 5 with failure, and SID with failure for q = 2, 4, 16, 64, 256); video consumption is 4 Mbits/sec.
than 700 streams in degraded mode.
Using circa-1996 technologies, we present cost-optimal SID-based data layouts for a 1000-client server.
For fault-free operation, our analysis for c_t and β is the same as given above, since SID and RAID 5
behave identically in this mode. The server configuration cost is ultimately determined by the dispersal
factor q, which we let range from 2 to 8. We determine β to meet the continuity condition and
select a suitable SID design from Table 1. These results are summarized in Table 4. The costs continue to be
  q   n    β (KB)   cost ($)
  6   53   280      101547

Table 4: SID cost-optimal designs
Table 5: SID design variations
reduced for larger q, but we cannot meet our necessary condition on n. The designs within Table 1
indicate that a (53,6)-SDS design is possible. Any of these hardware configurations is less expensive than
the RAID 5 configuration. Remaining issues include reliability and video storage capacity. Each of the
video server configurations presented above tolerates a single failed disk. We can improve the reliability by
using g stripes within the disk array, each of which is an SID-based design on n′ disks. We may stray slightly from
the optimal cost; however, the additional cost buys reliability. For each q, we select n′ to be the smallest value
that meets our necessary condition, as well as a multiple g·n′ that is the smallest value at least equal to the
number of disks in the q-dispersal cost-optimal layout. Table 5 contains some such configurations together
with their costs. The resulting designs have improved reliability since with g groups, the video server will
Figure 13: Discontiguity impact (buffer size per stream vs. streams per disk; curves: RAID 5 degraded and SID with varying numbers of accesses per rebuild).
tolerate some configurations of up to g concurrent disk failures. Finally, we note that the video storage capacity
is n·d_c·(q − 1)/q.
Now we consider a video server design for 1000 clients using the circa-1998 disk drives. First we present
the cost-optimal fault-free SID design, which contains 42 disks; β is 1097 KB and the reading cycle period
is 2142 ms; the cost is $33719, with component costs of $2 per MB of RAM and $700 per 9 GB disk. This
configuration suffices for either a RAID 5 or an SID data layout. Second, we present several cost-optimal
single-failure-tolerant SID designs for various dispersal factors. Table 6 summarizes these designs.
  q   n    β (KB)   cost ($)
  6   50   1357     40304

Table 6: SID cost-optimal designs
Table 7: SID design variations
One entry in Table 6 designates the RAID 5 configuration. Since n is small in these cost-optimal designs and we must meet
the continuity condition, only a few q values need be considered. As an aside, slightly larger q values would
give rise to only slightly less costly component configurations in Table 6. Table 7 contains several feasible
variation designs with g > 1 groups having costs close to optimal.
Finally, we demonstrate the reliability and storage capacity for a pair of these designs: design (A)
consists of nine (7,2)-SDS data layouts, and design (B) consists of seven stripe-width-twelve RAID 5 data layouts. The
redundancy ratio for (A) is 1/3 and for (B) is 1/12; the storage capacity for (A) is 63 × 2/3 × d_c = 42·d_c
and for (B) is 84 × 11/12 × d_c = 77·d_c. We note that the mean time to data loss (MTTDL) for the SDS-based
data layout is 0.076487/λ, which is greater than the MTTDL for the RAID-based design, which is 0.049974/λ;
here λ designates the failure rate of a single disk. The MTTDL values are determined assuming that no
repair is done; none of our design analysis allows for reconstruction activity.
We continue with two configurations that have identical storage capacities. Suppose we utilize K_A
instances of the (A) configuration and K_B instances of (B). Then we require that

  K_A × 42·d_c = K_B × 77·d_c.

In other words, we have K_A/K_B = 77/42 = 11/6; we consider an ensemble in which K_A is 11 and K_B is 6. The eleven-(A)
system configuration costs 11 times the cost of a single (A) configuration; each disk can service up to 16 streams, for a total of
11 × 63 × 16 = 11088 streams. The six-(B) system configuration costs 6 times the cost of a single (B) configuration; each disk can service
up to 12 streams, for a total of 6 × 84 × 12 = 6048 streams. The per-stream cost for the (A) system configuration is
$49.27 and for the (B) system configuration is $62.61. We calculate the MTTDL for these systems; the Markov
models utilized are given in Figure 14, with the left model representing the eleven-(A) system configuration
and the right model the six-(B) systems. The individual disk failure rate is λ. The horizontal rows of
Figure 14: Markov models for MTTDL calculations.
states labeled I designate I groups each containing one failed disk; state F designates that at least one group
contains at least two failed disks. MTTDL_A is 0.02049/λ and MTTDL_B is 0.01780/λ. The almost identical
MTTDLs for these configurations are roughly 2% of the mean time to failure (MTTF) of a single disk. In
this comparative example, the actual video capacity of the two configurations is identical, the MTTDL is
essentially the same, and the cost per stream is much lower within the SID-based data layout.
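The MTTDL figures quoted above can be reproduced with a short absorbing-Markov-chain calculation; the sketch below assumes the no-repair model described in the text (g groups of n disks, each group tolerating one failed disk), with illustrative function naming.

```python
def mttdl_no_repair(g, n, lam):
    """Mean time to data loss for g independent parity/SDS groups of n disks,
    each group tolerating one failed disk, per-disk failure rate lam, no repair.
    State i = number of groups that currently contain exactly one failed disk;
    data loss (state F) occurs when any group suffers a second failure."""
    expected = 0.0
    reach = 1.0                                   # probability of reaching state i
    for i in range(g + 1):
        out_rate = lam * (i * (n - 1) + (g - i) * n)   # total failure rate in state i
        expected += reach / out_rate                   # expected holding time in state i
        # probability that the next failure hits a still-healthy group
        reach *= (g - i) * n / (i * (n - 1) + (g - i) * n)
    return expected

# Reproduces the values quoted above (with lam = 1):
#   design (A), nine (7,2)-SDS groups:        mttdl_no_repair(9, 7, 1.0)   ~ 0.076487
#   design (B), seven 12-disk RAID 5 groups:  mttdl_no_repair(7, 12, 1.0)  ~ 0.049974
#   eleven (A) systems (99 groups of 7):      mttdl_no_repair(99, 7, 1.0)  ~ 0.02049
#   six (B) systems (42 groups of 12):        mttdl_no_repair(42, 12, 1.0) ~ 0.01780
```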
6 Conclusions
The SID data layout provides excellent performance for video servers as well as any other task characterized
by large, fixed-size and constant rate sequential accesses. A slice is large provided the sum of the rotational
and seek latencies is less than the transfer time for the slice data object. The SID fault-free run-time
performance is identical to that of RAID 5; since the access rate is constant, our performance measure is the
minimal buffer size per stream required to maintain a given number of streams. The SID degraded mode
run-time performance is much better than either that of RAID 3 or 5. One limiting aspect of SID run-time
performance is the sum of the seek and rotational latencies. By raising the dispersal factor, we can diminish
the data transfer times during degraded mode operation but eventually the seek and rotational latencies
dominate.
The benefits of contiguous data layout have been noted as well. We observe a severe penalty per stream
as the number of accesses per parity fragment increases. Other data layout models, such as BIBD, often
necessitate a non-contiguous data layout with resulting poor performance.
We have considered RAID 5 and SID data layouts with (as much as possible) identical redundancy
ratios for our "performance results." SID extends the domain of possible fault tolerant video server designs
considerably. SID obtains contiguous data layout and has excellent fault-free and degraded mode run-time
performance.
Even with disk drives increasing their storage capacity, it will probably not be possible to store all desired
videos within the video servers. But since our video service is a read-only operation, we could include a
tertiary near-line archive that would transfer videos to the server. We could permanently allocate some
space to each cohort or have two modes of operation: "normal" as before and "almost normal" in which the
number of clients serviced would be slightly diminished to allow for migration of video material from the
archive. This variety of migration would be very straightforward without dynamic contingencies. Another
approach would be "just in time" delivery of the video material which would require the tertiary archive to
be capable of operating at a sufficiently rapid rate. In both situations, a key decision is what to overwrite.
This topic is the subject for an expanded study.
Acknowledgements
We wish to thank the anonymous referees for their detailed suggestions regarding our presentation.
References
Tolerating Multiple Failures in RAID Architectures with Optimal Storage and Uniform Declustering.
Declustered Disk Array Architectures with Optimal and Near-optimal Parallelism
Fault Tolerant Design of Multimedia Servers.
Staggered Striping in Multimedia Information Systems.
Optimal and Near-Optimal Scheduling Algorithms for Batched Processing in Linear Storage
Maximizing performance in a striped disk array.
Segmented Information Dispersal (SID) for Efficient Reconstruction in Fault-Tolerant Video Servers
Segmented Information Dispersal-A New Design with Application to Erasure Correction
Segmented Information Dispersal.
Pipelined Disk Arrays for Digital Movie Retrieval.
Frames and Resolvable Designs: Uses
Parity Striping of Disc Arrays: Low Cost Reliable Storage with Acceptable Throughput.
Digital Video: An Introduction to MPEG-2
Parity Declustering for Continuous Operation in Redundant Disk Arrays.
A Traffic Model for MPEG-Coded VBR Streams
MPEG: A Video Compression Standard for Multimedia Applications.
Performance Analysis of Disk Arrays Under Failure.
Maintaining Good Performance in Disk Arrays During Failure via Uniform Parity Group Distribution.
Disk Striping in Video Server Environments.
A Case for Redundant Arrays of Inexpensive Disks (RAID).
Design and Evaluation of Gracefully Degradable Disk Arrays.
An Introduction to Disk Drive Modeling.
Improved Parity-Declustered Layouts for Disk Arrays
Permutation Development Data Layout (PDDL) Disk Array Declustering.
Streaming RAID: A disk storage system for video and audio files.
Overlay Striping and Optimal Parallel I/O in Modern Applications.
Efficient Failure Recovery in Multi-disk Multimedia Servers
Keywords: disk arrays; algorithms; video servers; data structures; declustering; separated difference sets
628163 | Minimizing Bandwidth Requirements for On-Demand Data Delivery. | AbstractTwo recent techniques for multicast or broadcast delivery of streaming media can provide immediate service to each client request, yet achieve considerable client stream sharing which leads to significant server and network bandwidth savings. This paper considers 1) how well these recently proposed techniques perform relative to each other and 2) whether there are new practical delivery techniques that can achieve better bandwidth savings than the previous techniques over a wide range of client request rates. The principal results are as follows: First, the recent partitioned dynamic skyscraper technique is adapted to provide immediate service to each client request more simply and directly than the original dynamic skyscraper method. Second, at moderate to high client request rates, the dynamic skyscraper method has required server bandwidth that is significantly lower than the recent optimized stream tapping/patching/controlled multicast technique. Third, the minimum required server bandwidth for any delivery technique that provides immediate real-time delivery to clients increases logarithmically (with constant factor equal to one) as a function of the client request arrival rate. Furthermore, it is (theoretically) possible to achieve very close to the minimum required server bandwidth if client receive bandwidth is equal to two times the data streaming rate and client storage capacity is sufficient for buffering data from shared streams. Finally, we propose a new practical delivery technique, called hierarchical multicast stream merging (HMSM), which has a required server bandwidth that is lower than the partitioned dynamic skyscraper and is reasonably close to the minimum achievable required server bandwidth over a wide range of client request rates. | Introduction
This paper considers the server (disk I/O and network I/O) bandwidth required for on-demand real-time
delivery of large data files, such as audio and video files 1 . Delivery of the data might be done via the Internet or
via a broadband (e.g., satellite or cable) network, or some combination of these networks.
We focus on popular, widely shared files, such as popular news clips, product advertisements, medical or
recreational information, television shows, or successful distance education content, to name a few examples.
Due to the large size and the typical skews in file popularity, for the most popular files, one can expect many
new requests for the file to arrive during the time it takes to stream the data to a given client.
Prior research has shown that the server and network bandwidth required for on-demand delivery of such
files can be greatly reduced through the use of multicast delivery techniques 2 . A simple approach is to make
requests wait for service, hoping to accumulate multiple requests in a short time that can then all be served by a
single multicast stream [DaSS94]. A second approach (called piggybacking) is to dynamically speed up and
slow down client processing rates (e.g., display rates for video files) so as to bring different streams to the same
file position, at which time the streams can be merged [GoLM95, AgWY96a, LaLG98]. An appealing aspect of
these approaches is that they require the minimum possible client receive bandwidth (i.e., equal to the file play
rate 3 ) and minimal client buffer space. On the other hand, if clients have receive bandwidth greater than the file
* This work was partially supported by the NSF (Grants CCR-9704503 and CCR-9975044) and NSERC (Grant OGP-0000264). A shorter
version of this paper appears in Proc. 5th Int'l. Workshop on Multimedia Information Systems (MIS '99), Indian Wells, CA, Oct. 1999.
1 More generally, the delivery techniques we consider may be fruitful for any data stream that clients process sequentially.
2 We use the term "multicast" to denote both multicast and true broadcast throughout this paper.
3 Throughout the paper we use the term "play rate" to denote the fixed rate at which a file must be transmitted in order for
the client to process or play the stream as it arrives. Data is assumed to be transmitted at this rate unless otherwise stated.
play rate, and some spare buffer space, significantly greater server bandwidth savings can be achieved
[AgWY96b, ViIm96, CaLo97, HuSh97, JuTs98, HuCS98, EaVe98, CaHV99, PaCL99, GaTo99, SGRT99,
EaFV99]. In these stream merging methods, a client receiving a particular stream simultaneously receives and
buffers another portion of the data from a different (multicast) stream, thus enabling greater opportunities for
one client to catch up with and share future streams with another client.
Two of these recent techniques, namely dynamic skyscraper (with channel stealing) [EaVe98] and stream
tapping/patching/controlled multicast [CaLo97, HuCS98, CaHV99, GaTo99, SGRT99], have the key property
that they can provide immediate real-time streaming to each client without requiring initial portions of the file to
be pre-loaded at the client. These two techniques also require client receive bandwidth at most two times the
file play rate. To our knowledge, how these two techniques compare with respect to required server bandwidth
has not previously been studied. This paper addresses this issue as well as the following open questions:
(1) What is the minimum required server (disk and network I/O) bandwidth for delivery techniques that provide
immediate service to clients?
(2) What is the interplay between achievable server bandwidth reduction and client receive bandwidth?
(3) Are there new (practical) delivery techniques that achieve better bandwidth savings than the previous
techniques, yet still provide immediate service to each client request?
(4) How does the best of the techniques that provide immediate service to client requests compare to the static
periodic broadcast techniques (e.g., [AgWY96b, ViIm96, HuSh97, JuTs98]) that have fixed server
bandwidth independent of the client request rate?
The principal system design results, in order of their appearance in the remainder of this paper, are as follows:
. We review the optimized stream tapping/grace patching/controlled multicast method, for which required
server bandwidth increases with the square root of the request arrival rate. This is significantly better than
when immediate service is provided and multicast delivery is not employed, in which case required server
bandwidth increases linearly with the request arrival rate.
. We develop a new implementation of the partitioned dynamic skyscraper technique [EaFV99] that provides
immediate service to client requests more simply and directly than the original dynamic skyscraper method,
and we show how to optimize this partitioned dynamic skyscraper architecture.
. The optimized dynamic skyscraper technique has required server bandwidth that increases logarithmically
(with constant factor between two and three) as a function of the client request rate. Thus, at moderate to
high client request rate, the dynamic skyscraper technique significantly outperforms optimized patching/
stream tapping.
. We derive a tight lower bound on the required server bandwidth for any technique that provides immediate
service to client requests. This lower bound increases logarithmically with a constant factor of one as a
function of the client request arrival rate. Thus, techniques that provide immediate service to each client
request have the potential to be quite competitive with the static broadcast techniques that have fixed server
bandwidth independent of client request rate.
. We define a new family of segmented delivery techniques, called segmented send-latest receive-earliest
(SSLRE). Although not necessarily practical to implement, the SSLRE techniques demonstrate that it is at
least theoretically possible to achieve nearly the lower bound on required server bandwidth if client receive
bandwidth equals twice the file play rate, assuming clients can buffer the required data from shared streams.
. We propose a new practical delivery technique, hierarchical multicast stream merging (HMSM), that is
simple to implement and provides immediate real-time service to clients. Simulation results show that if
client receive bandwidth equals two times the file play rate, the required server bandwidth for the HMSM
technique is reasonably close to the minimum achievable required server bandwidth over a wide range of
client request rates.
For the purposes of obtaining the lower bound and examining the fundamental capabilities of the various
delivery techniques, the above results are obtained assuming that (1) clients have sufficient space for buffering
the data streams, and (2) the entire file is consumed sequentially by the client without use of interactive
functions such as pause, rewind or fast forward. However, each of the techniques that we consider can be
adapted for limited client buffer space, and for interactive functions, with a concomitant increase in required
server bandwidth, as discussed in Section 5.
In this paper, required server bandwidth is defined as the average server bandwidth used to satisfy client
requests for a particular file with a given client request rate, when server bandwidth is unlimited. There are at
least two reasons for believing that this single-file metric, which is relatively easy to compute, is a good metric
of the server bandwidth needed for a given client load. First, although the server bandwidth consumed for
delivery of a given file will vary over time, the total bandwidth used to deliver a reasonably large number of
files will have lower coefficient of variation over time, for independently requested files and fixed client request
rates. Thus, the sum over all files of the average server bandwidth used to deliver each file should be a good
estimate of the total server bandwidth needed to achieve very low client waiting time. Second, simulations of
various delivery techniques have shown that with fixed client request rates and finite server bandwidth equal to
the sum of the average server bandwidth usage for each file, average client waiting time (due to temporary
server overload) is close to zero (e.g., [EaFV99, EaVZ99]). Furthermore, the results have also shown that if
total server bandwidth is reduced below this value, the probability that a client cannot be served immediately
and the average client wait rapidly increase.
The rest of this paper is organized as follows. Section 2 reviews and derives the required server bandwidth
for the optimized stream tapping/grace patching/controlled multicast technique. Section 3 reviews the dynamic
skyscraper technique, develops the simpler method for providing immediate service to client requests, defines
how to optimize the new dynamic skyscraper method, and derives the required server bandwidth. Section 4
derives the lower bound on required server bandwidth for any delivery technique that provides immediate
service to clients and shows the impact of client receive bandwidth on this lower bound. Section 5 defines the
new hierarchical multicast stream merging technique, and Section 6 concludes the paper.
Table 1 defines notation used throughout the rest of the paper.
2 Required Server Bandwidth for Optimized Stream Tapping/Grace Patching
Two recent papers propose very similar data delivery techniques, called stream tapping [CaLo97] and
patching [HuCS98], which are simple to implement. The best of the proposed patching policies, called grace
patching, is identical to the stream tapping policy if client buffer space is sufficiently large, as is assumed for
comparing delivery techniques in this paper. The optimized version of this delivery technique [CaHV99,
GaTo99], which has also been called controlled multicast [GaTo99], is considered here.
The stream tapping/grace patching policy operates as follows. In response to a given client request, the
server delivers the requested file in a single multicast stream. A client that submits a new request for the same
file sufficiently soon after this stream has started begins listening to the multicast, buffering the data received.
Each such client is also provided a new unicast stream (i.e., a "patch" stream) that delivers the data that was
delivered in the multicast stream prior to the new client's request. Both the multicast stream and the patch
stream deliver data at the file play rate so that each client can play the file in real time. Thus, the required client
receive bandwidth is twice the file play rate. The patch stream terminates when it reaches the point that the
client joined the full-file multicast.
Table 1: Notation

  Symbol    Definition
  λ_i       request rate for file i
  T_i       total time to play file i (equals total time to transmit file i if r = 1)
  N_i       average number of requests for file i that arrive during a period of length T_i (N_i = λ_i T_i)
  B_i^z     required server bandwidth to deliver a particular file i using delivery technique z, in units of the file play rate
  y_i       threshold for file i in optimized stream tapping/grace patching, expressed as a fraction of T_i
  K         number of file segments in dynamic skyscraper
  W         largest segment size in dynamic skyscraper
  r         stream transmission rate, measured in units of the play rate; default value is 1
  n         client receive bandwidth, measured in units of the play rate
To keep the unicast patch streams short, when the fraction of the file that has been delivered by the most
recent multicast exceeds a given threshold, the next client request triggers a new full-file multicast. Let
y_i denote the threshold for file i, and T_i denote the duration of the full-file multicast. Assuming Poisson client
request arrivals at rate λ_i, the required server bandwidth for delivery of file i using stream tapping/grace
patching, measured in units of the play rate, is given by

  B_i^patching = [ T_i + (N_i y_i)·(y_i T_i / 2) ] / [ y_i T_i + 1/λ_i ] = [ 1 + N_i y_i^2 / 2 ] / [ y_i + 1/N_i ],

where N_i = λ_i T_i is the average number of client requests that arrive during time T_i. The denominator in the
middle of the above equation is the average time that elapses between successive full-file multicasts, i.e., the
duration of the threshold period plus the average time until the next client request arrives. The numerator is the
expected value of the sum of the transmission times of the full-file and patch streams that are initiated during
that interval. Note that the average number of patch streams that are started before the threshold expires is
N_i y_i, and the average duration of the patch streams is y_i T_i / 2.
Differentiating the above expression with respect to y_i and setting the result to zero, we obtain

  y_i = ( sqrt(2 N_i + 1) − 1 ) / N_i

as the optimal threshold value. Substituting this value of y_i into the above expression for
required server bandwidth yields the following result for the required server bandwidth for optimized stream
tapping/grace patching:

  B_i^optimized patching = sqrt(2 N_i + 1) − 1.   (1)
Note that the required server bandwidth grows with the square root of the client request rate for the file,
and that the optimal threshold decreases as the client request rate increases. (The reader is referred to [GaTo99]
for an alternate derivation of these results.)
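A small sketch of these results, based on the expressions as reconstructed above:

```python
import math

def patching_bandwidth(N, y):
    """Required server bandwidth (units of the file play rate) for stream
    tapping/grace patching with threshold y (fraction of T_i) and N = lambda_i * T_i."""
    return (1.0 + N * y * y / 2.0) / (y + 1.0 / N)

def optimized_patching(N):
    """Optimal threshold and the resulting bandwidth sqrt(2N + 1) - 1 (equation (1))."""
    y_opt = (math.sqrt(2.0 * N + 1.0) - 1.0) / N
    return y_opt, patching_bandwidth(N, y_opt)

# e.g. optimized_patching(64.0) -> threshold ~ 0.162, bandwidth ~ 10.36
```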
3 Dynamic Skyscraper Delivery
Section 3.1 reviews the recent partitioned dynamic skyscraper delivery technique [EaFV99] and defines a
particular implementation that provides immediate service to client requests in a simpler and more direct way
than the original dynamic skyscraper method. The required server bandwidth for this version of partitioned
dynamic skyscraper delivery is derived in Section 3.2.
3.1 Providing Immediate Service Using Partitioned Dynamic Skyscraper
The static skyscraper broadcast scheme defined in [HuSh97] divides a file into K increasing-sized
segments, with the largest segment size denoted by W. 4 In a broadband satellite or cable network,
each segment is continuously broadcast at the file play rate on its own channel, as illustrated in Figure 1. Each
client is given a schedule for tuning into each of the K channels to receive each of the file segments. For
example, a client who requests the file just before the broadcast period labelled 3 on the first channel, would be
scheduled to receive segments 1-3 sequentially during the periods that are labelled 3, and segments 4-6 during
the periods labelled 1 on channels 4-6. The structure of the server transmission schedule ensures that, for any
given segment 1 broadcast that a client might receive, the client can receive each other file segment at or before
the time it needs to be played, by listening to at most two channels simultaneously. (The reader can verify this
4 The segment sizes are not strictly increasing, but have the pattern 1,2,2,j,j,k,k,…, with each size change being an increase
in size. In the original skyscraper scheme, the progression was specified as 1,2,2,5,5,12,12,25,25,…, upper bounded by the
parameter W. This progression appears to have the maximum possible size increases (among progressions with the above
pattern) such that clients never need to listen to more than two streams simultaneously. The progression 1,2,2,4,4,8,8,…
provides similar performance for skyscraper broadcasts (i.e., same client receive bandwidth, similar segment 1 delivery time,
and similar client buffer space requirement), and increases the efficiency of dynamic skyscraper broadcasts.
in Figure 1.) Since larger segments are multicast less frequently, clients must be able to receive and buffer
segments ahead of when they need to be played, thus merging with other clients that may be at different play
points. The maximum client buffer space needed by any client transmission schedule is equal to the largest
segment size (W) [HuSh97].
The required server bandwidth for this static skyscraper method is equal to K, independent of the client
request rate for the file. The duration of each segment 1 broadcast is determined by the total file delivery time
divided by the sum of the segment sizes. Thus, larger values of K and W result in lower average and
maximum client wait time for receiving the first segment. A desirable configuration might have K=10 and the
segment size progression equal to 1,2,2,4,4,8,8,16,16,32, in which case segment 1 broadcasts begin every
T_i/78, and required server bandwidth for skyscraper is lower than for optimized patching if N_i > 60.
The dynamic skyscraper delivery technique was proposed in [EaVe98] to improve the performance of the
skyscraper technique for lower client request rates and for time-varying file popularities. In this technique, if a
client request arrives prior to the broadcast period labeled 1 on the first channel in Figure 1, the set of segment
broadcasts that are shaded in the figure (called a transmission cluster) might be scheduled to deliver the
segments of the file. The arriving client only needs the segment transmissions labeled 1 on each channel.
However, any client who requests the same file prior to the transmission period labeled 8 on channel 1 will
receive broadcasts from the cluster that was scheduled when the first request arrived. When the broadcast
labeled 8 is complete, the six channels (e.g., satellite or cable channels) can be scheduled to deliver an
identically structured transmission cluster for a different file. Thus, if there is a queue of pending client requests,
the six channels will deliver a cluster of segment broadcasts for the file requested by the client at the front of the
queue. If no client requests are waiting, the six channels remain idle until a new client request arrives, which
then initiates a new transmission cluster. Note that, as with optimized patching, any queueing discipline (for
example, one tailored to the needs of the service provider and clients) can be used to determine the order in
which waiting clients are served during periods of temporary server overload.
To employ the dynamic skyscraper technique over the Internet, the transmission cluster can be implemented
with W multicast streams of varying duration, as shown by the numbering of the cluster transmission periods in
Figure
1. That is, the first stream delivers all K segments, the second stream starts one unit segment later and
delivers only the first segment, the third stream starts one unit segment later than the second stream and delivers
the first three segments, and so on. Total server bandwidth is allocated in units of these clusters of W variable-length
streams. Each cluster of streams uses K units of server bandwidth, with each unit of bandwidth used for
duration W. Note that this implementation requires clients to join fewer multicast groups and provides more
time for joining the successive multicast groups needed to receive the entire file than if there are K multicast
groups each transmitting a different segment of the file.
In the original dynamic skyscraper technique [EaVe98], immediate service is provided for clients that are
waiting for the start of a segment 1 multicast in a transmission cluster using a technique termed channel stealing.
That is, (portions of) transmission cluster streams that have no receiving clients may be reallocated to provide
quick service to newly arriving requests.
Figure 1: Skyscraper and Dynamic Skyscraper Delivery (K=6, W=8, segment size progression = 1,2,2,4,4,8).
We propose a more direct way to provide immediate service to client requests that is also simpler to
implement. This more direct approach involves a small modification to the segment size progression, and a
particular implementation of partitioned skyscraper delivery [EaFV99]. The new segment size progression has
the form 1,1,2,2,j,j,k,k,. As in [EaVe98, EaFV99], each size increase is either two-fold or three-fold, and no
two consecutive size increases is three-fold, to limit the number of streams a client must listen to concurrently.
Also as before, the progression is upper-bounded by the parameter W, the purpose of which is to limit the
required client buffer space.
We partition the segments, as illustrated in Figure 2, so that streams that deliver the first two segments can
be scheduled independently and immediately in response to each client request. Thus, when client A requests
the file, the server schedules one stream to deliver the first two segments, and schedules W/2 multicast streams
to deliver a transmission cluster of segments 3 through K. These streams (one that delivers the first two
segments and four that deliver the transmission cluster) are shaded in Figure 2. When client B requests the file,
the server only allocates a new stream (from unscheduled bandwidth not shown in the figure) to deliver the first
two segments of the file to client B. Client B receives segments 3 through K by listening to the streams in the
transmission cluster that was scheduled when client A arrived. The next section computes the average server
bandwidth used when the initial two-segment stream and possible new transmission cluster are scheduled
immediately for each client request.
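A sketch of the per-request scheduling decision; the state handling, and the assumption that a newly arriving client can join an existing transmission cluster only within W·U time units of its start, are illustrative and not taken verbatim from the paper.

```python
def on_request(now, state, U, W):
    """Scheduling decision for the partitioned dynamic skyscraper scheme sketched
    above.  Every request immediately gets a stream for the first two unit
    segments; a new transmission cluster for segments 3..K is scheduled only if
    there is no cluster in progress that the client can still join."""
    actions = ["start stream for segments 1-2 (duration 2*U)"]
    cluster_start = state.get("cluster_start")
    if cluster_start is None or now >= cluster_start + W * U:
        state["cluster_start"] = now
        actions.append("start transmission cluster of W/2 streams for segments 3..K")
    return actions
```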
A key observation is that this partitioned dynamic skyscraper system with each segment size increase being
a two-fold increase (i.e., progression 1,1,2,2,4,4,8,8,.) requires client receive bandwidth equal to only twice
the file play rate. The partitioned system with at least one three-fold segment size increase (such as the
progression 1,1,2,2,6,6,12,12,36,36,.) requires client receive bandwidth equal to three times the file play rate,
as in the corresponding dynamic skyscraper system without partitioning.
3.2 Required Server Bandwidth for Dynamic Skyscraper
Let U denote the duration of a unit-segment multicast, which is determined by the time duration of the
file (T_i) divided by the sum of the segment sizes. For the partitioned dynamic skyscraper system defined above,
the required server bandwidth for delivery of a file i (measured in units of the streaming rate), given a Poisson
request arrival stream, is given by

  B_i^dynamic skyscraper = 2U λ_i + (K − 2)·W·U / ( W·U + 1/λ_i ).   (2)
Figure 2: Partitioned Dynamic Skyscraper with Immediate Service (K=9, W=8, segment size progression = 1,1,2,2,4,4,8,8,8).
The first term is the required bandwidth for delivering the first two unit segments of the file, which involves
sending a stream of duration 2U at frequency λ_i. The second term is the required bandwidth for delivering the
transmission clusters for the rest of the file, which use K−2 units of server bandwidth, where each unit of
bandwidth is used for time equal to WU; the denominator of this term is the average time between the starts
of successive transmission clusters. The values of K and W that minimize the required server bandwidth given
in equation (2) may be found numerically for any particular segment size progression of interest.
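A sketch of such a numerical search, assuming the form of equation (2) as reconstructed above and restricting W to powers of two for the 1,1,2,2,4,4,… progression (both assumptions of this illustration):

```python
def segment_sizes(K, W):
    """Segment size progression 1,1,2,2,4,4,... capped at W, of length K."""
    sizes, size = [], 1
    for k in range(K):
        sizes.append(size)
        if k % 2 == 1 and size < W:      # sizes double after every second channel
            size *= 2
    return sizes                          # e.g. segment_sizes(9, 8) -> [1,1,2,2,4,4,8,8,8]

def dyn_sky_bandwidth(N, K, W):
    """Required server bandwidth (units of the play rate) from equation (2),
    rewritten in terms of N = lambda_i * T_i (so U = T_i / S)."""
    S = sum(segment_sizes(K, W))          # file length measured in unit segments
    return 2.0 * N / S + (K - 2) * W * N / (W * N + S)

def optimize_dyn_sky(N, K_max=24):
    """Search K and W (powers of two reachable with K channels) minimizing bandwidth."""
    best = None
    for K in range(3, K_max + 1):
        for exp in range((K - 1) // 2 + 1):
            W = 2 ** exp
            cand = (dyn_sky_bandwidth(N, K, W), K, W)
            if best is None or cand < best:
                best = cand
    return best                           # (bandwidth, K, W)
```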
It is also possible to determine the asymptotic behavior for high client request rate. For the progression
1,1,2,2,4,4,8,8,…, Appendix A shows that for N_i > 128 the required server bandwidth is approximately

  (2 / ln 2) · ln N_i ≈ 2.89 ln N_i,   (3)

to within an additive constant. Note that the required server bandwidth grows only logarithmically with client request rate.
Other skyscraper systems may be similarly analyzed. For the progression 1,1,2,2,6,6,12,12,36,36,…, the
required server bandwidth for large client request rate can be shown to be approximately

  (4 / ln 6) · ln N_i ≈ 2.23 ln N_i,   (4)

again to within an additive constant.
Figure 3 shows the required server bandwidth as a function of client request rate (N_i) for optimized
dynamic skyscraper systems with two different segment size progressions, and for optimized stream
tapping/grace patching. The required server bandwidth for the skyscraper systems is computed using equation
(2) with optimal choices of K and W determined numerically for each N_i. (Recall that the segment size
progression 1,1,2,2,4,4,8,8,… has the same client receive bandwidth requirement as optimized stream
tapping/patching, namely, two times the file play rate.)
The results show that for N_i greater than 64 requests on average per time T_i, the optimized dynamic
skyscraper systems have significantly lower bandwidth requirements than optimized stream tapping/patching.
More specifically, the optimized dynamic skyscraper delivery system has required server bandwidth that is
reasonably competitive with optimized stream tapping/patching at low client request rate, and is better than or
competitive with static broadcast techniques over the range of client request rates shown in the figure.
Skyscraper systems in which a second partition is added between channels k and k+1, 2<k<K-2, are also
of interest [EaFV99]. Analysis shows that adding the second partition does not alter the asymptotic behavior
under high client request arrival rates, but does improve performance somewhat for more moderate arrival rates,
at the cost of an increase in the required client receive bandwidth. (For k odd, client receive bandwidth must be
three times the file play rate if the progression is 1,1,2,2,4,4,8,8,., or four times the file play rate if the
progression is
Figure 3: Required Server Bandwidth for Dynamic Skyscraper & Stream Tapping/Patching (required server bandwidth vs. client request rate N_i; curves: Optimized Patching, Dyn Sky(1,1,2,2,4,4,…), Dyn Sky(1,1,2,2,6,6,…)).
4 Minimum Required Server Bandwidth for Immediate Service
Given the results in Figure 3, a key question is whether there exist multicast delivery methods that can
significantly outperform optimized stream tapping/patching and dynamic skyscraper, over the wide range of
client request arrival rates. In other words, how much further improvement in required server bandwidth is
possible? Section 4.1 addresses this question by deriving a very simple, yet tight, lower bound on the required
server bandwidth as a function of client request rate, for any delivery technique that provides immediate real-time
streaming to clients. The lower bound assumes clients have unlimited receive bandwidth. Section 4.2
considers how much the lower bound increases if client receive bandwidth is only equal to n times the file play
rate, for arbitrary n >1.
4.1 Lower Bound on Required Server Bandwidth
The lower bound on required server bandwidth for delivery techniques that provide immediate real-time
service to clients is derived below initially for Poisson client request arrivals, and then is extended to a much
broader class of client request arrival processes. 5
As before, let T_i be the duration of file i and λ_i be its average request rate. Consider an infinitesimally
small portion of the file that plays at arbitrary time x relative to the beginning of the file. For an arbitrary client
request that arrives at time t, this portion of the file can be delivered as late as, but no later than, time t+x if
the system provides immediate real-time file delivery to each client. If the portion is multicast at time t+x, all
other clients who request file i between time t and t+x can receive this multicast of the portion at position x.
For Poisson arrivals, the average time from t+x until the next request arrives for file i is 1/λ_i. Thus, the
minimum frequency of multicasts of the portion beginning at position x, under the constraint of immediate real-time
service to each client, is 1/(x + 1/λ_i). This yields a lower bound on the required server bandwidth, in units
of the file play rate, for any technique that provides immediate service to client requests, of

  B_i^minimum ≥ ∫_0^{T_i} dx / (x + 1/λ_i) = ln(N_i + 1),   (5)

where N_i = λ_i T_i is the average number of requests for the file that arrive during a period of length T_i.
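A short numerical check of equation (5), with time measured in units of T_i so that the arrival rate equals N_i (an illustrative sketch):

```python
import math

def min_bandwidth_immediate(N, pieces=100000):
    """Lower bound ln(N + 1) of equation (5), together with a midpoint-rule
    evaluation of the integral of 1/(x + 1/N) over the file (T_i = 1)."""
    dx = 1.0 / pieces
    integral = sum(dx / (k * dx + dx / 2 + 1.0 / N) for k in range(pieces))
    return math.log(N + 1.0), integral

# e.g. min_bandwidth_immediate(1000.0) -> (6.9088, ~6.9088)
```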
The above lower bound implies that, for Poisson arrivals and immediate real-time service to each client,
the required server bandwidth must grow at least logarithmically with the client request rate. In fact, this is true
for any client arrival process such that the expected time until the next arrival, conditioned on the fact that some
previous request arrived at the current time minus x for any 0 < x < T_i, is bounded from above by c/λ_i for some
constant c. In this case, we replace 1/λ_i in the denominator of the integral in equation (5) with c/λ_i, which
yields a lower bound on required server bandwidth of ln(N_i/c + 1). This result is very similar to that in
equation (5); in fact, ln(N_i + 1) − ln(N_i/c + 1) ≤ ln(c), a constant independent of N_i. Furthermore, this more
general lower bound on required server bandwidth is tight if arrivals occur in "batches", with c requests per
batch, and the batch arrival times are Poisson with rate λ_i/c. This illustrates the key point that there are
greater opportunities for stream sharing, and therefore the server bandwidth requirement is lower, if arrivals are
more bursty (i.e., for larger values of c). By considering Poisson arrivals, we can expect conservative
performance estimates if the actual client request arrival process is more bursty than the Poisson.
5 The Poisson assumption is likely to be reasonably accurate for full streaming media file requests [AKEV01], although one
would expect the (approximately Poisson) request rate to be time-varying, and in particular, time-of-day dependant. For
relatively short files, the Poisson analysis is directly applicable. For the case that substantive change in request rate occurs
on a time scale similar to the file play duration (e.g., perhaps for a two hour movie), the analysis for the broader class of
arrival processes yields applicable intuition and a similar result.
Further, it is easy to see that the bound in equation (5) can be reformulated as a function of the start-up
delay d in a static broadcast scheme, instead of as a function of the client request rate, simply by replacing
1/λ_i by d (yielding a lower bound of ln(T_i/d + 1)).
This result is shown formally in parallel work by Birk and Mondri [BiMo99].
Comparing equation (5) with equations (1) through (4) shows that there is considerable room for
improvement over the optimized stream tapping/patching and dynamic skyscraper delivery methods. However,
the lower bound in equation (5) assumes clients can receive arbitrarily many multicasts simultaneously. The
next section considers the likely lower bound on required server bandwidth if client receive bandwidth is a
(small) multiple of the file play rate.
4.2 Impact of Limited Client Receive Bandwidth
For clients that have receive bandwidth equal to n times the file play rate, for any n >1, we define a new
family of delivery techniques. These techniques may not be practical to implement, but intuitively they are
likely to require close to the minimum possible server bandwidth for immediate real-time service to clients who
have the specified receive bandwidth. Each technique in the family is distinguished by parameters n and r,
where r is the stream transmission rate, in units of the file play rate. (In each of the techniques discussed in
previous sections of this paper, r is equal to one.) We derive a close upper bound on the required server
bandwidth, for any n and r. The results show that for the required
server bandwidth for the new technique is nearly equal to the lower bound on required server bandwidth that
was derived in Section 4.1 for any technique that provides immediate real-time service to clients. This suggests
that it may be possible to develop new practical techniques that nearly achieve the lower bound derived in
Section 4.1, and that require client bandwidth of at most two times the play rate.
The new family of delivery techniques operates as follows when the segment transmission rate r = 1 (in
which case n is an integer). A file is divided into arbitrarily small segments and the following two rules are used
to deliver the segments to a client who requests the given file at time t:
1. The client receives any multicast of a segment that begins at a position x in the file, as long as that multicast
commences between times t and t+x, and as long as receiving that multicast would not violate the limit n on
the client receive bandwidth. If at any point in time there are more than n concurrent multicasts that the
client could fruitfully receive, the client receives those n segments that occur earliest in the file.
2. Any segment of the file that cannot be received from an existing scheduled multicast is scheduled for
multicast by the server at the latest possible time. That is, if the segment begins at position x and is
transmitted at the file play rate (i.e., r = 1), the segment is scheduled to begin transmission at time t+x.
For lack of a better name, we call this family of techniques, and its generalization for r ≤ 1 given in Appendix C,
segmented send-latest receive-earliest (SSLRE(n,r)). Note that the SSLRE(∞,1) technique achieves the lower
bound on required server bandwidth derived in Section 4.1 to any desired precision by dividing the file into
sufficiently small segments. This demonstrates that the lower bound derived in the previous section is tight. A
similar delivery technique defined only for unlimited client receive bandwidth, but augmented for limited client
buffer space, has been shown in parallel work [SGRT99] to be optimal for the case in which available client
buffer space may limit which transmissions a client can receive.
The SSLRE(n,r) technique may be impractical to implement, as it results in very fragmented and complex
delivery schedules. Furthermore, the SSLRE(n,r) technique does not have minimum required server bandwidth
for finite n because there are optimal rearrangements of scheduled multicast transmissions when a new client
request arrives that are not performed by SSLRE. However, the required server bandwidth for the SSLRE(n,r)
technique, derived below for arbitrary n, provides an upper bound on the minimum required server bandwidth
for each possible client receive bandwidth. We speculate that this bound provides accurate insight into the
lower bound on required server bandwidth for each client receive bandwidth. In particular, the optimal
rearrangements of scheduled multicast transmissions that SSLRE does not perform are, intuitively, likely to have
only a secondary effect on required server bandwidth, as compared with the heuristics given in rules 1 and 2
above that are implemented in the SSLRE technique. This intuition is reinforced by the results below that show
that the required server bandwidth for SSLRE(3,1), or for SSLRE(2,ε) (i.e., n = 2 with low-rate segment transmissions), is very close to the lower bound derived
in Section 4.1. (That is, the optimal rearrangements of scheduled multicasts have at most very minor impact on
required server bandwidth in these cases.)
For a division of file i into sufficiently many small segments, Appendix B derives the following estimate of
the required server bandwidth for the SSLRE(n,1) technique, in units of the file play rate:
B_SSLRE(n,1) ≈ η_{n,1} ln(N_i/η_{n,1} + 1),    (6)
where η_{n,1} is a positive real constant, depending only on n, that satisfies an equation derived in Appendix B.
The above result assumes Poisson arrivals, but can be generalized in a similar fashion as was done for the lower
bound with unlimited client receive bandwidth. The constant η_{n,1} decreases monotonically in n between
η_{2,1} ≈ 1.62 and 1. Thus, if the server transmits each segment at the file play rate (i.e., r = 1) and client
receive bandwidth is equal to twice the play rate (i.e., n = 2), it is possible (although not necessarily practical) to
provide immediate service with no more than 62% greater server bandwidth than is minimally required when
clients have unbounded receive bandwidth. Further, since η_{3,1} ≈ 1.19, it is possible to achieve nearly all of the
benefit of unbounded client bandwidth, with respect to minimizing required server bandwidth, when
clients have receive bandwidth equal to just three times the play rate.
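The estimate is easy to evaluate numerically. The short Python sketch below plugs in the values η_{2,1} ≈ 1.62 and η_{3,1} ≈ 1.19 quoted above and compares them with ln(N_i + 1), the value the estimate approaches as η → 1 (and, for Poisson arrivals, the behaviour of the Section 4.1 lower bound); the particular request rates, including N_i = 1000 used later in the paper, are chosen only for illustration.

import math

def b_sslre(eta, n_req):
    """Estimated required server bandwidth eta * ln(N/eta + 1), in play-rate units."""
    return eta * math.log(n_req / eta + 1)

for n_req in (10, 100, 1000):
    b_min = math.log(n_req + 1)      # unlimited client receive bandwidth (eta -> 1)
    b3 = b_sslre(1.19, n_req)        # SSLRE(3,1): client bandwidth = 3x play rate
    b2 = b_sslre(1.62, n_req)        # SSLRE(2,1): client bandwidth = 2x play rate
    print(f"N={n_req:5d}  lower bound~{b_min:5.2f}  SSLRE(3,1)~{b3:5.2f}  SSLRE(2,1)~{b2:5.2f}")

For N_i = 1000 the SSLRE(2,1) estimate comes out at roughly 10 streams, which matches the figure quoted for n = 2 delivery in the conclusions.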
Appendix C defines the SSLRE technique for the more general case that the segment transmission rate, r,
can be different than the file play rate. In this case, a client can listen to a number of concurrent segment
transmissions (each at rate r) limited by its total receive bandwidth, n, which may not be an integer. For this more
general family of techniques, Appendix C derives an estimate of the required server bandwidth as
B_SSLRE(n,r) ≈ η_{n,r} ln(N_i/η_{n,r} + 1),    (7)
where η_{n,r} is a positive real constant, depending on n and r, that satisfies an equation derived in Appendix C.
For fixed n, the value of η_{n,r}, and thus the required server bandwidth, is minimized for r tending to zero
(i.e., low-rate segment transmissions). Denoting the value of η_{n,r} in this limiting case by η_{n,r→0}, Appendix C
gives an explicit expression for η_{n,r→0} as a function of n. Note that η_{n,r→0} decreases monotonically in n from
η_{2,r→0} ≈ 1.255 (and approaches 1 as n → ∞, which is the expected result). Thus, for client receive bandwidth n = 2, it is (at least theoretically) possible to provide immediate real-time
streaming with no more than about 25% greater server bandwidth than is minimally required when clients
have unbounded receive bandwidth. Furthermore, for n < 2, there is potential for required server bandwidth to
grow only logarithmically, with a small constant factor, as a function of client request rate. Practical techniques
that exploit this latter potential are explored in [EaVZ00]. Figure 4 shows the lower bound on the required
server bandwidth for unlimited client receive bandwidth from equation (5), and the estimated required server
bandwidth for SSLRE(3,1), SSLRE(2,ε), and SSLRE(2,1) from equation (7), as functions of the client request
rate, N_i. Note that B_SSLRE(3,1) and B_SSLRE(2,ε) are very close to B_minimum, and that for client request rates
up to at least an average of 1000 requests per file play time, even B_SSLRE(2,1) is reasonably competitive with
static broadcast techniques (which require fixed server bandwidth on the order of five to ten streams).
The SSLRE(n,r) techniques can be extended in a straightforward way to operate with finite client buffer
space. However, since the techniques involve very complex server transmission schedules and client receive
schedules, the principal value of these techniques is to determine, approximately, the lowest feasible required
server bandwidth for providing immediate service to client requests when clients have receive bandwidth equal
to n. In the next section we propose a new practical delivery technique and then evaluate the performance of the
new technique by comparing its required server bandwidth against the required server bandwidth for SSLRE.
We then comment on finite client buffer space in the context of the new practical delivery method.
5 Hierarchical Multicast Stream Merging (HMSM)
The results in Figures 3 and 4 show that there is considerable potential for improving performance over the
previous optimized stream tapping/patching/controlled multicast technique and over the optimized dynamic
skyscraper technique. Motivated by these results, we propose a new delivery technique that we call hierarchical
multicast stream merging (HMSM).
The new HMSM technique attempts to capture the advantages of dynamic skyscraper [EaVe98] and
piggybacking [GoLM95, AgWY96a, LaLG98], as well as the strengths of stream tapping/patching [CaLo97,
HuCS98]. In particular, clients that request the same file are repeatedly merged into larger and larger groups,
leading to a hierarchical merging structure (as in dynamic skyscraper or piggybacking). Furthermore, clients are
merged using dynamically scheduled patch streams (as in stream tapping/patching), rather than using
transmission clusters or altering client play rates.
5.1 HMSM Delivery Technique
Key elements of the hierarchical multicast stream merging technique include: (1) each data transmission
stream is multicast so that any client can listen to the stream, (2) clients accumulate data faster than their file
play rate (by receiving multiple streams and/or by receiving an accelerated stream), thereby catching up to
clients that started receiving the file earlier, (3) clients are merged into larger and larger groups, and (4) once
two transmission streams are merged, the clients listen to the same stream(s) to receive the remainder of the file.
The hierarchical multicast stream merging technique is illustrated in Figure 5 for a particular set of request
arrivals for an arbitrary file, assuming the server transmits streams at the play rate (i.e., r = 1) and clients have
receive bandwidth equal to twice the play rate (i.e., n = 2). In this case, denoted by HMSM(2,1), the most efficient way for
a client (or group of clients) to merge with an earlier client or group that requested the same file, is to listen to
the latter's transmission stream, as well as one's own stream.
Figure 4: Required Server Bandwidth for Immediate Real Time File Delivery (required server bandwidth vs. client request rate, N).
One unit of time on the x-axis of Figure 5 corresponds to the total time it takes to deliver the file. One unit of data on the y-axis represents the total data for the file. The
solid lines in the figure represent data transmission streams, which always progress through the file at rate equal
to one unit of data per unit of time. The dotted lines show the amount of useful data that a client or group of
clients has accumulated as a function of time.
In the figure, requests arrive from clients A, B, C, and D at times 0, 0.1, 0.3, and 0.4, respectively. In order
to provide immediate service, each new client is provided a new multicast stream that initiates delivery of the
initial portion of the requested file. Client B also listens to the stream initiated by client A, accumulating data at
rate two, and merging with client A at time 0.2. Client D listens to the stream initiated by client C, and merges
with client C before client C can merge with client A. When C and D merge, both C and D listen to the streams
initiated by A and C until both clients have accumulated enough data to merge with clients A and B.
Note that a hierarchical merging structure would also be formed if clients C and D each listen to and
separately merge with the stream initiated by client A. In this case, the merge of clients C and A (which would
terminate the stream for C) would take place at time 0.6, and the merge of clients D and A (which would
terminate the stream for D) would take place at time 0.8. This alternate hierarchical merging structure, which
would occur in the patching technique with threshold larger than 0.4, would require greater server bandwidth for
delivering the file.
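The merge times quoted above can be checked with a few lines of Python. The helper below encodes the simple catch-up argument for rate-1 streams and a client that listens to its own (or inherited) stream plus one earlier stream: if the group's contiguous data corresponds to a stream started at time s, and it began listening to a target stream (started at s_t) at time f, it has caught up at time s + (f − s_t). This is a sketch of the reasoning behind Figure 5, not code from the paper.

def merge_time(target_start, group_stream_start, listen_from):
    """Time at which a group whose active stream started at group_stream_start,
    and which began listening to a target stream (started at target_start) at
    time listen_from, has all data up to the target's current position."""
    return group_stream_start + (listen_from - target_start)

# Request times from the Figure 5 scenario: A=0, B=0.1, C=0.3, D=0.4 (file length = 1).
m_ab = merge_time(0.0, 0.1, 0.1)          # B catches A            -> 0.2
m_cd = merge_time(0.3, 0.4, 0.4)          # D catches C            -> 0.5
m_cd_a = merge_time(0.0, 0.3, m_cd)       # merged CD catches A    -> 0.8
hier = 1.0 + (m_ab - 0.1) + (m_cd_a - 0.3) + (m_cd - 0.4)   # stream durations A,B,C,D

# Alternative tree: C and D each merge separately with A's stream.
m_ca = merge_time(0.0, 0.3, 0.3)          # C catches A            -> 0.6
m_da = merge_time(0.0, 0.4, 0.4)          # D catches A            -> 0.8
alt = 1.0 + (m_ab - 0.1) + (m_ca - 0.3) + (m_da - 0.4)

print(f"merges: A-B {m_ab}, C-D {m_cd}, CD-A {m_cd_a}, C-A {m_ca}, D-A {m_da}")
print(f"server bandwidth-time: hierarchical {hier:.1f} units vs alternative {alt:.1f} units")

Running the sketch reproduces the merge times 0.2, 0.5, 0.6, and 0.8 cited in the text, and shows 1.7 units of stream time for the hierarchical merging structure versus 1.8 for the alternative, confirming that the alternative requires greater server bandwidth.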
Variants of hierarchical multicast stream merging differ according to the precise policy used to determine
which clients to merge with what others, and in what order, as well as according to what (existing or new)
streams are listened to by clients so as to accomplish the desired merges. For homogeneous clients with receive
bandwidth equal to twice the streaming rate, [EaVZ99] proposes and evaluates several heuristic policies for
merging streams that are transmitted at the file play rate. One particularly simple proposed policy dictates that
each client listens to the closest target (i.e., the most recently initiated earlier stream that is still active) in
addition to its own stream, and that merges occur in time-order (i.e., earliest merge first). The policy
evaluations show that this closest target/earliest merge first (CT) policy performs nearly as well as the off-line
optimal merging policy (in which client request arrivals are known in advance, streams are delivered at the play
rate, and the merges that lead to least total server bandwidth are performed) [EaVZ99]. Hierarchical multicast
stream merging policies for homogeneous clients that have receive bandwidth less than twice the play rate are
considered in [EaVZ00]. On-going research considers other contexts.
5.2 Required Server Bandwidth for HMSM
Figure
6 provides the required server bandwidth for HMSM(2,1), as obtained from simulation, assuming
Poisson arrivals and optimal merges that are computed from known client request arrival times using a dynamic
programming technique adapted from [AgWY96a].
Figure 5: Example of Hierarchical Multicast Stream Merging (position in the media file, y-axis, versus time, x-axis; the figure shows the streams and accumulated-data progress for clients A and B, the merged group A and B, clients C and D, the merged group C and D, and the fully merged group A, B, C, D).
As shown in [EaVZ99], there are simple heuristics for
determining merges with unknown future client request arrival times, such as closest target/earliest merge first,
that yield very nearly the same performance as for the offline optimal merges considered here. Also shown in
the figure are the lower bound given in equation (5), and the required server bandwidths for SSLRE(2,1), for
optimized stream tapping/patching, and for the dynamic skyscraper system with progression 1,1,2,2,4,4,8,8,.
and optimal choices of K and W. The results show that HMSM(2,1) yields uniformly good performance,
substantially improving on previous techniques. In fact, HMSM(2,1) has nearly identical performance to
SSLRE(2,1), which suggests that there is little scope for further improvement for policies that are simple to
implement, assuming that B SSLRE(n,1) provides accurate insight into the lower bound on required server
bandwidth when streams are transmitted at the file play rate, which seems likely to be the case. Note that the
similarity in performance between HMSM(2,1) and SSLRE(2,1) is also perhaps surprising, given the simplicity
of the HMSM streams as compared with the complexity of SSLRE segment schedules.
An analytic expression for the required server bandwidth for hierarchical multicast stream merging appears
to be quite difficult to obtain. However, for Poisson arrivals and N_i up to 1000, recalling from equation (6) that
B_SSLRE(2,1) is approximately equal to 1.62 ln(N_i/1.62 + 1), the results in Figure 6 show that the required server
bandwidth for HMSM(2,1) with Poisson arrivals is also reasonably well approximated by 1.62 ln(N_i/1.62 + 1).
Furthermore, Appendix D derives an upper bound on the required bandwidth with optimal offline merging, client receive bandwidth
equal to twice the file play rate, and an arbitrary client request arrival process.
Comparing this upper bound with the approximation for Poisson arrivals shows that the bound is quite
conservative for bursty client request arrivals. However, the bound demonstrates that the required server
bandwidth for HMSM with client receive bandwidth equal to twice the play rate is logarithmic in the client
request rate for any request arrival pattern.
5.3 Finite Buffer Space and Client Interactivity
This paper has thus far discussed and analyzed multicast delivery techniques assuming that clients can
buffer all data that is received ahead of its scheduled playback time, and assuming the client does not perform
any interactive functions such as pause, rewind, fast forward, skip back, or skip ahead. On the other hand, each
of the techniques is easily extended to handle either limited client storage or interactive client requests. For
example, Sen et al. have explored how the stream tapping/patching delivery technique should be modified to
accommodate constrained client buffer space [SGRT99].
For the HMSM technique, if a client does not have the buffer space to implement a given merge, that
particular merge is simply not scheduled. The impact of limited client buffer space on the performance of
HMSM is studied in [EaVZ99] for client receive bandwidth equal to twice the play rate, and in [EaVZ00] for client receive bandwidth less than twice the file play rate.
Figure 6: Required Server Bandwidth for Hierarchical Multicast Stream Merging (required server bandwidth vs. client request rate, N; the curves include the lower bound, SSLRE(2,1), HMSM(2,1), optimized patching, and Dyn Sky(1,1,2,2,4,4,...)).
Both studies show that, if clients can store 5-10% of
the full file and client request arrivals are Poisson, the impact of limited client buffer space on required server
bandwidth is fairly small [EaVZ99, EaVZ00]. The intuitive explanation for this result is that when client
requests are bursty, buffer space equal to 5-10% of the file enables most of the merges to take place.
The HMSM technique is also easily extended for interactive client requests. For example, with fast
forward, the client is given a new multicast stream during the fast forward operation, and when the fast forward
operation is complete, the new stream is merged with other streams as in the standard HMSM policy. Similarly,
for pause, rewind, skip back/ahead, or other interactive requests, the client is given a new stream at the start or
end of the interactive request (as appropriate), and the new stream is merged with other streams at the end of the
interactive operation. Disk storage techniques that support many of the interactive functions are discussed, for
example, in [SaMR00]. The required server bandwidth for supporting interactive functions depends on the
frequency, type, and duration of the interactive requests. The extra server bandwidth needed during the
interactive requests will be the same for all multicast delivery techniques. Exploring the full impact of client
interactivity on the relative required server bandwidth for various delivery techniques is left for future work.
6 Conclusions
This paper has investigated the required server bandwidth for on-demand real-time delivery of large
popular data files, assuming that a multicast capability is available so that multiple clients can share reception of
a single data transmission. As explained in Section 1, we defined required server bandwidth to be the average
server bandwidth used to deliver a file with a given client request rate, when the server has unlimited (disk and
network) bandwidth.
We developed a new implementation of the partitioned dynamic skyscraper delivery technique that provides
immediate service to clients more simply and easily than the original dynamic skyscraper technique. We
defined a method for determining the optimal parameters (i.e., K and W) of this new dynamic skyscraper system
for a given client request rate, and derived the required server bandwidth, which is logarithmic with a constant
factor between two and three in the client request rate. Thus, at moderate to high client request rates, the
dynamic skyscraper system outperforms the optimized stream tapping/patching/controlled multicast technique,
which has required server bandwidth that increases with the square root of the client request rate.
We derived a tight lower bound on the server bandwidth required for any technique that provides
immediate real-time service, and found that this bandwidth must grow at least logarithmically with the client
request rate. By defining and analyzing the required server bandwidth for a new family of delivery techniques
with complex and fragmented segment delivery schedules (SSLRE(n,r)), we showed that required server
bandwidth generally increases as client receive bandwidth (n) decreases, but that for client receive bandwidth
equal to twice the file play rate there is potential for required server bandwidth to be no more than 25%-62%
greater than the lower bound. The results for SSLRE(n,r) also demonstrated that for client receive bandwidth
less than twice the file play rate (i.e., n < 2) there is potential for required server bandwidth to increase only
logarithmically with a small constant factor as client request rate increases.
We proposed a new practical delivery method, hierarchical multicast stream merging (HMSM), which
merges clients into larger and larger groups that share multicast streams, without altering the client play rate.
This new technique has a number of important advantages. First, it is simple to implement and it is easily
extended to operate in the presence of interactive client requests. Second, HMSM with n = 2 outperforms the
optimized stream tapping/patching and optimized dynamic skyscraper techniques at all client request rates, and
not much further improvement in required server bandwidth is possible. The HMSM technique with n = 2 is
also competitive with static broadcast techniques at high client request rates (and is far superior to static
broadcast techniques when client request rate is low or varying). For example, if on average 1000 client
requests arrive during the time it takes to transmit the full file at the file play rate, the required server bandwidth
for providing immediate real-time service to each client using HMSM (n=2) is approximately equal to 10
streams at the file play rate. (Efficient HMSM techniques with n < 2 are defined in [EaVZ00].) Finally, the
slow logarithmic increase in required server bandwidth as a function of client request rate implies that the
HMSM delivery technique could be used to offer a new service to clients that join a live multicast after it has
begun, namely each such client could start at an earlier point in the multicast and then catch up to the live
multicast stream, without much increase in the server bandwidth expended on the live multicast. Note also that,
like the other multicast delivery methods examined in this paper, the HMSM delivery technique is not dependent
on any particular form of multicast support in the network. IP multicast, application-level multicast, broadband
satellite or cable broadcast, or other multicast mechanisms can be used to deliver the HMSM data streams.
On-going research includes: (1) designing HMSM policies for composite objects and for clients with
heterogeneous receive bandwidths and storage capacities, (2) evaluating the impact of file indexing and client
interactive functions on required server bandwidth for HMSM, (3) developing optimal real-time delivery
techniques that support recovery from packet loss, (4) developing optimized caching models and strategies
[EaFV99, EaFV00] for HMSM systems, (5) designing and evaluating disk load balancing strategies for HMSM
systems, and (6) the design and implementation of a prototype system that supports experimental evaluation of
alternative delivery techniques and caching strategies.
--R
"On Optimal Piggyback Merging Policies for Video-On-Demand Systems"
"A Permutation Based Pyramid Broadcasting Scheme for Video- On-Demand Systems"
"Analysis of Educational Media Server Workloads"
"Tailored Transmissions for Efficient Near-Video-On-Demand Service"
"Optimizing Patching Performance"
"Improving Video-on-Demand Server Efficiency Through Stream Tapping"
"Scheduling Policies for an On-demand Video Server with Batching"
"Dynamic Skyscraper Broadcasts for Video-on-Demand"
"Optimized Regional Caching for On-Demand Data Delivery"
"Optimized Caching in Systems with Heterogeneous Client Populations"
"Optimal and Efficient Merging Schedules for Video-on- Demand Servers"
"Bandwidth Skimming: A Technique for Cost-Effective Video- on-Demand"
"Supplying Instantaneous Video-on-Demand Services Using Controlled Multicast"
"Reducing I/O Demand in Video-On-Demand Storage Servers"
"Skyscraper Broadcasting: A New Broadcasting Scheme for Metropolitan Video-on- Demand Systems"
"Patching: A Multicast Technique for True Video-On-Demand Services"
"Fast Data Broadcasting and Receiving Scheme for Popular Video Service"
Queueing Systems: Vol.
"Merging Video Streams in a Multimedia Storage Server: Complexity and Heuristics"
"A Hybrid Broadcasting Protocol for Video On Demand"
"Comparing Random Data Allocation and Data Striping in Multimedia Servers"
"Optimal Patching Schemes for Efficient Multimedia Streaming"
"Metropolitan Area Video-on-Demand Service using Pyramid Broadcasting"
--TR
--CTR
Yi Cui , Klara Nahrstedt, Layered peer-to-peer streaming, Proceedings of the 13th international workshop on Network and operating systems support for digital audio and video, June 01-03, 2003, Monterey, CA, USA
Zheng , Guobin Shen , Shipeng Li, Distributed prefetching scheme for random seek support in peer-to-peer streaming applications, Proceedings of the ACM workshop on Advances in peer-to-peer multimedia streaming, November 11-11, 2005, Hilton, Singapore
Chow-Sing Lin , Yi-Chi Cheng, P2MCMD: A scalable approach to VoD service over peer-to-peer networks, Journal of Parallel and Distributed Computing, v.67 n.8, p.903-921, August, 2007
Yanping Zhao , Derek L. Eager , Mary K. Vernon, Scalable on-demand streaming of nonlinear media, IEEE/ACM Transactions on Networking (TON), v.15 n.5, p.1149-1162, October 2007
Liqi Shi , Phillipa Sessini , Anirban Mahanti , Zongpeng Li , Derek L. Eager, Scalable streaming for heterogeneous clients, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Klara Nahrstedt , Bin Yu , Jin Liang , Yi Cui, Hourglass multimedia content and service composition framework for smart room environments, Pervasive and Mobile Computing, v.1 n.1, p.43-75, March 2005
Juan Segarra , Vicent Cholvi, Convergence of periodic broadcasting and video-on-demand, Computer Communications, v.30 n.5, p.1136-1141, March, 2007
Marcus Rocha , Marcelo Maia , Ítalo Cunha , Jussara Almeida , Sérgio Campos, Scalable media streaming to interactive users, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Marcelo Maia , Marcus Rocha , Ítalo Cunha , Jussara Almeida , Sérgio Campos, Network bandwidth requirements for optimized streaming media transmission to interactive users, Proceedings of the 12th Brazilian symposium on Multimedia and the web, November 19-22, 2006, Natal, Rio Grande do Norte, Brazil
Xiaobo Zhou , Cheng-Zhong Xu, Efficient algorithms of video replication and placement on a cluster of streaming servers, Journal of Network and Computer Applications, v.30 n.2, p.515-540, April, 2007
Anirban Mahanti , Derek L. Eager , Mary K. Vernon , David J. Sundaram-Stukel, Scalable on-demand media streaming with packet loss recovery, IEEE/ACM Transactions on Networking (TON), v.11 n.2, p.195-209, April
Meng Guo , Mostafa H. Ammar , Ellen F. Zegura, Selecting among replicated batching video-on-demand servers, Proceedings of the 12th international workshop on Network and operating systems support for digital audio and video, May 12-14, 2002, Miami, Florida, USA
Anirban Mahanti , Derek L. Eager , Mary K. Vernon , David Sundaram-Stukel, Scalable on-demand media streaming with packet loss recovery, ACM SIGCOMM Computer Communication Review, v.31 n.4, p.97-108, October 2001
Haonan Tan , Derek L. Eager , Mary K. Vernon , Hongfei Guo, Quality of service evaluations of multicast streaming protocols, ACM SIGMETRICS Performance Evaluation Review, v.30 n.1, June 2002
Bashar Qudah , Nabil J. Sarhan, Towards scalable delivery of video streams to heterogeneous receivers, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
Haonan Tan , Derek L. Eager , Mary K. Vernon, Delimiting the range of effectiveness of scalable on-demand streaming, Performance Evaluation, v.49 n.1-4, p.387-410, September 2002
Niklas Carlsson , Derek L. Eager , Mary K. Vernon, Multicast protocols for scalable on-demand download, Performance Evaluation, v.63 n.9, p.864-891, October 2006
Huadong Ma , G. Kang Shin , Weibiao Wu, Best-Effort Patching for Multicast True VoD Service, Multimedia Tools and Applications, v.26 n.1, p.101-122, May 2005
Wun-Tat Chan , Tak-Wah Lam , Hing-Fung Ting , Prudence W. H. Wong, On-line stream merging in a general setting, Theoretical Computer Science, v.296 n.1, p.27-46, 4 March
Yanping Zhao , Derek L. Eager , Mary K. Vernon, Network bandwidth requirements for scalable on-demand streaming, IEEE/ACM Transactions on Networking (TON), v.15 n.4, p.878-891, August 2007
Xiaobo Zhou , Cheng-Zhong Xu, Harmonic Proportional Bandwidth Allocation and Scheduling for Service Differentiation on Streaming Servers, IEEE Transactions on Parallel and Distributed Systems, v.15 n.9, p.835-848, September 2004
Shudong Jin , Azer Bestavros, G
ISMO
Ying Cai , Zhan Chen , Wallapak Tavanapong, Caching collaboration and cache allocation in peer-to-peer video systems, Multimedia Tools and Applications, v.37 n.2, p.117-134, April 2008
Zongpeng Li , Anirban Mahanti, A progressive flow auction approach for low-cost on-demand P2P media streaming, Proceedings of the 3rd international conference on Quality of service in heterogeneous wired/wireless networks, August 07-09, 2006, Waterloo, Ontario, Canada
Stergios V. Anastasiadis , Rajiv G. Wickremesinghe , Jeffrey S. Chase, Circus: Opportunistic Block Reordering for Scalable Content Servers, Proceedings of the 3rd USENIX Conference on File and Storage Technologies, March 31-31, 2004, San Francisco, CA
Shudong Jin , Azer Bestavros, Scalability of multicast delivery for non-sequential streaming access, ACM SIGMETRICS Performance Evaluation Review, v.30 n.1, June 2002
Stergios V. Anastasiadis , Kenneth C. Sevcik , Michael Stumm, Scalable and fault-tolerant support for variable bit-rate data in the exedra streaming server, ACM Transactions on Storage (TOS), v.1 n.4, p.419-456, November 2005 | video-on-demand;streaming media;scalable protocols;performance evaluation;multicast |
628166 | Nondeterministic, Nonmonotonic Logic Databases. | We consider in this paper an extension of Datalog with mechanisms for temporal, nonmonotonic, and nondeterministic reasoning, which we refer to as Datalog++. We show, by means of examples, its flexibility in expressing queries concerning aggregates and the data cube. Also, we show how iterated fixpoint and stable model semantics can be combined for the purpose of clarifying the semantics of Datalog++ programs and supporting their efficient execution. Finally, we provide a more concrete implementation strategy, on the basis of which the design of optimization techniques tailored for Datalog++ is addressed. | Introduction
Motivations. The name Datalog++ is used in this paper to refer to Datalog
extended with mechanisms supporting:
- a limited form of temporal reasoning, by means of temporal, or stage, arguments
of relations, ranging over a discrete temporal domain, in the style
of [5];
- nonmonotonic reasoning, by means of a form of stratified negation w.r.t. the
stage arguments, called XY-stratification [26];
- nondeterministic reasoning, by means of the nondeterministic choice construct
[12].
Datalog++, which is essentially a fragment of LDL++ [2] and is advocated
in [27, Chap. 10], has revealed itself to be a highly expressive language, with applications in
diverse areas such as AI planning [4], active databases [25], object databases
[8], semistructured information management and Web restructuring [10], data
mining and knowledge discovery in databases [15, 3, 11]. However, a thorough
study of the semantics of Datalog++ is still missing, one that would provide a basis for
sound and efficient implementations and optimization techniques. A preliminary
study of the semantics of Datalog++ is sketched in [4], by discussing the relation
between a declarative (model-theoretic) semantics and a fixpoint (bottom-up)
semantics. This discussion is however informal, and moreover fails to achieve
a precise coincidence of the two semantics for arbitrary programs. We found
therefore motivated an in-depth study of the semantics of Datalog++ programs,
which also explains the reason of the missing coincidence of semantics in [4], and
proposes a general condition which ensures such coincidence.
Objectives and contributions. This paper is aimed at
1. illustrating the expressiveness and flexibility of Datalog++ as a query language
2. providing a declarative semantics for Datalog++, which integrates the tem-
poral, nonmonotonic and nondeterministic mechanisms, and which justifies
the adoption of an iterated fixpoint semantics for the language;
3. providing the basis for query optimization in Datalog++, thus making it
viable an efficient implementation.
To this purpose, we proceed as follows:
1. the use of Datalog++ as a query language is discussed in Section 2;
2. a natural, purely declarative, semantics for Datalog++ is assigned using the
notion of a stable model in Section 3; a constructive semantics is then assigned
using an iterative procedure which exploits the stratification induced by the
progression of the temporal argument;
3. in the main result of this paper, we show that the two semantics are equiva-
lent, provided that a natural syntactic restriction is fulfilled, which imposes
a disciplined use of the temporal argument within the choice construct.
4. On the basis of this result, we introduce in Section 4 a more concrete operational
semantics using relational algebra operators, and, in Section 5, a
repertoire of optimization techniques, especially tailored for Datalog++. In
particular, we discuss how it is possible to support efficient history-insensitive
temporal reasoning by means of real side-effects during the iterated computation
[19].
The material in Sections 2 and 3 is based on results presented in more compact
form in [9, 14], while the material in Sections 4 and 5 is new. In conclusion, this
paper provides a thorough account of the pragmatics, semantics and implementation
of Datalog++.
Related Work. Nondeterminism is introduced in deductive databases by means
of the choice construct. The original proposal in [17] was later revised in [23],
and refined in [12]. These studies exposed the close relationship connecting non-monotonic
reasoning with nondeterministic constructs, leading to the definition
of a stable model semantics for choice. While the declarative semantics
of choice is based on stable model semantics which is intractable in general,
choice is amenable to efficient implementations, and it is actually supported in
the logic database language LDL [20] and its evolution LDL++ [2].
On the other side, stratification has been a crucial notion for the introduction
of nonmonotonic reasoning in deductive databases. From the original idea in [1]
of a static stratification based on predicate dependencies, stratified negation has
been refined to deal with dynamic notions, as in the case of locally stratified
programs [21] and modularly stratified programs [22]. Dynamic, or local, stratification
has a close connection with temporal reasoning, as the progression of
time points yields an obvious stratification of programs-consider for instance
1S [5]. It is therefore natural that non monotonic and temporal reasoning
are combined in several deductive database languages, such as those in [18], [16],
However, a striking mismatch is apparent between the above two lines of
nondeterminism leads to a multiplicity of (stable) models, whereas
stratification leads to a unique (perfect) model. So far, no comprehensive study
has addressed the combination of the two lines, which occurs in Datalog++, and
which requires the development of a non deterministic iterated fixpoint proce-
dure. We notice however the mentioned exception of [4], where an approach to
this problem is sketched with reference to locally stratified programs augmented
with choice. In the present paper, we present instead a thorough treatment of
programs, and repair an inconvenience of the approach in [4] concerning
the incompleteness of the iterated fixpoint procedure.
Preliminaries. We assume that the reader is familiar with the concepts of
relational databases and of the Datalog language [24]. Various extensions of
Datalog are considered in this paper:
- Datalog¬ is the language with unrestricted use of negation in the body of
the rules;
- stratified Datalog is the subset of Datalog¬ consisting of (predicate-)stratified programs, in the sense of [1];
- Datalog 1S is the language introduced in [5], where each predicate may have a
designated argument, called a stage argument, ranging over natural numbers
(where each natural number n is represented by the term s^n(nil));
- Datalog¬ 1S is the language Datalog 1S with an unrestricted use of negation in
rule bodies;
- Datalog++ is the subset of Datalog¬ 1S where negation is used in a disciplined
manner, according to the mechanisms of XY-stratification and nondeterministic
choice, which are introduced in the next Section 2.
2 Query Answering with Datalog++
Datalog, the basis of deductive databases, is essentially a friendly syntax to express
relational queries, and to extend the query facilities of the relational calculus
with recursion. Datalog's simplicity in expressing complex queries impacted
on the database technology, and nowadays recursive queries/views have become
part of the SQL3 standard. Recursive queries find natural applications in all
areas of information systems where computing transitive closures or traversals
is an issue, such as in bill-of-materials queries, route or plan formation, graph
traversals, and so on.
However, it is widely recognized that the expressiveness of Datalog's (recur-
sive) rules is limited, and several extensions, along various directions, have been
proposed. In this paper, we address in particular two such directions, namely
nondeterministic and nonmonotonic reasoning, supported respectively by the
choice construct and the notion of XY-stratification. We introduce these mechanisms
by means of a few examples, which are meant to point out the enhanced
query capabilities.
Nondeterministic choice. The choice construct is used to nondeterministically
select subsets of answers to queries, which obey a specified FD constraint.
For instance, the rule
st_ad(St, Ad) ← major(St, Area), faculty(Ad, Area), choice((St), (Ad)).
assigns to each student a unique, arbitrary advisor from the same area, since
the choice goal constrains the st_ad relation to obey the FD St → Ad. Therefore,
if the base relation major is formed by the tuples {⟨smith, db⟩, ⟨gray, se⟩} and the base relation faculty is formed by the tuples {⟨brown, db⟩,
⟨scott, db⟩, ⟨miller, se⟩}, then there are two possible outcomes for the
query st_ad(St, Ad): either {⟨smith, brown⟩, ⟨gray, miller⟩} or {⟨smith,
scott⟩, ⟨gray, miller⟩}. In practical systems, such as LDL++, one of these
two solutions is computed and presented as a result.
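A small Python sketch can enumerate the choice models of the rule above: each maximal way of pairing students with advisors from the join of major and faculty (on Area) that respects the FD St → Ad is one possible outcome. The relation contents are the ones given in the text; the exhaustive enumeration is only an illustration of the semantics, not of how a system such as LDL++ computes an answer (which materializes just one model).

from itertools import product

major = {("smith", "db"), ("gray", "se")}
faculty = {("brown", "db"), ("scott", "db"), ("miller", "se")}

# Candidate advisors per student, obtained from the join on Area.
candidates = {
    st: sorted(ad for ad, fa in faculty if fa == area)
    for st, area in major
}

# Each choice model picks exactly one advisor per student (FD St -> Ad).
models = [
    set(zip(candidates.keys(), pick))
    for pick in product(*candidates.values())
]
for m in models:
    print(sorted(m))

Running it prints exactly the two outcomes listed above.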
Thus, a first use of choice is in computing nondeterministic, nonrecursive
queries. However, choice can be combined with recursion, as in the following
rules which compute an arbitrary ordering of a given relation r:
ord_r(root, root).
ord_r(X, Y) ← ord_r(_, X), r(Y), choice((X), (Y)), choice((Y), (X)).
Here root is a fresh constant, conveniently used to simplify the program. If the
base relation r is formed by k tuples, then there are k! possible outcomes for the
query ord_r(X, Y), namely a set:
{ord_r(root, root), ord_r(root, t_1), ord_r(t_1, t_2), ..., ord_r(t_{k-1}, t_k)}
for each permutation {t_1, ..., t_k} of the tuples of r. Therefore, in each possible
outcome of the mentioned query, the relation ord r is a total ordering of the
tuples of r. The double choice constraint in the recursive rule specifies that the
successor and predecessor of each tuple of r is unique.
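The intended computation can be mimicked in Python by a nondeterministic fixpoint that repeatedly picks a not-yet-ordered tuple as the successor of the last chosen one, which is how the chosen/diffchoice mechanism described later resolves the two choice goals. This is only a sketch, assuming r is a small finite relation; the random selection stands in for the system's arbitrary choice.

import random

def ord_r(r, seed=None):
    """Build one nondeterministic total ordering of relation r, starting from root."""
    rng = random.Random(seed)
    remaining = set(r)
    last = "root"
    ordering = [("root", "root")]
    while remaining:
        nxt = rng.choice(sorted(remaining))   # one admissible choice; the others are inhibited
        ordering.append((last, nxt))
        remaining.remove(nxt)
        last = nxt
    return ordering

print(ord_r({3, 1, 7}, seed=0))   # one of the 3! possible orderings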
Interestingly, choice can be employed to compute new deterministic queries,
which are inexpressible in Datalog, as well as in pure relational calculus. A
remarkable example is the capability of expressing aggregates, as in the following
program which computes the summation aggregate over a relation r, which uses
an arbitrary ordering of r computed by ord r:
sum_r(root, 0).
sum_r(Y, N) ← sum_r(X, M), ord_r(X, Y), N = M + Y.
total_sum_r(N) ← sum_r(X, N), ¬ord_r(X, _).
Here, sum r(X,N) is used to accumulate in N the summation up to X, with respect
to the order given by ord r. Therefore, the total sum is reconstructed from
sum r(X; N) when X is the last tuple in the order. Notice the use of (stratified)
negation to the purpose of selecting the last tuple. In practical languages, such
as LDL++, some syntactic sugar for aggregation is used as an abbreviation of
the above program [26]:
total_sum_r(sum⟨X⟩) ← r(X).
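The aggregation program can be traced in Python as well: the running sum is accumulated along whatever ordering was produced (for instance by the ord_r sketch above), and the value read off at the last tuple is independent of the particular ordering, which is what makes the nondeterministic program compute a deterministic query. The two hard-coded orderings below are just examples.

def total_sum_r(ordering):
    """Accumulate sums along an ordering of r, as the sum_r/total_sum_r rules do."""
    sums = {"root": 0}
    for x, y in ordering[1:]:          # skip the (root, root) fact
        sums[y] = sums[x] + y
    return sums[ordering[-1][1]]       # value at the tuple with no successor

# Two different orderings of r = {3, 1, 7} give the same total, 11.
print(total_sum_r([("root", "root"), ("root", 3), (3, 1), (1, 7)]))
print(total_sum_r([("root", "root"), ("root", 7), (7, 3), (3, 1)]))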
On the basis of this simple example, more sophisticated forms of aggregation,
such as datacube and other OLAP functions, can be built. As an example, consider
a relation sales(Date, Department, Sale), and the problem of aggregating
sales along the dimensions Date and Department. Three aggregation patterns
are then possible, corresponding to the various facets of the datacube: ⟨Date, ∗⟩,
⟨∗, Department⟩, and ⟨∗⟩. The former two patterns correspond to the aggregation
of sales along a single dimension (respectively Department and Date), and
can be obtained from the original relation by applying the method shown above.
The latter pattern, then, can be obtained by recursively applying such method
to one of the two patterns previously computed, in order to aggregate along the
remaining dimension. In case of several dimensions along which to aggregate we
can simply repeat the process, aggregating at each step along a new (i.e., still
non-aggregated) dimension.
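In Python terms the process just described amounts to repeated grouping and summing, each step collapsing one more dimension; the sketch below does this for a hypothetical sales relation over the two dimensions mentioned in the text (the concrete tuples are invented for illustration).

from collections import defaultdict

sales = [("2023-01-01", "toys", 10), ("2023-01-01", "food", 5),
         ("2023-01-02", "toys", 7)]                     # (Date, Department, Sale)

def aggregate(rel, keep):
    """Sum the Sale measure, keeping only the dimensions whose indices are in `keep`."""
    acc = defaultdict(int)
    for row in rel:
        key = tuple(row[i] if i in keep else "*" for i in range(2))
        acc[key] += row[2]
    return [key + (total,) for key, total in sorted(acc.items())]

by_date = aggregate(sales, keep={0})      # pattern <Date, *>
by_dept = aggregate(sales, keep={1})      # pattern <*, Department>
overall = aggregate(by_date, keep=set())  # pattern <*>, obtained from a previous facet
print(by_date, by_dept, overall, sep="\n")

Note that the final facet is computed from an already aggregated one, mirroring the recursive application described above.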
A thorough account on programming with nondeterminism in deductive
databases can be found in [7, 13].
The semantics of choice is assigned using the so-called stable model semantics
of Datalog¬ programs, a concept originating from autoepistemic logic, which
was applied to the study of negation in Horn clause languages by Gelfond and
Lifschitz [6]. To define the notion of a stable model we need to introduce a
transformation H which, given an interpretation I, maps a Datalog¬ program
P into a positive Datalog program H(P, I), obtained from the ground instances of the rules of P by (i) removing each rule that has a negative literal ¬A in its body with A ∈ I, and (ii) removing all negative literals from the bodies of the remaining rules.
Next, we define S_P(I) as the least Herbrand model of H(P, I).
Then, M is said to be a stable model of P if S_P(M) = M. In general, Datalog¬
programs may have zero, one or many stable models. The multiplicity of stable
models can be exploited to give a declarative account of nondeterminism.
We can in fact define the stable version of a program as follows.
Definition 1. Given a Datalog¬ program P, its stable version SV(P) is
defined as the program obtained from P by replacing all the references
to the choice atom in a rule r: H ← B, choice((X), (Y)) (X and Y are
disjoint vectors of variables that appear also in B) with the atom chosen_r(X, Y).
The chosen_r predicate is defined by the following rules:
chosen_r(X, Y) ← B, ¬diffchoice_r(X, Y).
diffchoice_r(X, Y) ← chosen_r(X, Y′), Y ≠ Y′.
In the above definition, for any fixed value of X, each choice for Y inhibits all
the other possible ones via diffchoice_r, so that in the stable models of SV(P)
there is (only) one of them. Notice that, by construction, each occurrence of a
choice atom has its own pair of chosen and diffchoice atoms, thus bounding
the scope of the atom to the rule it appears in. The various stable models of
the transformed program SV (P ) thus correspond to the choice models of the
original program.
XY-programs. Another notion used in this paper is that of XY-programs originally
introduced in [26]. The language of such programs is Datalog¬ 1S, which admits
negation on body atoms and uses the unary constructor symbol s to represent
a temporal argument, usually called the stage argument. A general definition of
XY-programs is the following:
Definition 2 (XY-stratification). A set P of Datalog¬ 1S rules defining mutually
recursive predicates is an XY-program if it satisfies the following conditions:
1. each recursive predicate has a distinguished stage argument;
2. every recursive rule r is either an X-rule or a Y-rule, where:
- r is an X-rule when the stage argument in every recursive predicates in
r is the same variable,
- r is a Y-rule when (i) the head of r has a stage argument s(J), where J
is a variable, (ii) some goal of r has J as its stage argument, and (iii) the
remaining recursive goals have either J or s(J) as their stage argument.
Intuitively, in the rules of XY-programs, an atom p(J; ) denotes the extension
of relation p at the current stage (present time) J, whereas an atom p(s(J); )
denotes the extension of relation p at the next stage (future time) s(J). By
using a different primed predicate symbol p 0 in the p(s(J); ) atoms, we obtain
the so-called primed version of an XY-program. We say that an XY-program
is XY-stratified if its primed version is a stratified program. Intuitively, if the
dependency graph of the primed version has no cycles through negated edges,
then it is possible to obtain an ordering on the original rules modulo the stage
1 For ease of presentation, here we include the definition of SV (P ) for the case of at
most one choice atom per rule. The definition for the general case can be found
in [13].
arguments. As a consequence, an XY-stratified program is also locally stratified,
and has therefore a unique stable model that coincides with its perfect model
[21].
Let P be an XY-stratified program. Then, for each i ≥ 0, define
P_i = {r[s^i(nil)/I] | r ∈ P, I is the stage argument of the head of r}
(here r[x/I] stands for r where I is replaced by x), i.e., P_i is the set of rule
instances of P that define the predicates with stage argument s^i(nil). Then
the iterated fixpoint procedure for computing the (unique) minimal model of P
can be defined as follows:
1. compute M_0 as the minimal model of P_0;
2. for each j > 0, compute M_j as the minimal model of P_j ∪ M_{j−1}.
Notice that for each j ≥ 0, P_j is stratified by the definition, and hence its perfect
model M j is computable via an iterated fixpoint procedure.
In this paper, we use the name Datalog++ to refer to the language of XY-
programs augmented with choice goals.
3 A Semantics for Datalog++
When choice constructs are allowed in XY-programs, a multiplicity of stable
models exists for any given program, and therefore it is needed to clarify how
this phenomenon combines with the iterated fixpoint semantics of choice-free
XY-programs. This task is accomplished in three steps.
1. First, we present a general result stating that, whenever a Datalog¬ program
P is stratifiable into a hierarchy of recursive cliques (i.e., minimal sets
of mutually recursive rules), any stable model of the entire
program P can be reconstructed by iterating the construction of approximating
stable models, each associated to a clique.
2. Second, we observe that, under a syntactic restriction on the use of the
choice construct that does not compromise expressiveness, Datalog++ programs
can be naturally stratified into a hierarchy of recursive cliques
by using the temporal arguments of recursive predicates.
3. Third, by the observation in 2., we can apply the general result in 1. to
programs, thus obtaining that the stable models of the entire
program can be computed by an iterative fixpoint procedure which follows
the stratification induced by the temporal arguments.
Given a (possibly infinite) program P, consider a (possibly infinite) topological
sort Q_1, Q_2, ... of its distinct recursive cliques, induced by the
dependency relation over the predicates of P. Given an interpretation I, we use
the notation I_i to denote the subset of atoms of I whose predicate symbols are
predicates defined in clique Q_i.
The following observations are straightforward:
- I = ∪_i I_i, and analogously P = ∪_i Q_i;
- the predicates defined in Q_{i+1} depend only on the definitions in Q_1, ..., Q_{i+1};
as a consequence, the interpretation of Q_{i+1} is determined by I_1, ..., I_{i+1}, and
can ignore I_{i+2}, I_{i+3}, ...
The next definition shows how to transform each recursive clique, within the
given topological ordering, in a self-contained program which takes into account
the information deduced by the previous cliques. Such transformation resembles
the Gelfond-Lifschitz transformation reported in Sect. 2.
Definition 3. Consider a program P, a topological sort Q_1, Q_2, ... of its recursive cliques,
and an interpretation I = ∪_i I_i. Now define Q_i^{red(I)} as the program obtained from the ground instances of the rules of Q_i by removing each rule that contains a body literal which is false in I and whose predicate is defined in Q_1, ..., Q_{i−1}, and by removing from the bodies of the remaining rules all the literals whose predicates
are defined in Q_1, ..., Q_{i−1}.
The idea underlying the transformation is to remove from each clique Q i all the
dependencies induced by the predicates which are defined in lower cliques. We
abbreviate Q red(I)
i by Q red
when the interpretation I is clear by the context.
Example 1. Consider the program and the
recursive cliques r:g. Now, consider
the interpretation I = fs; q; rg. Then Q red
red
The following Lemma 1 states the relation between the models of the transformed
cliques and the models of the program. We abbreviate I_1 ∪ ... ∪ I_i by
I^(i), and analogously for Q^(i).
Lemma 1. Given a (possibly infinite) Datalog¬ program P and an interpretation
I, let Q_1, Q_2, ... and I_1, I_2, ... be the topological sorts of
P and I induced by the dependency relation of P. Then the following statements
are equivalent:
1. S_P(I) = I;
2. ∀i > 0: S_{Q_i^{red(I)}}(I_i) = I_i;
3. ∀i > 0: S_{Q^(i)}(I^(i)) = I^(i).
Proof sketch. The proof is structured as follows: (1) () (3) and (2) () (3).
(3) =) (1) We next show that (a) SP (I) ' I , and (b) I ' SP (I).
(a) Each rule in H(P; I) comes from a rule r of P , which in turn appears
in Q (i) for some i, and then I (i) is a model of r, by the hypothesis. No
atom in I n I (i) appears in r, so also I is model of r. I is then a model
of H(P; I), and hence SP (I) ' I .
(b) If A 2 I , then A 2 I (i) for some i, so (by the hypothesis and definition
of SP ) for each I such that I I . Moreover, for
each I 0 such that I readily checked that for each i
I
(1) =) (3) We observe that I = minfI j I which implies:
I
I (i)
)g.
(2) =) (3) We proceed by induction on i. The base case is trivial. In the
inductive case, we next show that (a) S Q (i) (I (i) ) ' I (i) , and (b) vice versa.
(a) Notice that from the induction hypothesis, I (i)
suffices to show that I (i) (by a simple case analysis).
(b) Exploiting the induction hypothesis, we have I (i\Gamma1) ' S Q (i\Gamma1)
(by definition of H(P; I)). We now show by
induction on n that 8n 0 T n
red
. The base case
In the induction case (n ? 0), if A 2 T n
red
, then
there exists a rule A / b red
red
. Now, by definition of H and Q red
i , there exists a rule:
in Q i such that fc
:e l . Observe now that by definition of H , A / b
Furthermore, by the induction hypothesis and I (i\Gamma1) '
we have the following: fb
Hence, by definition of
, that is A 2 S Q (i) (I (i) ). This
completes the innermost induction, and we obtain that I
(3) =) (2) We proceed is a way similar to the preceding case. To see that
suffices to verify that for each rule instance r with head
A, the following property holds: 8n A 2 T n
red
. For
the converse, we simply observe that I i is a model of Q red
This result states that an arbitrary Datalog¬ program has a stable model
if and only if each of its approximating cliques, according to the given topological
sort, has a local stable model. This result gives us an intuitive idea for computing
the stable models of an approximable program by means of the computation of
the stable models of its approximating cliques.
Notice that Lemma 1 holds for arbitrary programs, provided that a stratification
into a hierarchy of cliques is given. In this sense, this result is more
widely applicable than the various notions of stratified programs, such as that of
modularly stratified programs [22], in which it is required that each clique Q_i^{red}
is locally stratified. On the contrary, we do not require here that each clique is,
in any sense, stratified. This is motivated by the objective of dealing with
nondeterminism, and justifies why we adopt the (nondeterministic) stable model
semantics, rather than other deterministic semantics for (stratified) Datalog¬
programs, such as, for instance, perfect model semantics [21].
We turn now our attention to XY-programs. The result of instantiating the
clauses of an XY-program P with all possible values (natural numbers) of the
stage argument, yields a new program SG(P ) (for stage ground). More precisely,
SG(P) = {r[s^i(nil)/I] | i ≥ 0, r is a rule of P, I is the stage argument of r}.
The stable models of P and SG(P ) are closely related:
Lemma 2. Let P be an XY-program. Then, for each interpretation I: S_P(I) = S_{SG(P)}(I).
Proof sketch. We show by induction that ∀n: T^n_{H(P,I)}(∅) = T^n_{H(SG(P),I)}(∅),
which implies the thesis. The base case is trivial. For the inductive case, observe
that since P is XY-stratified, if A 2 T n+1
H(P;I) (;) then for each A /B
I) such that fB
H(SG(P );I) (;), we have that
Vice versa, if A 2 T n+1
(;) then for each A /
such that fB
H(P;I) (;), we have A/
I). ut
However, the dependency graph of SG(P ) (which is obviously the same as P )
does not induce necessarily a topological sort, because in general XY-programs
are not predicate-stratified, and therefore Lemma 1 is not directly applicable.
To tackle this problem, we distinguish the predicate symbol p in the program
fragment P i from the same predicate symbol in all other fragments P j with j 6= i,
by differentiating the predicate symbols using the temporal argument. Therefore,
if p(i; x) is an atom involved in some rule of P i , its modified version is p i
(x). More
precisely, we introduce, for any XY-program P , its modified version SO(P ) (for
stage-out), defined by SO(P) = ∪_{i≥0} SO(P)_i, where each SO(P)_i is obtained from
the program fragment P i of SG(P ) by extracting the stage arguments from any
atom, and adding it to the predicate symbol of the atom. Similarly, the modified
version SO(I) of an interpretation I is defined. Therefore, the atom p(i; x) is
in I iff the atom p i
(x) is in SO(I), where i is the value in the stage argument
position of relation p.
Unsurprisingly, the stable models of SG(P ) and SO(P ) are closely related:
Lemma 3. Let P be an XY-program. Then, for each interpretation I: SO(S_{SG(P)}(I)) = S_{SO(P)}(SO(I)).
Proof sketch. It is easy to see that SO(H(SG(P), I)) = H(SO(P), SO(I)). Hence, the least
Herbrand models of SO(H(SG(P), I)) and H(SO(P), SO(I)) coincide. □
Our aim is now to conclude that, for a given Datalog++ program P:
(a) SO(P)_0, SO(P)_1, ... is the topological sort over SO(P) in the hypothesis
of Lemma 1; recall that, for i ≥ 0, the clique SO(P)_i consists of the rules
from SO(P) with stage argument i in their heads;
(b) by Lemmas 1, 2 and 3, an interpretation I is a stable model of P iff I can be
constructed as ∪_{i≥0} I_i, where, for i ≥ 0, I_i is a stable model of SO(P)_i^{red(I^(i))},
i.e. the clique SO(P)_i reduced by substituting the atoms deduced at stages
earlier than i.
On the basis of (b) above, it is possible to define an iterative procedure to
construct an arbitrary stable model M of P as the union of the interpretations M_0, M_1, ...
defined as follows:
Iterated stable model procedure.
Base case. M_0 is a stable model of the bottom clique SO(P)_0.
Induction case. For i > 0, M_i is a stable model of SO(P)_i^{red(M^(i))}, i.e. the
clique SO(P)_i reduced with respect to M_0 ∪ ... ∪ M_{i−1}.
The interpretation M = ∪_{i≥0} M_i is called an iterated stable model of P.
It should be observed that this construction is close to the procedure called
iterated choice fixpoint in [4]. Also, following the approach of [13], each local
stable model M i can in turn be efficiently constructed by a nondeterministic
fixpoint computation, in polynomial time.
Unfortunately, the desired result that the notions of stable model and iterated
stable model coincide does not hold in full generality, in the sense that the iterative
procedure is not complete for arbitrary Datalog++ programs, giving rise to
the inconveniences in [4] mentioned in the introduction. In fact, as demonstrated
by the example below, an undisciplined use of choice in Datalog++ programs
may cause the presence of stable models that cannot be computed incrementally
over the hierarchy of cliques.
Example 2. Consider the following simple Datalog++ program P:
In the stable version SV(P) of P, the rule defining predicate p refers to a chosen atom, which is in turn defined by:
chosen(X) ← q(I, X), ¬diffchoice(X).
In general, SO(P)_i can be composed of more than one clique, so that in the above
expression it should be replaced by the sequence of cliques that compose it. However, for ease of
presentation we ignore this, since such a general case is trivially deducible from what
follows.
It is readily checked that SV(P) admits two stable models, but only one of them
is an iterated stable model, and therefore the other model cannot be computed
using the iterated choice fixpoint of [4].
The technical reason for this problem is that the free use of the choice construct
inhibits the possibility of defining a topological sort on SO(P ) based on the value
of the stage argument. In the Example 2, the predicate dependency relation of
dependency among stage i and the stages j ? i, because
of the dependency of the chosen predicate from the predicates q i
for all stages
To prevent this problem, it is suffices to require that choice goals refer the
stage argument I in the domain of the associated functional dependency. The
programs which comply with this constraint are called choice-safe.
The following is a way to turn the program of Example 2 into a choice-safe
program (with a different semantics):
This syntactic restriction, moreover, does not greatly compromise the expressiveness
of the query language, in that it is possible to simulate within this
restriction most of the general use of choice (see [19]).
The above considerations are summarized in the following main result of
the paper, which, under the mentioned restriction of choice-safety, is a direct
consequence of Lemmas 1, 2 and 3.
Theorem 1 (Correctness and completeness of the iterated stable model
procedure).
Let P be a choice-safe Datalog++ program and I an interpretation. Then I is a
stable model of SV (P ) iff it is an iterated stable model of P . ut
The following example shows a computation with the iterated stable model
procedure.
Example 3. Consider the following Datalog++ version of the seminaive pro-
gram, discussed in [26], which non-deterministically computes a maximal path
from node a over a graph g:
Assume that the graph g is such that a single edge leads from node a to node b, and that two distinct edges leave b, reaching nodes c and d. The following interpretations are computed at the successive stages of the iterated stable model procedure: at stage 0 the procedure derives delta_0(a) and all_0(a); at stage 1 it derives delta_1(b) together with the accumulated all relation; at stage 2 it admits two local stable models, one containing delta_2(c) and the other containing delta_2(d), since the choice goal selects exactly one of the two edges leaving b; each alternative is then propagated through the remaining stages until the selected path cannot be extended any further.
By Theorem 1, we conclude that there are two stable models for the program, one for each of the two maximal paths. Clearly, any realistic implementation, such as that provided in LDL++, nondeterministically computes only one of the possible stable models.
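A Python rendering of this stage-by-stage computation is given below. The graph {a→b, b→c, b→d} is a hypothetical instance chosen only to exhibit the branching just described; delta plays the role of the node reached at the current stage, all accumulates the visited nodes, and the random pick stands in for the choice goal.

import random

def maximal_path(edges, start="a", seed=None):
    """Iterated, stage-by-stage computation of one maximal path with a choice goal."""
    rng = random.Random(seed)
    all_nodes, delta, stage = {start}, start, 0
    trace = [(stage, delta, set(all_nodes))]
    while True:
        successors = sorted(y for x, y in edges if x == delta and y not in all_nodes)
        if not successors:                 # no extension: the path is maximal
            return trace
        delta = rng.choice(successors)     # the choice goal selects one successor
        all_nodes.add(delta)
        stage += 1
        trace.append((stage, delta, set(all_nodes)))

edges = {("a", "b"), ("b", "c"), ("b", "d")}
for s in (0, 1):
    print(maximal_path(edges, seed=s))     # different runs can yield either stable model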
4 An Operational Semantics for Datalog++
We now translate the iterated stable model procedure into a more concrete form,
by using relational algebra operations and control constructs. Following the style
of [24], we associate with each predicate p a relation P - same name capitalized.
The elementary deduction step TQ (I) is translated as an assignment to appropriate
relations:
I 0 := TQ (I) /!
i.e., the relations defined in
the clique Q together with the extensional relations, and EVAL(p; Rels) denotes,
in the notation of [24], a single evaluation step of the rules for predicate p with
respect to the current extension of relations in Rels.
We show the translation of Datalog++ cliques incrementally in three steps,
starting with simple Datalog programs and stratified negation, then introducing
the Choice construct and eventually describing how to translate the full language.
The translation of a whole program can be trivially obtained by gathering the
single translated cliques in the natural order.
Datalog with stratified negation. We can apply straightforwardly the transformation
given in [24] for safe stratified Datalog programs, where each negative
literal referring to a previously computed or extensional relation is translated to
the complement of the relation w.r.t. the universe of constants:
Translation Template 1
∀p defined in Q: P := ∅;
repeat
∀p defined in Q: last_P := P; P := EVAL(p, Rels)
until ∀p defined in Q: last_P = P
The translation is illustrated in the following:
Example 4. The program:
is translated to the following naive evaluation procedure, where EVAL(...) is
instantiated with an appropriate relational algebra query:
repeat
∀p defined in the program : last P := P; P := EVAL(p, ...)
until ∀p defined in the program : last P = P
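To make the shape of this naive evaluation loop concrete, the following Python sketch (ours, not taken from the paper or from LDL++) implements the generic fixpoint of Translation Template 1; the relation names and the per-predicate eval_step functions standing in for EVAL(p, Rels) are illustrative assumptions.

# Illustrative sketch of Translation Template 1: naive fixpoint evaluation of a
# clique. Relations are sets of tuples; eval_rules maps each predicate defined
# in the clique to a function computing one EVAL(p, Rels) step.
def naive_fixpoint(eval_rules, rels):
    for p in eval_rules:                     # init the relations of the clique
        rels.setdefault(p, set())
    while True:
        last = {p: set(rels[p]) for p in eval_rules}
        for p, eval_step in eval_rules.items():
            rels[p] = eval_step(rels)        # one EVAL(p, Rels) step
        if all(rels[p] == last[p] for p in eval_rules):
            return rels

# Example: transitive closure path(X,Y) <- edge(X,Y) | path(X,Z), edge(Z,Y).
if __name__ == "__main__":
    edges = {("a", "b"), ("b", "c"), ("b", "d")}
    def eval_path(rels):
        derived = set(edges)
        derived |= {(x, y) for (x, z) in rels["path"] for (z2, y) in edges if z == z2}
        return derived
    result = naive_fixpoint({"path": eval_path}, {"edge": set(edges)})
    print(sorted(result["path"]))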
Adding Choice. Now we need to translate in relational algebra terms the operations
which compose a nondeterministic computation. Following the approach
of [13, 12], we partition the rules of SV (Q) (the stable version of Q) into three
sets:
chosen rules
diffChoice rules
(i.e., the remaining rules)
Now, the non-deterministic fixpoint procedure which computes the stable models
of a choice program is represented by the following:
Translation Template 2
0: Init
∀p defined in SV(Q) : P := ∅; ∀ chosen r : Chosen r := ∅; ∀ diffchoice r : Diffchoice r := ∅
1: Saturation
repeat
∀p defined in O : last P := P; P := EVAL(p, Rels)
until ∀p defined in O : last P = P
2: Gather choices
∀ chosen r defined in C : Chosen′ r := EVAL(chosen r, Rels) − Chosen r
3: Termination test
if ∀ chosen r defined in C : Chosen′ r = ∅ then exit
4: Choice
Execute (fairly) the following:
a. Choose a chosen r defined in C such that Chosen′ r ≠ ∅
b. Choose t ∈ Chosen′ r and set Chosen r := Chosen r ∪ {t}
5: Inhibit other choices
∀ diffchoice r defined in D : Diffchoice r := Diffchoice r ∪ EVAL(diffchoice r, Rels)
At step 1, the procedure tries to derive all possible atoms from the already
given choices (none at the first iteration), so that at step 2 it can collect all
candidate atoms which can be chosen later. If there is any such atom (i.e., we
have not reached the fixpoint of the evaluation), then we can nondeterministically
choose one of them (step 4) and then propagate the effects of such choice (step
5) in order to force the FD which it implies. We are then ready to repeat the
process.
Example 5. The stable version of the students-advisors example seen in section
2 is the following:
st ad(St, Ad) ← major(St, Area), faculty(Ad, Area), chosen(St, Ad).
Following the above translation schema, then, we obtain the following procedure:
0: St ad := ∅; Chosen := ∅; Diffchoice := ∅;
1: St ad := π_{St,Ad}(Major(St, Area) ⋈ Faculty(Ad, Area) ⋈ Chosen(St, Ad))
2: Chosen′ := π_{St,Ad}(Major(St, Area) ⋈ Faculty(Ad, Area) ⋈ ¬Diffchoice(St, Ad)) − Chosen
3: if Chosen′ = ∅ then exit
4b: choose ⟨st, ad⟩ ∈ Chosen′; Chosen := Chosen ∪ {⟨st, ad⟩};
5: Diffchoice := Diffchoice ∪ ({⟨st⟩} × ¬{⟨ad⟩})
Notice that some steps have been slightly modified: (i) step 1 has been sim-
plified, since the only rule in O is not recursive (modulo the chosen predicate);
(ii) here we have only one choice rule, so step 4a becomes useless and then it has
been ignored; (iii) step 5 is rewritten in a brief and more readable form, which
has exactly the same meaning of that shown in the above general schema.
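A minimal sketch of how the choice fixpoint of Translation Template 2 behaves on the students-advisors clique is given below (Python, purely illustrative); the relation layouts and the random tie-break used in place of a fair nondeterministic choice are our assumptions, not part of the LDL++ implementation.

# Illustrative sketch of Translation Template 2 on the students-advisors clique
# of Example 5: repeatedly gather candidate (student, advisor) pairs, choose one,
# and inhibit the alternatives so that the FD student -> advisor is enforced.
import random

def choose_advisors(major, faculty, seed=0):
    rng = random.Random(seed)
    chosen, diffchoice = set(), set()
    while True:
        # saturation (trivial here): st_ad derivable from the current choices
        st_ad = {(st, ad) for (st, a1) in major for (ad, a2) in faculty
                 if a1 == a2 and (st, ad) in chosen}
        # gather candidate atoms not inhibited by previous choices
        candidates = {(st, ad) for (st, a1) in major for (ad, a2) in faculty
                      if a1 == a2 and (st, ad) not in diffchoice} - chosen
        if not candidates:                        # termination test
            return st_ad
        st, ad = rng.choice(sorted(candidates))   # choose one candidate atom
        chosen.add((st, ad))
        # inhibit any other advisor for the same student
        diffchoice |= {(st, other) for (other, _) in faculty if other != ad}

if __name__ == "__main__":
    major = {("ann", "db"), ("bob", "ai")}
    faculty = {("smith", "db"), ("jones", "db"), ("lee", "ai")}
    print(sorted(choose_advisors(major, faculty)))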
Adding XY-stratification. Analyzing the evaluation procedure of an XY-
clique Q, it is easy to see that at each step n the only atoms which can be
derived have the form p(n, ...), all with the same stage argument, and
the syntactic form of the rules ensures that such rules refer only to atoms
with stage argument n or n − 1. Then, the stage arguments in each rule serve only to
distinguish the literals computed in the current stage I from those computed in
the previous stage I − 1.
Therefore, we can safely omit the stage argument from each XY-recursive
predicate, renaming the literals referring to a previous stage (i.e., those having
stage I inside a rule with head having stage I + 1) by adding the prefix "old ".
This does not apply to exit-rules, in which the stage argument value is significant
and then must be preserved. We denote by Q 0 the resulting rules, and by p 0 the
predicate obtained from each p.
Example 6. The program Q:
is translated into the new program
it suffices to store in an external register J the value of the stage
under evaluation. We can (i) fire the exit-rules having the same stage argument
as J , and then (ii) to evaluate the new rules in Q 0 (which are now stratified
and possibly with choice) as described in the last two sections. When we have
completely evaluated the actual stage, we need to store the newly derived atoms
p in the corresponding old p, to increment J and then to repeat the process in
order to evaluate the next stage. The resulting procedure is the following.
Translation Template 3
0: Init
J := 0; ∀p defined in Q : P := ∅; P′ := ∅; old P′ := ∅
1: Fire exit rules
∀ exit-rule r of Q with stage argument J : fire r
2: Fire Q′
Translate Q′ following the translation templates 1 and 2
3: Update relations
∀p defined in Q : P := P ∪ ({J} × P′); old P′ := P′
4: J := J + 1; repeat from step 1
Here we simply reduce the evaluation of the XY-clique to the iterated evaluation
of its stage instances (step 2) in a sequential ascending order (step 4).
Each stage instance is stratified modulo choice and then it can be broken into
subcliques (step 2) which can be translated by template 1 (if choice-free) or
template 2 (if with choice). The resulting relations (step 3) can be easily obtained
by collecting at each stage J the relations P 0 and translating them into
the corresponding P , i.e. adding to them the stage argument J .
Example 7. Let g/2 be an extensional predicate representing the edges of a
graph. Consider the following clique Q:
The corresponding transformed clique Q is:
can be partitioned into: exit-rule r 0 , subclique Q 0
and subclique
g. Applying the translation template 3 we obtain:
0: J := 0; old Δ′ := ∅; Δ′ := ∅; old All′ := ∅; All′ := ∅
1: if J = 0 then fire the exit-rule r 0
3: old Δ′ := Δ′; Δ := Δ ∪ ({J} × Δ′); old All′ := All′; All := All ∪ ({J} × All′)
4: J := J + 1
Notice that steps 2a and 2b have been simplified w.r.t. translation template 1,
because Q 0 is not recursive and then the iteration cycle is useless (indeed it would
reach saturation on the first step and then exit on the second one).
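The following Python sketch (illustrative only) mirrors the stage-by-stage evaluation of Translation Template 3 on a clique like that of Example 7, assuming the clique performs a breadth-first visit from node a with frontier Δ and accumulator all; relation and variable names are ours, and the termination test checks only the frontier relation, which suffices for this particular clique.

# Illustrative sketch of Translation Template 3: evaluate one stage at a time,
# keeping the stage counter J in a register and the previous stage's relations
# in old_* variables.
def bfs_stages(edges, source="a"):
    stages = {}                       # stage J -> (delta, all_) with the stage argument added back
    old_delta, old_all = set(), set()
    J = 0
    while True:
        if J == 0:                    # fire the exit rules at stage 0
            delta, all_ = {source}, {source}
        else:                         # fire the recursive rules of Q' against old_*
            delta = {y for x in old_delta for (x2, y) in edges
                     if x == x2 and y not in old_all}
            all_ = old_all | delta
        if not delta and not old_delta:
            return stages             # two consecutive empty stages: stop
        stages[J] = (set(delta), set(all_))
        old_delta, old_all = delta, all_
        J += 1

if __name__ == "__main__":
    g = {("a", "b"), ("b", "c"), ("b", "d"), ("d", "e")}
    for j, (delta, all_) in sorted(bfs_stages(g).items()):
        print(j, sorted(delta), sorted(all_))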
5 Optimization of Datalog++ queries
A systematic study of query optimization techniques is realizable on the basis
of the concrete implementation of the iterated stable model procedure discussed
in the previous section. We now sketch a repertoire of ad hoc optimizations for
by exploiting the particular syntactic structure of programs and
queries, and the way they use the temporal arguments.
First of all, we observe that the computations of translation template 3 never
terminate. An obvious termination condition is to check that the relations computed
at two consecutive stages are empty. To this purpose, the translation
template 3 can be modified by inserting the following instruction between step
2 and 3:
if ∀p defined in Q : P′ = ∅ and old P′ = ∅ then exit
A more general termination condition is applicable to deterministic cliques, under
the assumption that the external calls to the predicates of the clique do not
specify particular stages, i.e., external calls are of the form In this case,
the termination condition above can be simplified as follows:
if ∀p defined in Q : P′ = old P′ then exit
Forgetful-fixpoint computations. In many applications (e.g., modeling updates
and active rules [25, 10]) queries are issued with reference to the final
stage only (which represents the commit state of the database). Such queries
often exhibit the form p(I, X), ¬p(s(I), _),
with the intended meaning "find the value X of p in the final state of p". This
implies that (i) when computing the next stage, we can forget all the preceding
states but the last one (see [26]), and (ii) if the stage I such that p(I, X), ¬p(s(I), _) holds
is unique, we can quit the computation process once the above query is satisfied.
For instance, the program in Example 7 with the query Δ(I, X), ¬Δ(s(I), _)
computes the leaf nodes at maximal depth in a breadth-first visit of the graph
rooted in a. To the purpose of evaluating this query, it suffices to (i) keep track
of the last computed stage only, (ii) exit when the current \Delta is empty. The code
for the program of Example 7 is then optimized by:
(i) replacing step 3 with
3: old Δ′ := Δ′; old All′ := All′
i.e., dropping the instructions that record previous stages;
(ii) inserting between steps 2 and 3 the instruction:
if Δ′ = ∅ then exit
Another interesting case occurs when the answer to the query is distributed
along the stages, e.g., when we are interested in the answer to a query such as
which ignores the stage argument. In this case, we can collect the partial
answers via a gathering predicate defined with a copy-rule. For instance, the
all predicate in Example 7 collects all the nodes reachable from a. Then the
query all(I, X), ¬all(s(I), _), which is amenable to the described optimization,
is equivalent to the query Δ(_, X), which on the contrary does not allow it. There-
fore, by (possibly) modifying the program with copy-rules for the all predicate,
we can apply systematically the space optimized forgetful-fixpoint.
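As an illustration of the forgetful-fixpoint idea, the sketch below (Python, ours) evaluates the query Δ(I, X), ¬Δ(s(I), _) over the graph clique while retaining only the last computed stage and exiting as soon as the current Δ becomes empty.

# Illustrative sketch of a forgetful-fixpoint computation: only the last stage
# is kept, and the loop exits as soon as the current delta becomes empty.
def last_nonempty_frontier(edges, source="a"):
    old_delta, old_all = {source}, {source}
    while True:
        delta = {y for x in old_delta for (x2, y) in edges
                 if x == x2 and y not in old_all}
        if not delta:                 # the previous stage is the answer
            return old_delta
        old_delta, old_all = delta, old_all | delta   # forget earlier stages

if __name__ == "__main__":
    g = {("a", "b"), ("b", "c"), ("b", "d"), ("d", "e")}
    print(sorted(last_nonempty_frontier(g)))   # prints ['e']: the frontier at maximal depth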
Delta-fixpoint computations. We already mentioned the presence of a copy-
rule in Example 7:
Its effect is that of copying all the tuples from the stage I to the next one, if
any. We can avoid such useless space occupation, by maintaining for each stage
only the modifications which are to be applied to the original relation in order to
obtain the actual version. For example, the above rule represents no modification
at all, and hence it should not have any effect; indeed, it suffices to keep track
of the additions to the original database requested by the other rule:
which can be realized by a supplementary relation all + containing, at each
stage, the new tuples produced. In the case that we replace the copy-rule with
a delete-rule of the form:
we need simply to keep track of the negative contribution due to the literal ¬q(X),
which can be stored in a relation all−. The extension of all at each stage I can then be obtained
by integrating the original relation with the relations all+ and all− of the stages
J ≤ I. This method is particularly effective when all is a large relation.
To illustrate this point, let us assume that the program of Example 7 is modified
by adding a new exit rule for relation all:
where r is an extensional predicate. The resulting code is then the following:
0: J := 0; old
old All
1: if
3: old
old All 0 := All
4: J := J
In this way we avoid the construction of relation All, i.e., the replication of
relation r at each stage. In fact, All is reconstructed on the fly when needed
(step 2a).
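A small Python sketch of the delta-fixpoint idea follows (illustrative; the relation names initial and all_plus are our assumptions): only the per-stage additions are stored, and the full extension of all is rebuilt on the fly when the recursive rule needs it.

# Illustrative sketch of a delta-fixpoint computation: instead of copying the
# (possibly large) relation all at every stage, only the per-stage additions
# all_plus[J] are stored, and the full extension is reconstructed on demand.
def reachable_with_deltas(edges, initial, source="a"):
    all_plus = {0: {source}}          # stage 0 adds the source on top of the initial relation
    delta = {source}
    J = 0
    while delta:
        current_all = initial.union(*all_plus.values())   # reconstruct all on the fly
        delta = {y for x in delta for (x2, y) in edges
                 if x == x2 and y not in current_all}
        J += 1
        all_plus[J] = delta
    return all_plus

if __name__ == "__main__":
    g = {("a", "b"), ("b", "c"), ("c", "d")}
    r = {"x", "y"}                    # stands for a large extensional relation added by an extra exit rule
    for j, added in sorted(reachable_with_deltas(g, r).items()):
        print(j, sorted(added))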
Side-effect computations. A direct combination of the previous two techniques
gives rise to a form of side-effect computation. Let us consider, as an ex-
ample, the nondeterministic ordering of an array performed by swapping at each
step any two elements which violate the ordering. Here, the array a = ⟨a 1, ..., a n⟩
is represented by the relation ar with extension
ar(0, 1, a 1), ..., ar(0, n, a n).
At each stage i we nondeterministically select an unordered pair x; y of
elements, delete the array atoms ar(i; p1; x) and ar(i; p2; y) where they ap-
pear, and add the new atoms ar(s(i); p1; y) and ar(s(i); p2; x) representing the
swapped pair. The query allows a forgetful-fixpoint computation (in particular,
the stage selected by the query is unique), and the definition of predicate ar is composed
of delete-rules and add-rules. This means that at each step we can
(i) forget the previously computed stages (but the last), and (ii) avoid copying
most of relation ar, keeping track only of the deletions and additions to be per-
formed. If the requested updates are immediately performed, the execution of the
proposed program, then, boils down to the efficient iterative computation of the
following (nondeterministic) Pascal-like program:
while ∃ I, J with I < J and a[I] > a[J] do swap a[I] and a[J]
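The sketch below (Python, illustrative) shows this side-effect reading of the program: because the query allows a forgetful fixpoint and ar is maintained through deltas, each stage reduces to an in-place swap; the random selection stands in for the nondeterministic choice.

# Illustrative sketch of the side-effect computation for the array-ordering
# program: updates are applied in place, so each stage is just one swap of a
# pair of elements that violates the ordering.
import random

def nondeterministic_sort(a, seed=0):
    rng = random.Random(seed)
    a = list(a)
    while True:
        violations = [(i, j) for i in range(len(a)) for j in range(i + 1, len(a))
                      if a[i] > a[j]]
        if not violations:            # no unordered pair left: the final stage is reached
            return a
        i, j = rng.choice(violations) # nondeterministically pick a violating pair
        a[i], a[j] = a[j], a[i]       # side effect: swap in place instead of copying the relation

if __name__ == "__main__":
    print(nondeterministic_sort([3, 1, 4, 1, 5, 9, 2, 6]))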
6 Conclusions
The work reported in this paper, concerning fixpoint/operational semantics and
optimization of a logic database language for non deterministic and nonmonotonic
reasoning, constitutes the starting point for an actual implemented sys-
tem. Such project is currently in progress, on the basis of the LDL++ system
developed at UCLA. We plan to incorporate the proposed optimization into the
LDL++ compiler, for the purpose of:
- evaluating how effective the proposed optimizations are for realistic LDL++
programs, i.e., whether they yield better performance or not,
- evaluating how applicable the proposed optimizations are for realistic LDL++
programs, i.e., how often they can be applied,
- experimenting with the integration of the proposed optimizations with the classical
optimization techniques, such as magic-sets.
--R
Towards a theory of declarative knowledge.
A Classification-based Methodology for Planning Audit Strategies in Fraud Detection
The Logic of Totally and Partially Ordered Plans: a Deductive Database Approach.
Temporal deductive databases.
The Stable Model Semantics for logic programming.
Programming with non Determinism in Deductive Databases.
Query Answering in Non deterministic Non monotonic logic databases.
A Deductive Data Model for Representing and Querying Semistructured Data.
Experiences with a logic-based knowledge discovery support environment
Semantics and Expressive Power of Non Deterministic Constructs for Deductive Databases.
On the Effective Semantics of Temporal
Integration of deduction and induction for mining supermarket sales data.
ELS programs and the efficient evaluation of non-stratified programs by transformation to ELS
A logical framework for active rules.
Nondeterminism and XY-Stratification in Deductive Databases (in Italian)
A Logic Language for Data and Knowledge Bases.
Every logic program has a natural stratification and an iterated fix point model.
Modular Stratification and Magic Sets for Datalog Program with Negation.
Stable Models and Non-determinism in Logic Programs with Negation
Principles of Database and Knowledge-base Systems
Active Database Rules with Transaction Conscious Stable Model Se- mantics
Negation and Aggregates in Recursive Rules: The LDL
--TR
--CTR
Christos Nomikos , Panos Rondogiannis , Manolis Gergatsoulis, Temporal stratification tests for linear and branching-time deductive databases, Theoretical Computer Science, v.342 n.2-3, p.382-415, 7 September 2005
Fosca Giannotti , Giuseppe Manco , Franco Turini, Specifying Mining Algorithms with Iterative User-Defined Aggregates, IEEE Transactions on Knowledge and Data Engineering, v.16 n.10, p.1232-1246, October 2004 | stable models;negation;nondeterminism;logic programming;databases |
628189 | Mining Optimized Association Rules with Categorical and Numeric Attributes. | AbstractMining association rules on large data sets has received considerable attention in recent years. Association rules are useful for determining correlations between attributes of a relation and have applications in marketing, financial, and retail sectors. Furthermore, optimized association rules are an effective way to focus on the most interesting characteristics involving certain attributes. Optimized association rules are permitted to contain uninstantiated attributes and the problem is to determine instantiations such that either the support or confidence of the rule is maximized. In this paper, we generalize the optimized association rules problem in three ways: 1) association rules are allowed to contain disjunctions over uninstantiated attributes, 2) association rules are permitted to contain an arbitrary number of uninstantiated attributes, and uninstantiated attributes can be either categorical or numeric. Our generalized association rules enable us to extract more useful information about seasonal and local patterns involving multiple attributes. We present effective techniques for pruning the search space when computing optimized association rules for both categorical and numeric attributes. Finally, we report the results of our experiments that indicate that our pruning algorithms are efficient for a large number of uninstantiated attributes, disjunctions, and values in the domain of the attributes. | Introduction
Association rules, introduced in [1], provide a useful
mechanism for discovering correlations among the underlying
data. In its most general form, an association
rule can be viewed as being defined over attributes of
a relation, and has the form C
are conjunctions of conditions, and each condition
is either A are values
from the domain of the attribute A i ). Each rule has
an associated support and confidence. Let the support
of a condition C i be the ratio of the number of tuples
satisfying C i and the number of tuples in the relation.
The support of a rule of the form C is then the
same as the support of C 1 - C 2 , while its confidence
is the ratio of the supports of conditions C 1 - C 2 and
. The association rules problem is that of computing
all association rules that satisfy user-specified minimum
support and minimum confidence constraints,
and schemes for this can be found in [1, 2, 6, 10, 11].
For example, consider a relation in a telecom service
provider database that contains call detail infor-
mation. The attributes of the relation are date, time,
src city, src country, dst city, dst country and duration.
A single tuple in the relation thus captures information
about the two endpoints of each call, as well as
the temporal elements of the call. The association rule
(src city = NY) → (dst country = France) would satisfy
the user-specified minimum support and minimum
confidence of 0.05 and 0.3, respectively, if at least 5%
of total calls are from NY to France, and at least 30%
of the calls that originated from NY are to France.
The optimized association rules problem, motivated
by applications in marketing and advertising, was introduced
in [5]. An association rule R has the form
(A1 ∈ [l1, u1]) ∧ C1 → C2, where A1 is a numeric at-
tribute, l1 and u1 are uninstantiated variables, and
C1 and C2 contain only instantiated conditions (that
is, the conditions do not contain uninstantiated vari-
ables). The authors propose algorithms for determining
values for the uninstantiated variables l 1 and u 1
for each of the following cases:
• Confidence of R is maximized and support of the
condition (A1 ∈ [l1, u1]) ∧ C1 is at least the user-specified
support (referred to as the optimized confidence
rule).
• Support of the condition (A1 ∈ [l1, u1]) ∧ C1 is
maximized and confidence of R is at least the user-specified
confidence (referred to as the optimized
support rule).
Optimized association rules are useful for unraveling
ranges for numeric attributes where certain trends
or correlations are strong (that is, have high support
or confidence). For example, suppose the telecom service
provider mentioned earlier was interested in offering
a promotion to NY customers who make calls
to France. In this case, the timing of the promotion
may be critical - for its success, it would be advantageous
to offer it close to a period of consecutive days
in which at least a certain minimum number of calls
from NY are made and the percentage of calls from
NY to France is maximum. The framework developed
in [5] can be used to determine such periods.
Consider, for example, the association rule (date ∈
[l1, u1]) ∧ (src city = NY) → (dst country = France). With
a minimum support of 0.03, the optimized confidence
rule results in the period for which calls from NY in
the period are at least 3% of the total number of calls,
and the percentage of calls from NY that are directed
to France is maximum. With a minimum confidence
of 0.5, the optimized support rule results in the period
during which at least 50% of the calls from NY are to
France, and the number of calls originating in NY is
maximum.
A limitation of the optimized association rules dealt
with in [5] is that only a single optimal interval for
a single numeric attribute can be determined. How-
ever, in a number of applications, a single interval
may be an inadequate description of local trends in
the underlying data. For example, suppose the telecom
service provider is interested in doing upto k promotions
for customers in NY calling France. For this
purpose, we need a mechanism to identify upto k periods
during which a sizable number of calls from NY to
France are made. If association rules were permitted
to contain disjunctions of uninstantiated conditions,
then we could determine the optimal k (or fewer) periods
by finding optimal instantiations for the rule:
(date ∈ [l1, u1] ∨ date ∈ [l2, u2] ∨ ... ∨ date ∈ [lk, uk]) ∧ (src city = NY) →
(dst country = France). The above framework can be
further strengthened by enriching association rules
to contain more than one uninstantiated attribute,
and permitting attributes to be both numeric (e.g.,
date and duration) as well as categorical (e.g., src city,
dst country). Thus, optimal instantiations for the rule
dst France would yield
valuable information about cities and periods with a
fairly high outward call volume, a substantial portion
of which is directed to France. Alternately, information
about cities and specific dates can be obtained from
the rule (src
dst country = France. This information
can be used by the telecom service provider to
determine the most suitable geographical regions and
dates for offering discounts on international long distance
calls to France.
In this paper, we generalize the optimized association
rules problem, described in [5], in three ways - 1)
association rules are permitted to contain disjunctions
over uninstantiated attributes, 2) association rules are
allowed to contain an arbitrary number of uninstantiated
attributes, and uninstantiated attributes can
be either categorical or numeric. We first show that
the problem of computing optimized support and optimized
confidence association rules in our framework is
NP-hard. We then present a general depth first search
algorithm for exploring the search space. The algorithm
searches through the space of instantiated rules
in the decreasing order of the weighted sums of their
confidences and supports, and uses branch and bound
techniques to prune the search space effectively. For
categorical attributes, we also present a graph search
algorithm that, in addition, uses intermediate results
to reduce the search. Finally, for numeric attributes,
we develop techniques to eliminate certain instantiated
rules prior to searching through them. Experimental
results indicate that our schemes perform well for
a large number of uninstantiated attributes, disjunctions
and values in the domain of the uninstantiated
attributes. Proofs of theorems presented in the paper
can be found in [9].
Related Work
Association rules for a set of transactions in which
each transaction is a set of items bought by a customer,
were first studied in [1]. These association rules for
sales transaction data have the form
and Y are disjoint sets of items. Efficient algorithms
for computing them can be found in [2, 7, 6, 10, 11, 3].
In [10, 6], the generalization of association rules to
multiple levels of taxonomies over items is studied.
Association rules containing quantitative and categorical
attributes are studied in [8] and [11]. The work
in [8] restricts association rules to be of the form
suggest ways to extend
their framework to have a range (that is, A
rather than a single value in the left hand side of a
rule. To achieve this, they partition numeric attributes
into intervals. However, they do not consider merging
neighboring intervals to generate a larger interval. In
[11], the authors use a partial completeness measure
in order to determine the partitioning of numeric attributes
into intervals.
The optimized association rule problem was introduced
in [5]. The authors permit association rules to
contain a single uninstantiated condition A1 ∈ [l1, u1]
on the left hand side, and propose schemes to determine
values for variables l 1 and u 1 such that the confidence
or support of the rule is maximized. In [4], the
authors extend the results in [5] to the case in which
rules contain two uninstantiated numeric attributes on
the left hand side. They propose algorithms that discover
optimized gain, support and confidence association
rules for two classes of regions - rectangles and
admissible regions (for admissible regions, the algorithms
compute approximate, not optimized, support
and confidence rules). However, their schemes only
compute a single optimal region. In contrast, our algorithms
are general enough to handle more than two
uninstantiated attributes, which could be either categorical
or numeric. Furthermore, our algorithms can
generate an optimal set of rectangles rather than just a
single optimal rectangle (note that we do not consider
admissible regions or the notion of gain in this paper).
This enables us to find more interesting patterns.
3 Preliminaries
In this section, we define the optimized association
rule problem addressed in the paper. The data is assumed
to be stored in a relation defined over categorical
and numeric attributes. Association rules are
built from atomic conditions, each of which has the
form A i = v i (A i could be either categorical or nu-
meric) or A i ∈ [l i, u i] (only if A i is numeric). For
the atomic condition A i = v i, if v i is a value from the
domain of A i, the condition is referred to as instanti-
ated; else, if v i is a variable, we refer to the condition as
uninstantiated. Likewise, the condition A i ∈ [l i, u i] is
referred to as instantiated or uninstantiated depending
on whether l i and u i are values or variables.
Atomic conditions can be combined using operators
- or - to yield more complex conditions. Instantiated
association rules, that we study in this paper,
have the form C are arbitrary
instantiated conditions. Let the support for an
instantiated condition C, denoted by sup(C), be the
ratio of the number of tuples satisfying the condition C
and the total number of tuples in the relation. Then,
for the association rule R: C defined
as is defined as sup(C1-C2 )
Note that our definition of sup(R) is different from
the definition in [1] where sup(R) was defined to be
Instead, we have adopted the definition
used in [5] and [4]. Also, let minSup and minConf
denote the user-specified minimum support and minimum
confidence, respectively.
The optimized association rule problem requires optimal
instantiations to be computed for an uninstantiated
association rule which has the form U ∧ C1 → C2,
where U is a conjunction of m uninstantiated atomic
conditions over m distinct attributes, and C 1 and C 2
are arbitrary instantiated conditions. Let U i denote an
instantiation of U - thus, U i is obtained by replacing
variables in U with values. An instantiation U i can be
mapped to a rectangle in m-dimensional space - there
is a dimension for each attribute and the co-ordinates
for the rectangle in a dimension are identical to the
values for the corresponding attribute in U i . Two instantiations
U 1 and U 2 are said to be non-overlapping
if the two (m-dimensional) rectangles defined by them
do not overlap (that is, the intersection of the two rectangles
is empty).
Having defined the above notation for association
rules, we present below, the formulations of the optimized
association rule problems.
• Optimized Confidence Problem: Given k
and an uninstantiated rule U ∧ C1 → C2, determine
non-overlapping instantiations U1, ..., Ul
of U with l ≤ k such that sup(R) ≥
minSup and conf(R) is maximized, where R is
the rule (U1 ∨ ... ∨ Ul) ∧ C1 → C2.
• Optimized Support Problem: Given k and
an uninstantiated rule U ∧ C1 → C2, determine
non-overlapping instantiations U1, ..., Ul
of U with l ≤ k such that conf(R) ≥
minConf and sup(R) is maximized, where R is
the rule (U1 ∨ ... ∨ Ul) ∧ C1 → C2.
The problem of computing optimized association
rules requires instantiations U1, ..., Ul to be determined
such that the rule R : (U1 ∨ ... ∨ Ul) ∧ C1 → C2
satisfies user-specified constraints. Suppose for an instantiation
U i of U, I i is the instantiated rule U i ∧ C1 → C2. Then,
for a set S = {I1, ..., Il} of instantiated rules,
sup(S) and conf(S) are defined as follows:
sup(S) = Σ_{I i ∈ S} sup(I i) and conf(S) = Σ_{I i ∈ S} sup(I i) · conf(I i) / Σ_{I i ∈ S} sup(I i).
Then, we have sup(S) = sup(R) and conf(S) =
conf(R). Thus, since (1) for every instantiated rule (or
alternatively, instantiation) I i, sup(I i) and conf(I i)
can be computed by performing a single pass over the
relation, and (2) these, in turn, can be used to compute
supports and confidences for sets of instantiations, the
optimized association rule problem reduces to the following
• Optimized Confidence Problem: Given k,
and sup(I i) and conf(I i) for every instantiation
I i, determine a set S containing at
most k non-overlapping instantiations such that
sup(S) ≥ minSup and conf(S) is maximized.
• Optimized Support Problem: Given k, and
sup(I i) and conf(I i) for every instantiation I i,
determine a set S containing at most k non-overlapping
instantiations such that conf(S) ≥
minConf and sup(S) is maximized.
In the remainder of the paper, we use the above
formulations instead to develop algorithms for the optimized
association rule problems.
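The reformulation relies only on being able to combine per-instantiation statistics; the following small Python helper (ours, not from the paper) shows the arithmetic for sup(S) and conf(S) of a set of non-overlapping instantiations.

# Illustrative helpers: sup(S) and conf(S) for a set S of non-overlapping
# instantiations, derived from the per-instantiation statistics (sup, conf).
def set_support(insts):
    return sum(s for s, _ in insts)

def set_confidence(insts):
    total = sum(s for s, _ in insts)
    return sum(s * c for s, c in insts) / total if total > 0 else 0.0

if __name__ == "__main__":
    S = [(0.02, 0.9), (0.03, 0.6)]    # two instantiated rules
    print(set_support(S), round(set_confidence(S), 3))   # 0.05 0.72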
4 Categorical Attributes
In this section, we present algorithms for computing
optimized support and confidence sets when rules
contain only uninstantiated conditions of the form
A i = v i. Thus, the results of this section are only
applicable to categorical attributes and numeric attributes
restricted to A i = v i (conditions of the form
A i ∈ [l i, u i], where A i is a numeric attribute, are dealt
with in the next section). An example of such a rule
is (date = v1) ∧ (src city = v2) → (dst country = France)
(in the rule, date is a numeric attribute while src city
is categorical).
Due to the above restriction, any two arbitrary instantiations
are always non-overlapping. This property
is essential for the correctness of the pruning technique
used by the graph search algorithm presented in Section
4.4.
4.1 NP-Hardness Result
The problem of computing optimized sets, given
instantiation I i , can be
shown to be intractable, and follows from the following
theorem.
Theorem 4.1: Given sup(I i ) and conf(I i ) for every
instantiation I i , determining if there is a set S containing
an arbitrary number of instantiations such that
conf(S) - minConf and sup(S) - minSup is NP-hard.
In the following subsections, we present schemes for
computing optimized sets that employ techniques for
pruning the search space in order to overcome the complexity
of the problem. In each subsection, we first
present the scheme for computing optimized confidence
sets, and then briefly describe the modifications to the
scheme in order to compute optimized support sets.
1 Note that if the domain of the numeric attribute A i is large,
then it can be partitioned into a sequence of n i intervals, and
successive intervals can be mapped to consecutive integers in the
interval between 1 and n i .
procedure optConfNaive(curSet, curLoc):
1. for i := curLoc to n do {
2.   S := curSet ∪ {instArray[i]}
3.   if sup(S) ≥ minSup and conf(S) > conf(optSet)
4.     optSet := S
5.   if |S| < k
6.     optConfNaive(S, i+1)
7. }
Figure 1: Naive Alg. for Optimized Confidence Set
4.2 Naive Algorithm
Optimized Confidence Set: We begin by presenting
a naive algorithm for determining the optimized
confidence set of instantiations (see Figure 1). In a
nutshell, the algorithm employs depth first search to
enumerate all possible sets containing k or less instan-
tiations, and returns the set with the maximum confidence
and support at least minSup.
The algorithm assumes that instantiations are
stored in the array instArray. The number of instantiations
is n = n1 · n2 ··· nm, where n i is the number
of values in the domain of A i. The algorithm is
initially invoked with arguments curSet=∅ and cur-
Loc=1. The variable optSet is used to keep track of the
optimized set of instantiations encountered during the
execution of the algorithm. The algorithm enumerates
all possible subsets of size k or less by recursively invoking
itself (Step 6) and sets optSet to a set with a greater
confidence than the current optimized set (steps 3 and
4). Each invocation accepts as input curSet, the set
of instantiations to be further extended, and curLoc,
the index of the first instantiation in instArray to be
considered for extending curSet (all instantiations between
curLoc and n are considered). The extended
state is stored in S, and if the number of instantiations
in S is less than k, the algorithm calls itself recursively
to further extend S with instantiations whose index is
greater than the index of all the instantiations in S.
The complexity of the naive algorithm is Σ_{i=1}^{k} C(n, i).
When k ≪ n, the complexity of the algorithm becomes
O(n^k). However, as we showed earlier, if there is no
restriction on the size of the optimized set (that is,
k = n), the problem is NP-hard.
Optimized Support Set: The naive algorithm for
computing the optimized support set is similar to opt-
ConfNaive, except that the condition in Step 3 which
tries to maximize confidence is replaced with the following:
conf(S) ≥ minConf and sup(S) > sup(optSet).
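For concreteness, a Python rendering of the naive depth-first enumeration is sketched below (illustrative; it recomputes set statistics from (sup, conf) pairs rather than maintaining them incrementally as an implementation would).

# Illustrative Python rendering of the naive depth-first enumeration
# (optConfNaive): inst_array holds (sup, conf) pairs, and k, min_sup are the
# problem parameters.
def opt_conf_naive(inst_array, k, min_sup):
    best = None                       # current optimized confidence set (list of indices)
    def sup(sel):
        return sum(inst_array[i][0] for i in sel)
    def conf(sel):
        tot = sup(sel)
        return sum(inst_array[i][0] * inst_array[i][1] for i in sel) / tot if tot else 0.0
    def search(cur, loc):
        nonlocal best
        for i in range(loc, len(inst_array)):
            s = cur + [i]
            if sup(s) >= min_sup and (best is None or conf(s) > conf(best)):
                best = s
            if len(s) < k:
                search(s, i + 1)
    search([], 0)
    return best

if __name__ == "__main__":
    insts = [(0.01, 0.9), (0.02, 0.4), (0.03, 0.7), (0.005, 0.95)]
    print(opt_conf_naive(insts, k=2, min_sup=0.03))   # prints [0, 2]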
4.3 Pruning Using the Current Optimized Set
The naive algorithm exhaustively enumerates all
possible sets with at most k instantiations - this results
in high complexity. However, in the naive algorithm illustrated
in Figure 1, if we know that the confidence of
any set satisfying minimum support and obtained as a
result of extending curSet cannot exceed the confidence
of the current optimized set (that is, optSet), then we
can stop extending curSet immediately and reduce the
search space significantly. In this section, we develop
branch and bound pruning techniques that, with the
aid of the current optimized set, considerably reduce
the overhead of exploring the entire search space.
For our pruning techniques to be effective, it is imperative
that we find a set close to the optimized set
early - since it can then be used to eliminate a larger
number of sub-optimal sets. It may seem logical that
for the optimized confidence problem, since we are trying
to maximize confidence, considering instantiations
with high confidences first may cause the search to converge
on the optimized set more rapidly. However, this
may not be the case since the support of the optimized
confidence set has to be at least minSup. For a high
minimumsupport, it may be better to explore instantiations
in the decreasing order of their supports. Thus,
the order in which instantiations must be considered
by the search algorithm is a non-trivial problem. In order
to investigate this idea of pruning curSet early, we
introduce the notion of the weight of an instantiation
I (denoted by w(I)) and define it below:
w(I) = w1 · conf(I) + w2 · sup(I).
In the definition, w1 and w2 are positive real con-
stants. Thus, the weight of an instantiation is the
weighted sum of both, its confidence and support. Our
search algorithms can then consider instantiations with
higher weights first, and by using different values of
vary the strategy for enumerating sets. In
the remainder of this section, we propose algorithms
that store instantiations in instArray in the decreasing
order of their weights and explore instantiations with
higher weights first. We also present techniques that
exploit the sort order of instantiations to prune the
search space. The variables maxConf and maxSup are
used to store the maximum confidence and maximum
support of all the instantiations in instArray, respectively
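The weight-based ordering can be set up as in the following Python sketch (ours); it returns the instantiation indices sorted by decreasing w(I) together with maxSup and maxConf, which the pruning tests described next rely on.

# Illustrative sketch of the weight-based ordering: instantiations are sorted
# in decreasing order of w(I) = w1*conf(I) + w2*sup(I) before the search.
def sort_by_weight(inst_array, w1=1.0, w2=0.0):
    # inst_array: list of (sup, conf) pairs; returns indices plus derived bounds
    weight = lambda sc: w1 * sc[1] + w2 * sc[0]
    order = sorted(range(len(inst_array)), key=lambda i: weight(inst_array[i]), reverse=True)
    max_sup = max(s for s, _ in inst_array)
    max_conf = max(c for _, c in inst_array)
    return order, max_sup, max_conf

if __name__ == "__main__":
    insts = [(0.01, 0.9), (0.20, 0.1), (0.05, 0.5)]
    print(sort_by_weight(insts, w1=1.0, w2=0.0))   # confidence-only ordering
    print(sort_by_weight(insts, w1=1.0, w2=5.0))   # large w2 pushes high-support instantiations forward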
Optimized Confidence Set: Suppose the current
set of instantiations, curSet, is only extended with
instantiations belonging to the set comprising of
instArray[i], for some i, and instantiations following it
in instArray. The key idea is that we can stop extending
curSet if among instantiations being considered to
extend curSet, there does not exist a set of instantiations
S such that (1) sup(curSet ∪ S) ≥ minSup, and (2) conf(curSet ∪ S) ≥ conf(optSet).
In order to determine whether a set S satisfying
the above conditions (1) and (2) exists, we first derive
the constraints that such a set S must satisfy. If
the constraints are unsatisfiable with the remaining instantiations
that are candidates for extension, a set S
satisfying the conditions (1) and (2) does not exist and
we can stop extending curSet immediately. Let variables
s and c denote sup(S) and conf(S), respectively,
and In the following, we derive
the constraints on s that must be satisfied by any
set S that could be used to extend curSet such that
conditions (1) and (2) hold. Due to the condition (2),
we have the following constraint:
(sup(curSet) · conf(curSet) + s · c) / (sup(curSet) + s) ≥ conf(optSet).
Re-arranging the terms yields the following constraint:
s · (c − conf(optSet)) ≥ sup(curSet) · (conf(optSet) − conf(curSet)) (1)
Also, since the set curSet ∪ S must satisfy minimum
support, and a set S can consist of at most l instantiations,
we require s to satisfy the following two constraints:
s ≥ minSup − sup(curSet) (2) and s ≤ l · maxSup (3).
Note that since the confidence of set S can be
at most maxConf, we have the following constraint: c ≤
maxConf. Combining this constraint with Constraint
(1) results in the following constraint which, if
satisfied by s, ensures that there exists a c that does
not exceed maxConf and at the same time, causes the
confidence of curSet ∪ S to be at least conf(optSet):
s · (maxConf − conf(optSet)) ≥ sup(curSet) · (conf(optSet) − conf(curSet)) (4)
Finally, we exploit the fact that instantiations are
sorted in the decreasing order of their weights and only
instantiations that follow instArray[i] are used to extend
curSet. We utilize the following property of sets
of instantiations.
Theorem 4.2: For an arbitrary set of instantiations S,
there exists an instantiation I ∈ S such that w1 · conf(S) + (w2 · sup(S)) / |S| ≤ w(I).
Thus, since S contains only instantiations with
weights at most w(instArray[i]), and S can contain at
most l instantiations, due to Theorem 4.2, we obtain
the following constraint on s and c:
w1 · c + (w2 · s) / l ≤ w(instArray[i]).
Re-arranging the terms, we get
c ≤ (w(instArray[i]) · l − w2 · s) / (l · w1) (5)
Since c, the confidence of S, must be at least 0, substituting
for c in the above equation results in the
following constraint that prevents s from getting so
procedure optConfPruneOpt(curSet, curLoc):
1. for i := curLoc to n do {
2.   [minS, maxS] := optConfRange(curSet, i)
3.   if minS > maxS
4.     break
5.   S := curSet ∪ {instArray[i]}
6.   if sup(S) ≥ minSup and conf(S) > conf(optSet)
7.     optSet := S
8.   if |S| < k
9.     optConfPruneOpt(S, i+1)
}
Figure 2: Depth first alg. for optimized confidence set
large that c would have to become negative to satisfy
the above constraint.
w(instArray[i]) l
Finally, constraints (1) and (5) can be combined
to limit s to values for which a corresponding value
of c can be determined such that the confidence of
curSet[S does not drop below conf(optSet), and
the weights of instantiations in S do not exceed
w(instArray[i]).
w(instArray[i]) l \Gamma w2 s
w1 l
s
Any set S used to extend curSet to produce a better
set for optimized confidence must satisfy the constraints
(on its support). Thus, if there exists
a value of s satisfying constraints (2)-(7), then curSet
must be extended. Otherwise, it can be pruned without
extending it further.
The procedure for computing the optimized confidence
set, illustrated in Figure 2, is similar to the
naive algorithm except that it invokes optConfRange
to determine if there exists an S that curSet can be
extended with to yield the optimized set. It stops extending
curSet if the range of values [minS, maxS] for
the support of such a set is empty (which happens
when minS ? maxS).
The procedure optConfRange computes the range
[minS, maxS] which is the range of values for s that
satisfy the above constraints. If there does not exist
an s that satisfies constraints (2)-(7), then a range for
which minS ? maxS is returned for convenience. Thus,
for the returned range from optCon-
fRange, we can stop extending curSet. The procedure
takes as input curSet and the index i (in instArray)
of the next instantiation being considered to extend
curSet. A detailed derivation of the ranges satisfying
the above constraints and a description of the procedure
optConfRange can be found in [9].
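The overall branch-and-bound structure of optConfPruneOpt can be sketched as follows (Python, illustrative); conf_range is a hypothetical stand-in for optConfRange and is not the paper's actual range computation - the permissive version used in the example never prunes, so the sketch stays runnable without reproducing the constraint derivations.

# Illustrative sketch of the pruned depth-first search: conf_range(cur, i, best_conf)
# returns the feasible support range [min_s, max_s] for any extension of cur drawn
# from instantiations i..n-1; an empty range prunes the whole subtree.
def opt_conf_prune(inst_array, k, min_sup, conf_range):
    best, best_conf = None, -1.0
    def stats(sel):
        tot = sum(inst_array[i][0] for i in sel)
        c = sum(inst_array[i][0] * inst_array[i][1] for i in sel) / tot if tot else 0.0
        return tot, c
    def search(cur, loc):
        nonlocal best, best_conf
        for i in range(loc, len(inst_array)):
            min_s, max_s = conf_range(cur, i, best_conf)
            if min_s > max_s:         # no extension can beat the current optimized set
                break                 # instantiations are weight-sorted, so stop here
            s = cur + [i]
            sup_s, conf_s = stats(s)
            if sup_s >= min_sup and conf_s > best_conf:
                best, best_conf = s, conf_s
            if len(s) < k:
                search(s, i + 1)
    search([], 0)
    return best

if __name__ == "__main__":
    insts = [(0.01, 0.9), (0.02, 0.4), (0.03, 0.7), (0.005, 0.95)]
    no_prune = lambda cur, i, best_conf: (0.0, 1.0)   # trivially permissive range function
    print(opt_conf_prune(insts, k=2, min_sup=0.03, conf_range=no_prune))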
Optimized Support Set: For optimized support
sets, constraints on s are similar to constraints (1)-
(7) except that conf(optSet) and minSup are replaced
with minConf and sup(optSet), respectively. In other
words, the confidence of curSet [ S must be at least
minConf and the support of curSet [ S should be no
smaller than that of optSet. Thus, optSupRange, the
procedure to compute the range of values for s that
satisfy constraints (1)-(7), is similar to optConfRange
with all occurrences of conf(optSet) and minSup replaced
with minConf and sup(optSet). Also, optSup-
PruneOpt, the procedure for computing optimized sets
invokes optSupRange instead. Note that if
maxConf, then optSupRange always returns [1,0] for
convenience. However, this is a special case for which
the optimized support set can be computed directly
and is simply the set of all instantiations with confidence
maxConf.
4.4 Pruning using Intermediate Sets
The pruning technique presented in the previous
subsection used the current optimized set to reduce
the search space of the depth-first algorithm for computing
optimized sets. A different class of algorithms,
graph search algorithms, maintain a list of intermediate
sets, and in each step, extend one of them. In this
case, not only the current optimized set but also the
intermediate sets can be used for pruning. The graph
search algorithms, however, do incur additional storage
overhead since they have to keep track of intermediate
sets. In this section, we present new techniques for
pruning using intermediate sets.
Optimized Confidence Set: The algorithm, after
the p th step, keeps track of, for the p instantiations
with the highest weight, (1) the current optimized set,
and (2) intermediate sets involving some subset of the
instantiations. The intermediate sets are those that
have the potential to be further extended with the remaining
instantiations to yield the optimized set. In
the p+1 th step, it extends every intermediate set using
the instantiation with the next highest weight, thus
generating new intermediate sets. If any of the new
intermediate sets is better than the current optimized
set, then optSet is replaced. Next, since instantiations
in instArray are stored in the decreasing order of their
weights, procedure optConfRange from the previous
subsection (that used the current optimized set for
pruning) is used to eliminate those intermediate sets
that can never be candidates for the optimized set.
Furthermore, consider two intermediate sets S1 and
S2 such that, for every set S of instantiations that can be used to extend
S2 (to result in a set with confidence at least that of
optSet and support at least minSup), sup(S1 ∪ S) ≥
minSup and conf(S1 ∪ S) ≥ conf(S2 ∪ S). Due to
|S1| ≤ |S2|, it is the case that S1 ∪ S
contains no more instantiations than S2 ∪ S. Thus,
since there is no overlap between any two arbitrary
instantiations, for some set S if S 2 [S could be an optimized
confidence set, then S 1 [S is also an optimized
set. Consequently, it follows that deleting S 2 from the
list of intermediate sets does not affect the ability of
our algorithm to discover the optimized confidence set.
The range of supports for a set S that can be used to
extend S 2 to result in a set with support minSup and
confidence as good as the current optimized set can
be obtained using the procedure optConfRange. Fur-
thermore, if s and c are the support and confidence of
set S, respectively, is the
index of the next instantiation to be considered for extending
the intermediate sets, then due to constraints
(1) and (5) from the previous section, the confidence
of S satisfies the following inequality.
s
w(instArray[i]) l \Gamma w2 s
l w1
Suppose, in addition, we compute the range of supports
for a set S which when used to extend S 1 causes
its support to be at least minSup and for all values of
c satisfying Constraint (8), the confidence of
is at least that of S 2 [ S. Now, if the former range
of supports for set S is contained within the latter,
then it implies that for every set S (with support s
and confidence c satisfying Constraint (8) above) that
can be used to extend S 2 to yield an optimized set,
the same set S can also be used to extend S 1 resulting
in a set with support at least minSup and confidence
greater than or equal to that of S [ S 2 . Thus, S 2 can
be pruned. The challenge is to determine the latter
range, which we compute below, as follows. We required
the support of S 1 [ S to be at least minSup -
this translates to the following constraint on s.
In addition, we required that for all values of c satisfying
Constraint (8), conf(S 1 [S) - conf(S 2 [S). Thus
s must satisfy the following constraint for all values of
c satisfying Constraint (8).
It can be shown that if sup(S 1
a given value of s, the above constraint holds for all
values of c (satisfying Constraint (8)) if it holds for
s and the leftmost term in Constraint (8).
Similarly, if sup(S
of s, Constraint (10) holds for all values of c (satisfy-
ing Constraint (8)) if it holds for s and
rightmost term in Constraint (8). Thus, the range of
values for s that we are interested in are those that
satisfy Constraint (9), and either Constraint (10) with
c replaced with c min if sup(S 1
Constraint (10) with results
in two separate equations of the form A s 2 +B s+C -
with values of A, B and C. The range of values
for s that satisfy the equations can be determined by
procedure optConfPruneInt(intList, curLoc):
Figure 3: Graph search alg. for optimized confidence set
solving the quadratic inequality equation. Due to lack
of space, the procedure optConfCanPrune to decide
whether an intermediate set S 1 can prune another intermediate
set S 2 with the next instantiation to extend
both intermediate sets is presented in [9].
The overall procedure for computing optimized confidence
sets is described in Figure 3. The procedure
accepts as input parameters intList which is a list of
intermediate sets for the first curLoc-1 instantiations
in instArray, and curLoc which is the index of the instantiation
(in instArray) to be considered next for
extending the intermediate sets in intList. The procedure
is initially invoked with arguments intList=;
and curLoc=1, and it recursively invokes itself with
successively increasing values for curLoc.
The procedure begins by extending every set in
intList with instArray[curLoc] and forms a new list
newList containing finstArray[curLoc]g and the extended
sets (steps 1-7). Furthermore, if an extended
set in newList is found to have support at least minSup
and higher confidence than the currently stored optimized
set, then optSet is set to the extended set. In
steps 10-24, those intermediate sets that cannot be extended
further to result in the optimized set are deleted
from intList. These include sets already containing k
instantiations and sets S for which optConfRange(S,
curLoc+1) returns an empty range. In addition, for
any two intermediate sets S 1 and S 2 in intList that can
be extended to result in an optimized set, if one can
prune the other, then the other is deleted (steps 14-
21). Finally, steps 25-27 contain conditions that enable
the algorithm to terminate early - even though
curLoc may be less than n. This occurs when there
are no intermediate sets to expand (that is, intList
;) and the weight of instArray[curLoc+1] is less than
minWeight, the minimum weight required to generate
the optimized set (follows from Theorem 4.2).
In this section, we present algorithms for computing
optimized sets when association rules contain uninstantiated
conditions of the form A
a numeric attribute. Thus, unlike the previous section,
in which an instantiation was obtained by instantiating
each uninstantiated attribute with a single value
in its domain, in this section each uninstantiated attribute
is instantiated with an interval in its domain.
Thus, each instantiation corresponds to a rectangle in
m-dimensional space (in the previous section, each instantiation
corresponded to a point in m-dimensional
space).
Permitting association rules to contain uninstantiated
numeric attributes in conditions of the form
complicates the problem of computing optimized
sets in several ways. First, unlike the categorical
attribute case in which two distinct instantiations
had no overlap (since each instantiation corresponded
to a point), two distinct instantiations may overlap.
This makes it impossible to use intermediate sets for
pruning as described in Section 4.4. For instance, when
computing optimized confidence sets, consider two intermediate
sets S 1 and S 2 such that for every set S
of instantiations that can be used to extend S 2 (to result
in a set with confidence at least that of optSet
and support at least minSup),
and
conf(S may not qualify
to be the optimized set since instantiations in S may
overlap with instantiations in S 1 . However, there may
be no overlap between S and S 2 , and thus it may still
be possible to extend S 2 to result in the optimized set.
As a result, S 1 cannot prune S 2 . This is not a problem
for categorical attributes since there is no overlap
between any pair of instantiations.
The depth first algorithm in Section 4.3, how-
ever, can be modified to compute optimized confidence
sets. Since the optimized set can contain only
non-overlapping instantiations, we must not extend
curSet with an overlapping instantiation instArray[i]
- this cannot result in the optimized set. For a set
S of instantiations and an instantiation I, let the
function overlap(S; I) return true if the rectangle for
some instantiation in S and the rectangle for I over-
lap. Then the modification to optConfPruneOpt is to
have the statement "if overlap(curSet,
false" between steps 1 and 2, and the body of this
if-statement includes the steps 2-8.
The next complication is that if there are m uninstantiated
attributes A1, A2, ..., Am in conditions of
the form A i ∈ [l i, u i], and the domain of A i ranges between
1 and n i, then the total number of instantiations
is Π_{i=1}^{m} n i (n i + 1)/2. In contrast, the number of
possible instantiations if every attribute was categorical
would be n1 · n2 ··· nm. Thus, the number of
instantiations to be examined by our search algorithms
increases dramatically for optimized association rules
with uninstantiated conditions of the form A
In the remainder of this section, we propose a pruning
technique for the optimized confidence problem
that reduces the number of instantiations stored in
instArray, but still guarantees optimality. Reducing
the number of instantiations allows us to lower the
overhead of storing (that is, appending) the instantiations
to instArray, and to incur smaller costs when
sorting the instantiations in the decreasing order of
their weights. More importantly, it can also considerably
reduce the input size to our search algorithms.
Key Observation: As we mentioned earlier, the
number of instantiations for numeric attributes
increases significantly as the number of uninstantiated
attributes increases. An instantiation
for m numeric attributes can be represented as
⟨[x1, y1], ..., [xm, ym]⟩, where the interval
for the uninstantiated numeric attribute A i is bounded
by x i below and y i above. Thus, the x i and y i
determine the bounds for the rectangle for the instantiation
along each of the m axes. For instantiations
I1 = ⟨[x1, y1], ..., [xm, ym]⟩ and
I2 = ⟨[u1, v1], ..., [um, vm]⟩, we say that I1
is contained in I2 if u i ≤ x i and y i ≤ v i for all 1 ≤ i ≤ m (that
is, the rectangle for I1 is contained in the rectangle for
I2).
The following theorem provides the basis for pruning
instantiations in instArray when computing optimized
confidence sets.
Theorem 5.1 : Let I 1 and I 2 be two instantiations
such that I 1 is contained in I 2 . If both I 1 and I 2 have
support at least minSup and conf(I1) ≥ conf(I2), then
there exists an optimized confidence set that does not
contain I 2 .
Instantiation I 2 can be deleted since I 2 's rectangle
is the outer rectangle. Thus, if the optimized
set were to contain I 2 , then replacing I 2 with I 1 results
in an optimized set with at least the same confidence
and no overlapping instantiations. However,
the above pruning rule does not work for optimized
support sets since that would require I 1 to be pruned
(when I 1 's rectangle is contained in I 2 's rectangle and
Thus, if the optimized
set were to contain I 1 , then replacing I 1 with
I 2 results in an optimized set with as high confidence
and support - however, it could contain overlapping
instantiations.
Due to lack of space, we do not report in this pa-
per, the details of the algorithm that uses the above
pruning rule for pruning instantiations from instArray
before optConfPruneOpt is executed. The algorithm,
including an analysis of its time and space complexity,
can be found in [9]. In [9], we show that the time complexity
of the algorithm is O(m n 2
m is the number of numeric attributes considered and
n i is the number of values for attribute A i .
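A direct Python sketch of the pruning rule of Theorem 5.1 is given below (illustrative); instantiations are kept as (intervals, sup, conf) triples, and an instantiation is dropped when it properly contains another one that already meets the support threshold with at least the same confidence.

# Illustrative sketch of the pruning rule of Theorem 5.1: drop I2 if some I1
# contained in it has support >= minSup (then I2 does too) and conf(I1) >= conf(I2).
# Each instantiation is (intervals, sup, conf) with intervals = ((x1,y1),...,(xm,ym)).
def contained(inner, outer):
    return all(xo <= xi and yi <= yo for (xi, yi), (xo, yo) in zip(inner, outer))

def prune_contained(insts, min_sup):
    kept = []
    for rect2, sup2, conf2 in insts:
        dominated = any(contained(rect1, rect2) and sup1 >= min_sup and conf1 >= conf2
                        for rect1, sup1, conf1 in insts if rect1 != rect2)
        if not dominated:
            kept.append((rect2, sup2, conf2))
    return kept

if __name__ == "__main__":
    insts = [(((1, 3),), 0.10, 0.60),    # outer interval [1,3]
             (((1, 2),), 0.06, 0.70),    # contained in [1,3], higher confidence
             (((4, 5),), 0.04, 0.80)]
    print(prune_contained(insts, min_sup=0.05))   # drops the [1,3] instantiation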
6 Experimental Results
In this section, we study the performance of our algorithms
for computing optimized confidence and support
sets. From our experimental results, we establish
that
• Our pruning techniques improve the performance
of both, depth first and graph search algorithms,
significantly.
• The values for w1 and w2 play an important role
in reducing the search times of our algorithms.
• Pruning instantiations prior to performing search,
whenever possible, can result in further improvements
in performance.
• The low execution times of our algorithms in a
large number of cases make them suitable to be
used in practice.
Since naive algorithms that exhaustively enumerate
all possible sets are obviously too expensive, we do not
consider them. Also, the data file is read only once at
the beginning of each algorithm (in order to generate
instantiations). The time for this, in most cases, constitutes
a tiny fraction of the total execution time of
our algorithms. Thus, we do not include the time spent
on reading the data file in our results. Furthermore,
note that the performance of our algorithms does not
depend on the number of tuples in the data file - it
is more sensitive to the number of attributes and the
sizes of their domains.
We performed extensive experiments using a Sun
Ultra-2/200 machine with 512 MB of RAM and running
Solaris 2.5. However, due to lack of space, we do
not report all our experimental results - these can be
found in [9].
Synthetic Datasets: The association rule that we
experimented with, has the form U ∧ C1 → C2, where U
contains m uninstantiated attributes (see Section 3).
For simplicity, we assume that the domains of the uninstantiated
attributes consist of integers ranging from 1
to n. We consider two forms for U. The first, U′, has
the form (A1 = v1) ∧ ... ∧ (Am = vm) and can be used for
both categorical and numeric attributes. The second,
U″, has the form (A1 ∈ [l1, u1]) ∧ ... ∧ (Am ∈ [lm, um]),
and can be used only for numeric attributes. Every
instantiation of U′ (and thus, every point
in m-dimensional space) is assigned a randomly generated
confidence between 0 and 1 with uniform dis-
tribution. Each instantiation of U
assigned a randomly generated support between 0 andn m with uniform distribution.
6.1 Pruning Prior to Search
We begin by studying the effectiveness of pruning
instantiations from instArray prior to performing
search. This technique applies only to optimized confidence
sets when U has the form U 00 - it was introduced
in Section 5 and the detailed algorithm and experimental
result can be found in [9]. In our experiment, the
best execution time for the algorithm without prior
pruning is greater than the best execution time for the
prior pruning algorithm. As a result, in subsequent
experiments, we only use the prior pruning algorithm.
6.2 Sensitivity to w 1 and w 2
Optimized Confidence Sets: For optimized confidence
sets, we fix w 1 at 1 and vary w 2 . When
the support of each instantiation is very low (the average
support of an instantiation is 1
In contrast,
when instantiations with larger rectangles
have larger supports - as a matter of fact, the instantiation
with the largest rectangle has support 1. Thus,
for only low values for minSup are mean-
ingful, and (2) only large values for w 2 impact the performance
of our search algorithms.
We first describe the results for Figure 4-
(a) presents the execution times for both the depth
first as well as the graph search algorithms for values
of minSup between 0.002 and 0.0038. For most minSup
values, both algorithms perform the best when w
- that is, when instantiations are sorted by confidence
only. When minSup is very high (e.g., 0.00038), the algorithms
with w 2 of 500 (that is, the highest value for
that we selected) perform as well or better. The
reason for this is that for a majority of the support
values, the optimized confidence set comprises mainly
of instantiations with high confidences. When instAr-
ray is sorted by confidence, these instantiations are
considered first by the algorithms, allowing a rapid
convergence to the optimized confidence set. As we
increase the value of w 2 , instantiations with higher
confidences are pushed to the end of instArray by instantiations
with larger supports (and possibly smaller
confidences). Thus, the algorithms need to enumerate
more sets before the optimized set is reached, and
this makes them perform poorly. Higher values of w 2 ,
however, are better for large minSup values when the
optimized confidence set contains instantiations with
both high supports and high confidences.
Once instantiations are sorted such that those belonging
to the optimized set are toward the front of
instArray, then the graph search algorithm that uses
intermediate sets for pruning is faster than the depth
first algorithm that uses only the current optimized set.
One of the reasons for this is that the graph search algorithm
prunes the search space more effectively. The
other reason is that the graph search algorithm initially
concentrates on finding the optimized set using
instantiations at the front of instArray and gradually
considers instantiations toward the end. In contrast,
the depth first search algorithm may have to generate
sets containing instantiations located at the end of in-
stArray before all of the instantiations at the front of
instArray have been considered.
Now, we turn our attention to In Figure
4-(b), we plot execution times for the depth first
algorithm with prior pruning as minSup is varied from
0.1 to 0.35. For typically instantiations with
large supports have small confidences and the ones
with high confidences have low supports. This explains
why (1) for all values of w 2 , execution times
for the algorithm increases with minSup, and (2) for
0:75, the algorithm performs the best. As minSup
is increased, the supports for instantiations that
are possible candidates for the optimized set increases.
Thus, with increasing minSup, a larger number of instantiations
need to be considered by the search algo-
rithms, and their performance degrades.
Furthermore, for small values of w2 (e.g., 0), instArray
is sorted by confidence and instantiations with low
supports and high confidences are at the front of instArray.
These instantiations, however, cannot be in
the optimized set if minSup is large - thus, the algorithm
performs the worst when w2 = 0 and minSup is
high. On the other hand, for large values of w2 (e.g.,
2), instantiations with very high supports and low confidences
move up in instArray, causing the algorithm to
perform poorly for smaller values of minSup. The algorithm
performs the best for a wide range of minSup
values when w2 = 0.75, since this results in instantiations
with moderately high values for both supports
and confidences making it to the top of instArray.
6.3 Sensitivity to size of domain
For the data set in which supports and confidences are uniformly
distributed across the instantiations, we found that, for a given k, increasing
the size of the domain n for attributes actually
causes the search times to decrease, since there are
many more instantiations with higher supports and confidences.
However, for the other data set,
even though the number of instantiations
with high supports and confidences increases, a disproportionately
larger number of instantiations (1) with
high confidences have low supports (e.g., points), and
(2) with high supports have low confidences (e.g., instantiations
with large rectangles). The detailed result
can be found in [9].
6.4 Sensitivity to k
Due to lack of space, we present these results in [9].
7 Concluding Remarks
In this paper, we generalized the optimized association
rules problem proposed in [5] in three ways: (1)
association rules are permitted to contain disjunctions
over uninstantiated attributes, (2) association rules are
allowed to contain an arbitrary number of uninstantiated
attributes, and (3) uninstantiated attributes can
be either categorical or numeric. Since the problem
of computing optimized rules is intractable, we had
to develop effective mechanisms for both exploring and
pruning the search space. We assigned a weight
to each instantiation, and our search algorithms considered
instantiations in the decreasing order of their
weights. Thus, based on input parameters (e.g., minimum
support, number of disjunctions), by appropriately
assigning weights to instantiations, the exploration
of the search space can be guided to be effi-
cient. In addition, we proposed a general depth first
algorithm that keeps track of the current optimized
rule and uses it to prune the search space. For categorical
attributes, we also proposed a graph search
algorithm that uses intermediate results to eliminate
paths that cannot result in the optimized rule.
Figure 4: Sensitivity to w1 and w2 (optimized confidence sets). Panel (a) plots execution time against minSup for the Opt variants with w2 = 0, 100, 500 and the Int variant with w2 = 500 (m=4, k=20, n=10, w1=1); panel (b) plots execution time against minSup (k=5, m=1, n=200).
For numeric
attributes, we developed an algorithm for pruning
instantiated rules prior to performing the search
for the optimized confidence rule. Finally, we reported
the results of our experiments that demonstrate the
practicality of the proposed algorithms.
Acknowledgments
We would like to thank Phil Gibbons and Jeff Vitter
for their valuable insights that led to Theorem 4.2 and
its proof. We would also like to thank Narain Gehani,
Hank Korth and Avi Silberschatz for their encouragement
and for their comments on earlier drafts of this
paper. Without the support of Yesook Shim, it would
have been impossible to complete this work.
--R
Mining association rules between sets of items in large databases.
Fast algorithms for mining association rules.
Advances in Knowledge Discovery and Data Mining.
Data mining using two-dimensional optimized association rules: Scheme
Mining optimized association rules for numeric attributes.
Discovery of multiple-level association rules from large databases
Efficient algorithms for discovering association rules.
Mining optimized association rule for categorical and numeric attributes.
Mining generalized association rules.
Mining quantitative association rules in large relational tables.
--TR
--CTR
Mehmet Kaya , Reda Alhajj, Novel approach to optimize quantitative association rules by employing multi-objective genetic algorithm, Proceedings of the 18th international conference on Innovations in Applied Artificial Intelligence, p.560-562, June 22-24, 2005, Bari, Italy
Hye-Jung Lee , Won-Hwan Park , Doo-Soon Park, An efficient algorithm for mining quantitative association rules to raise reliance of data in large databases, Design and application of hybrid intelligent systems, IOS Press, Amsterdam, The Netherlands,
Hui Xiong , Shashi Shekhar , Pang-Ning Tan , Vipin Kumar, Exploiting a support-based upper bound of Pearson's correlation coefficient for efficiently identifying strongly correlated pairs, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Yiping Ke , James Cheng , Wilfred Ng, Mining quantitative correlated patterns using an information-theoretic approach, Proceedings of the 12th ACM SIGKDD international conference on Knowledge discovery and data mining, August 20-23, 2006, Philadelphia, PA, USA
Sameep Mehta , Srinivasan Parthasarathy , Hui Yang, Toward Unsupervised Correlation Preserving Discretization, IEEE Transactions on Knowledge and Data Engineering, v.17 n.9, p.1174-1185, September 2005
Yen-Liang Chen , Kwei Tang , Ren-Jie Shen , Ya-Han Hu, Market basket analysis in a multiple store environment, Decision Support Systems, v.40 n.2, p.339-354, August 2005
Antonin Rozsypal , Miroslav Kubat, Association mining in time-varying domains, Intelligent Data Analysis, v.9 n.3, p.273-288, May 2005 | algorithm;knowledge discovery;data mining;optimized association rules |
628191 | Finding Localized Associations in Market Basket Data. | AbstractIn this paper, we discuss a technique for discovering localized associations in segments of the data using clustering. Often, the aggregate behavior of a data set may be very different from localized segments. In such cases, it is desirable to design algorithms which are effective in discovering localized associations because they expose a customer pattern which is more specific than the aggregate behavior. This information may be very useful for target marketing. We present empirical results which show that the method is indeed able to find a significantly larger number of associations than what can be discovered by analysis of the aggregate data. | Introduction
Market basket data consists of sets of items bought together by customers. One such set of
items is called a transaction. In recent years, a considerable amount of work has been done
in trying to find associations among items in large groups of transactions [3, 4]. Considerable
amount of work has also been done on finding association rules beyond the traditional
support-confidence framework which can provide greater insight to the process of finding
association rules than more traditional notions of support and confidence [2, 7, 8, 12, 21].
However, all of the above methods try to find associations from the aggregate data as opposed
to finding associations in small localized segments which take advantages of the natural
skews and correlations in small portions of the data. Recent results [1, 24] have shown that
there are considerable advantages in using concepts of data locality and localized correlations
for problems such as clustering and indexing. This paper builds upon this flavor of
techniques.
In this paper, we focus on segmenting the market basket data so as to generate extra insight
by discovering associations which are localized to small segments of the data. This has
considerable impact in deriving association rules from the data, since patterns which cannot
be recognized on an aggregate basis can often be discovered in individual segments. Such
associations are referred to as personalized associations and can be applied to more useful
target marketing.
Moreover, our algorithm can be directly adapted to segment categorical data. A categorical
data set has an associated set of attributes, where each attribute takes a finite number of
non-numerical values. A record in the data consists of a set of values, one for each attribute.
Such a record can be transformed into a transaction in a simple manner by creating an
item for each categorical value. However, our method is specifically applicable to the case of
discovering useful associations in market basket data, as opposed to finding well partitioned
clusters in categorical data.
The problem of clustering has been widely studied in the literature [5, 6, 9, 10, 11, 13, 14, 15, ...].
In recent years, the importance of clustering categorical data has received
considerable attention from researchers [13, 14, 16, 17]. In [17], a clustering technique is
proposed in which clusters of items are used in order to cluster the points. The merit in this
approach is that it recognized the fact that there is a connection between the correlation
among the items and the clustering of the data points. This concept is also recognized in
Gibson [14], which uses an approach based on non-linear dynamical systems to the same
effect.
A technique called ROCK [16] which was recently proposed uses the number of common
neighbors between two data points in order to measure their similarity. Thus the method
uses a global knowledge of the similarity of data points in order to measure distances. This
tends to make the decision on what points to merge in a single cluster very robust. At
the same time, the algorithm discussed in [16] does not take into account the global associations
between the individual pairs of items while measuring the similarities between the
transactions. Hence, a good similarity function on the space of transactions should take into
account the item similarities. Moreover, such an approach will increase the number of item
associations reported at the end of the data segmentation, which was our initial motivation
in considering the clustering of data. Recently, a fast summarization based algorithm called
CACTUS was proposed [13] for categorical data.
Most clustering techniques can be classified into two categories: partitional and hierarchical
[19]. In partitional clustering, a set of objects is partitioned into clusters such that the objects
in a cluster are more similar to one another than to other clusters. Many such methods work
with cluster representatives, which are used as the anchor points for assigning the objects.
Examples of such methods include the well known K-means and K-medoid techniques. The
advantage of this class of techniques is that even for very large databases it is possible to
work in a memory efficient way with a small number of representatives and only periodic
disk scans of the actual data points. In agglomerative hierarchical clustering, we start off
with placing each object in its own cluster and then merge these atomic clusters into larger
clusters in bottom up fashion, until the desired number of clusters are obtained. A direct
use of agglomerative clustering methods is not very practical for very large databases, since
the performance of such methods is likely to scale at least quadratically with the number of
data points.
In this paper, we discuss a method for data clustering, which uses concepts from both
agglomerative and partitional clustering in conjunction with random sampling so as to make
it robust, practical and scalable for very large databases. Our primary focus in this paper is
slightly different from most other categorical clustering algorithms in the past; we wish to use
the technique as a tool for finding associations in small segments of the data which provide
useful information about the localized behavior, which cannot be discovered otherwise.
This paper is organized as follows. In the remainder of this section, we will discuss definitions,
notations, and similarity measures. The algorithm for clustering is discussed in section 2,
and the corresponding time complexity in section 3. The empirical results are contained in
section 4. Finally, section 5 contains the conclusions and summary.
1.1 An intuitive understanding of localized correlations
In order to provide an intuitive understanding of the importance of localized correlations,
let us consider the following example of a data set which is drawn from a database of
supermarket customers. This database may contain customers of many types; for example,
those transactions which are drawn from extremely cold geographical regions (such as Alaska)
may contain correlations corresponding to heavy winter apparel, whereas these may not be
present in the aggregate data because they are often not present to a very high degree
in the rest of the database. Such correlations cannot be found using aggregate analysis,
since they are not present to a very great degree on an overall basis. An attempt to find
such correlations using global analysis by lowering the support will result in finding a large
number of uninteresting and redundant "correlations" which are created simply by chance
throughout the data set.
1.2 Definitions and notations
We introduce the main notations and definitions that we need for presenting our method.
Let U denote the universe of all items. A transaction T is a set of items drawn from
U. Thus, a transaction contains information on whether or not an item was bought by a
customer. However, our results can be easily generalized to the case when quantities are
associated with each item.
A meta-transaction is a set of items along with integer weights associated with each item.
We shall denote the weight of item i in meta-transaction M by wt_M(i). Each transaction
is also a meta-transaction, when the integer weight of 1 is associated with each item. We
define the concatenation operation + on two meta-transactions M and M' in the following
way: we take the union of the items in M and M', and define the weight of each item i in
the concatenated meta-transaction by adding the weights of the item i in M and M'. (If an
item is not present in a meta-transaction, its weight is assumed to be zero.) Thus, a meta-
transaction may easily be obtained from a set of transactions by using repeated concatenation
on the constituent transactions. The concept of meta-transaction is important since it uses
these as the cluster representatives.
Note that a meta-transaction is somewhat similar to the concept of meta-document which
is used in Information Retrieval applications in order to represent the concatenation of
documents. Correspondingly, a vector-space model [23] may also be used in order to represent
the meta-transactions. In the vector space model, each meta-transaction is a vector of items,
and the weight associated with a given entry is equal to the weight associated with the
corresponding item.
The projection of a meta-transaction M is defined to be a new meta-transaction M' obtained
from M by removing some of the items. We are interested in removing the items with the
smallest weight in the meta-transaction M . Intuitively, the projection of a meta-transaction
created from a set of items corresponds to the process of ignoring the least frequent (and
thus least significant or most noisy) items for that set.
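As an illustration of these definitions, the short Python sketch below represents a meta-transaction as a mapping from items to integer weights and implements concatenation and a simplified projection that keeps only the d heaviest items. The function names are ours, and the projection omits the weight-threshold detail used later in Figure 3.

from collections import Counter

def to_meta(transaction):
    # A plain transaction becomes a meta-transaction with unit weights.
    return Counter(transaction)

def concatenate(m1, m2):
    # Concatenation '+': union of the items, adding the weights item by item.
    return m1 + m2

def project(meta, d):
    # Simplified projection: keep only the d items with the largest weights,
    # i.e., drop the least frequent (noisiest) items.
    return Counter(dict(meta.most_common(d)))

# Building a small cluster representative from two transactions:
m = concatenate(to_meta({'bread', 'milk', 'beer'}), to_meta({'bread', 'milk', 'diapers'}))
print(project(m, 2))   # e.g., Counter({'bread': 2, 'milk': 2})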
1.3 Similarity Measures
The support of a set of items [3] is defined as the fraction of transactions which contain all
items. The support of a pair of items is an indication of the level of correlation and presence
of that set of items and has often been used in the association rule literature in order to
identify groups of closely related items in market basket data. We shall denote the aggregate
support of a set of items X by sup(X). The support relative to a subset of transactions
C is denoted by sup_C(X).
In order to measure the similarity between a pair of transactions, we first introduce a measure
of the similarity between each pair of items, which we call the affinity between those items.
The affinity between two items i and j, denoted by A(i, j), is the ratio of the percentage of
transactions containing both i and j, to the percentage of transactions containing at least
one of the items i and j. Formally:
A(i, j) = sup({i, j}) / (sup({i}) + sup({j}) - sup({i, j})).
The similarity between a pair of transactions T1 = {i_1, ..., i_m} and T2 = {j_1, ..., j_n} is
defined to be the average affinity of their items. Formally:
Sim(T1, T2) = (1 / (m * n)) * sum_{p=1..m} sum_{q=1..n} A(i_p, j_q).
The similarity between a pair of meta-transactions is defined in an analogous manner, except
that we weigh the terms in the summation by the products of the corresponding item weights.
Let M = {i_1, ..., i_m} and M' = {j_1, ..., j_n} be two meta-transactions. Then, the similarity
of M and M' is defined by:
Sim(M, M') = [ sum_{p,q} wt_M(i_p) * wt_{M'}(j_q) * A(i_p, j_q) ] / [ sum_{p,q} wt_M(i_p) * wt_{M'}(j_q) ].
An interesting observation about the similarity measure is that two transactions which have
no item in common, but for which the constituent items are highly correlated, can have high
similarity. This is greatly desirable, since transaction data is very sparse, and often closely
correlated transactions may not share many items.
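A direct Python rendering of these similarity measures is sketched below; the variable names are ours, and the supports are obtained by a plain scan over the transactions, which is adequate only as an illustration on small data.

from itertools import combinations

def pair_affinities(transactions):
    # A(i, j) = (#transactions containing both i and j) / (#transactions containing i or j).
    single, both = {}, {}
    for t in transactions:
        for i in t:
            single[i] = single.get(i, 0) + 1
        for i, j in combinations(sorted(t), 2):
            both[(i, j)] = both.get((i, j), 0) + 1
    return {(i, j): b / (single[i] + single[j] - b) for (i, j), b in both.items()}

def affinity(aff, i, j):
    if i == j:
        return 1.0            # convention for identical items (not specified in the paper)
    key = (i, j) if i < j else (j, i)
    return aff.get(key, 0.0)

def similarity(t1, t2, aff):
    # Average affinity over all pairs formed by one item of t1 and one item of t2.
    return sum(affinity(aff, i, j) for i in t1 for j in t2) / (len(t1) * len(t2))

data = [{'a', 'b'}, {'a', 'b', 'c'}, {'c', 'd'}, {'b', 'c'}]
aff = pair_affinities(data)
print(similarity({'a', 'b'}, {'c', 'd'}, aff))   # non-zero even though the transactions share no item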
2 The clustering algorithm
In this section we describe the CLASD (CLustering for ASsociation Discovery) algorithm.
The overall approach uses a set of cluster representatives or seeds, which are used in order
to create the partitions. Finding a good set of cluster representatives around which the
partitions are built is critical to the success of the algorithm. We would like to have a total
of k clusters where k is a parameter specified by the user. We assume that k is a user's
indication of the approximate number of clusters desired. The initial number of seeds chosen
by the method is denoted by startsize, and is always larger than the final number of
clusters k.
We expect that many of the initial set of cluster representatives may belong to the same
cluster, or may not belong to any cluster. It is for this reason that we start off with a larger
number of representatives than the target, and iteratively remove the outliers and merge
those representatives which belong to the closest cluster.
In each iteration, we reduce the number of cluster representatives by a factor of alpha. To do
so, we pick the closest pair of representatives and merge them. Thus,
if two representatives belong to the same "natural" cluster, we expect that they
will be merged automatically. This process can be expensive, as hierarchical agglomeration
by straightforward computation requires O(n^2) time for each merge, where n is the total
number of representatives. As we will discuss in detail in a later section, we found that precomputation
and maintenance of a certain amount of information about the nearest neighbors
of each seed was an effective way of implementing this operation efficiently. The overall
algorithm is illustrated in Figure 1. The individual merge operation is illustrated in Figure
2. After each merge, we project the resulting meta-transaction as described in Figure 3.
This helps in removal of the noisy items which are not so closely related to that cluster; and
also helps us speed up subsequent distance computations.
We use partitioning techniques in order to include information from the entire database in
each iteration, so as to ensure that the final representatives are not created only from the
tiny random sample of startsize seeds. However, this can be expensive since the process of
partitioning the full database into startsize partitions can require considerable time in each
iteration. Therefore we pick the option of performing random sampling on the database in
order to assign the transactions to seeds. We increase the value of samplesize by a factor of
ff in each iteration. This is the same factor by which the number of representatives in each
iteration is decreased. Thus, later phases of the clustering benefit from larger samplesizes.
This is useful since robust computations are more desirable in later iterations. In addition,
it becomes more computationally feasible to pick larger samplesizes in later iterations, since
the assignment process is dependant on the current number of cluster representatives. The
process of performing the database (sample) partitions is indicated in the Figure 4. Points
which are outliers would have very few points assigned to them. Such points are removed
automatically by our clustering algorithm. This process is accomplished in the procedure
Kill(·) which is described in Figure 5. The value of threshold that we found to be effective
for the purpose of our experiments was 20% of the average number of transactions assigned
to each representative.
The process continues until one of two conditions is met: either there are at most k clusters
in the set (where k is a parameter specified by the user), or the largest k clusters after the
assignment phase contain a significant percentage of the transactions. This parameter is
denoted by threshold' in the UpdateCriterion procedure of Figure 6. For the purpose of
our algorithm, we picked threshold' to be 33% of the total transactions assigned to all the
representatives.
Thus, unlike many clustering methods whose computation and output are strictly determined
by the input parameter k (equal to the number of output clusters), CLASD has a more
relaxed dependency on k. Thus, the number of final clusters can be smaller than k, and will
correspond to a more "natural" grouping of the data. We regard k as a user's estimate
of the granularity of the partitioning from which he would generate the localized item
associations. The granularity should be sufficient so as to provide significant new information
on localized behavior, whereas it should be restricted enough for each partition to have some
statistical meaning.
Let N be the total number of transactions in the database. The size n of the initial set of seeds
M of randomly chosen transactions is chosen as a function of k and of sqrt(N), scaled by two
constants (in our experiments, we use 30, so that the initial set contains 30k seeds). The reason for choosing the value
of n to depend on sqrt(N) is to ensure that the running time for a merge operation is at
most linear in the number of transactions. At the same time, we need to ensure that the
number of representatives is at least linearly dependent on the number of output clusters
k. Correspondingly, we choose the value of n as discussed above. The other parameters
appearing in Figure 1 were set to fixed values in all the experiments we report in
the next sections. The parameter
beta used in Figure 3 is 10.
Algorithm CLASD(k, startsize, InitSampleSize, alpha)
begin
  Precompute affinities between item pairs;
  M = set of startsize randomly chosen seed transactions; currentsize = startsize;
  samplesize = InitSampleSize;
  while (not termination-criterion) do
  begin
    Construct R by randomly picking samplesize transactions from the database;
    (M, C) = Assign(M, R);
    Kill(M, C, threshold);
    Merge(M, MergeCount);   { reduce the number of representatives by a factor of alpha }
    currentsize = currentsize/alpha; samplesize = samplesize*alpha;
    UpdateCriterion(M, C, k, threshold');
  end;
  { Final partitioning of database }
  Let R be the entire database of transactions;
  (M, C) = Assign(M, R);
  Report(C);
end
Figure 1: The clustering algorithm
Algorithm Merge(Cluster Representatives: M, Integer: MergeCount)
begin
  for i = 1 to MergeCount do
  begin
    Find the closest pair of representatives M_a and M_b in M;
    Replace M_a and M_b in M by M_ab, which is the concatenation of M_a and M_b;
    Project(M_ab, d);
  end;
end
Figure 2: The merging procedure
Algorithm Project(Cluster Representative: M, Number of items: d)
{ beta is a fixed constant }
begin
  sort the items of M in decreasing order of weights;
  let i_1 be the first item in this order;
  for each item i in M do
    if (wt_M(i) < wt_M(i_1)/beta) set wt_M(i) to 0;
  if (more than d items have weight > 0)
    retain only the items with the largest d weights;
  set all remaining weights to 0;
end
Figure 3: Projecting out the least important items
procedure Assign(Cluster Representatives: M, Transactions: T)
begin
  for each T in T do
  begin
    let M_i be the cluster representative for which Sim(M_i, T) is largest;
    Assign T to cluster C_i;
  end;
  for each M_i in M do
    Redefine M_i by concatenating all transactions in C_i to M_i;
  return(M, C);
end
Figure 4: Assigning transactions to clusters
procedure Kill(Cluster Representatives: M, Clusters: C, Integer: threshold)
begin
  for each M_i in M do
    if (C_i contains less than threshold points)
      discard M_i from M;
end
Figure 5: Removing Outlier Representatives
procedure UpdateCriterion(Cluster Representatives: M, Clusters: C, k, threshold')
begin
  if (|M| <= k) then set termination-criterion to true;
  if (largest k clusters in C have >= threshold' transactions) then set termination-criterion to true;
end
Figure 6: Updating the termination criterion
2.1 Implementing the merging operations effectively
Finally, we discuss the issue of how to merge the cluster representatives effectively. The idea
is to precompute some of the nearest neighbors of each representative at the beginning of
each Merge phase, and use these pre-computed neighbors in order to find the best merge.
After each merge operation between a meta-transaction M and its nearest neighbor (denoted
henceforth by nn[M]), we need to delete M and nn[M] from the nearest neighbor lists where
these representatives occurred, and add the merged meta-transaction M' to the appropriate
nearest neighbor lists. If a list becomes empty, we recompute it from scratch. In some
implementations, such as the one discussed in ROCK [16], an ordered list of the distances
to all the other clusters is maintained for each cluster in a heap data structure. This results
in O(log n) time per update and O(n log n) time for finding the best merge. However, the
space complexity can be quadratic in terms of the initial number of representatives n.
Our implementation precomputes only a constant number of nearest neighbors for each
cluster representative. This implementation uses only linear space. Updating all lists takes
O(n) time per iteration, if no list becomes empty. This solution is also better than that of
always recomputing a single nearest neighbor (list size = 1) for each representative, because
maintaining a larger list size reduces the likelihood of many lists becoming empty. Correspondingly,
the likelihood of having to spend O(n^2) time in one iteration is highly decreased. In
our experiments, we maintain the 5 closest neighbors of each representative. In general, if
at least one of the merged representatives M and nn[M] appeared in the list of some other
representative N in M, then the resulting meta-transaction M' is included among the 5 nearest neighbors
of N. In all the experiments we performed, no list ever became empty, thus effectively
achieving both linear space and time for the maintenance of these lists.
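The sketch below (Python, with our own names and conventions) illustrates this bookkeeping: every representative caches a short list of its most similar peers, the globally best pair is merged, and only the lists that mention the merged representatives are touched, with a full recomputation reserved for lists that run empty. It is a simplified illustration rather than the authors' implementation.

LIST_SIZE = 5  # number of cached nearest neighbors per representative

def build_list(rid, reps, sim):
    # Recompute from scratch the LIST_SIZE peers most similar to representative rid.
    others = [r for r in reps if r != rid]
    others.sort(key=lambda r: sim(reps[rid], reps[r]), reverse=True)
    return others[:LIST_SIZE]

def merge_once(reps, nn, sim, concatenate, next_id):
    # reps: dict id -> meta-transaction; nn: dict id -> list of neighbor ids.
    # 1. Pick the globally best pair using only the heads of the cached lists.
    a = max((r for r in reps if nn[r]), key=lambda r: sim(reps[r], reps[nn[r][0]]))
    b = nn[a][0]
    # 2. Merge a and b into a new representative.
    reps[next_id] = concatenate(reps[a], reps[b])
    del reps[a], reps[b], nn[a], nn[b]
    # 3. Patch the remaining lists: drop a and b; if either occurred, the merged
    #    representative becomes a candidate neighbor; recompute a list that runs empty.
    for r in reps:
        if r == next_id:
            continue
        lst = [x for x in nn[r] if x not in (a, b)]
        if len(lst) < len(nn[r]):
            lst.append(next_id)
            lst.sort(key=lambda x: sim(reps[r], reps[x]), reverse=True)
            lst = lst[:LIST_SIZE]
        if not lst:
            lst = build_list(r, reps, sim)
        nn[r] = lst
    nn[next_id] = build_list(next_id, reps, sim)
    return next_id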
2.2 Reporting item correlations
As discussed in Figure 1, the transactions are assigned to each cluster representative in a
final pass over the database. For an integrated application in which the correlations are
reported as an output of the clustering algorithm, the support counting can be parallelized
with this assignment process in this final pass. While assigning a transaction T to a cluster
representative M, we also update the information on the support sup_C({i, j}) of each 2-itemset {i, j}
with respect to the corresponding cluster C. At the end, we report all pairs
of items whose support is above a user specified threshold s in at least one cluster.
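A sketch of this combined assignment-and-counting pass is given below in Python (names and data layout are ours): while each transaction is routed to its most similar representative, the counts of its item pairs are accumulated for that cluster, and the pairs whose support exceeds s in at least one cluster are reported at the end.

from collections import defaultdict
from itertools import combinations

def final_pass(transactions, reps, sim, s):
    # reps: list of meta-transactions; sim(meta, transaction) -> similarity value.
    pair_counts = [defaultdict(int) for _ in reps]   # per-cluster 2-itemset counts
    cluster_sizes = [0] * len(reps)
    assignment = []
    for t in transactions:
        c = max(range(len(reps)), key=lambda i: sim(reps[i], t))  # closest representative
        assignment.append(c)
        cluster_sizes[c] += 1
        for pair in combinations(sorted(t), 2):
            pair_counts[c][pair] += 1
    localized = set()
    for c, counts in enumerate(pair_counts):
        if cluster_sizes[c] == 0:
            continue
        for pair, cnt in counts.items():
            if cnt / cluster_sizes[c] >= s:          # support relative to cluster c
                localized.add(pair)
    return assignment, localized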
3 Time and Space Complexity
As discussed above, CLASD has space requirements linear in n, because we maintain only a
constant number of neighbors for each cluster. We also need to store the affinity A(i, j) for
each pair of items (i, j). Then, the overall space required by CLASD is linear in n plus
quadratic in the total number of items in the data.
Let Q be the time required to calculate the similarity function. The running time may be
computed by summing the times required for the merge and assignment operations. The
nearest neighbor lists need to be initialized for each of the log_alpha(n/k) sequences of Merge
operations. This requires O(n^2 * Q * log_alpha(n/k)) time. Let n_r denote the total number of
recomputations of nearest neighbor lists over all merges in the entire algorithm. This requires
O(n_r * n * Q) time. Determining the optimum pair to be merged during each iteration takes
O(n) time. Each assignment procedure requires O(n * InitSampleSize * Q) time (the size of
the random sample increases by the same factor by which the number of clusters
decreases). The last pass over the data, which assigns the entire data set rather than a random
sample, requires O(N * k * Q) time. Summing up everything, we obtain an overall running time
of O(n^2 * Q * log_alpha(n/k) + n_r * n * Q + n * InitSampleSize * Q * log_alpha(n/k) + N * k * Q).
Choosing different values for InitSampleSize allows us a tradeoff between the accuracy and
running time of the method. As discussed earlier, the value of n_r turned out to be small in
our runs, and did not contribute significantly. Finally, to report the item correlations, we
spend additional time during the last pass, proportional to the number of item pairs occurring
in the transactions, in order to maintain the per-cluster supports of the 2-itemsets.
Number of disk accesses We access the entire data set at most thrice. During the
first access, we compute the affinity matrix A, choose the random sample M of the initial
seeds, and also choose all independent random samples R (of appropriate sizes) that will be
used during various iterations to refine the clustering. The information on how many such
samples we need, and what their sizes are, can be easily computed if we know n, k, alpha, and
InitSampleSize. Each random sample is stored in a separate file. Since the sizes of these
random samples are in geometrically increasing order, and the largest random sample is at
most 1/alpha times the full database size, accessing these files requires the equivalent of at most
one pass over the data for alpha >= 2. The last pass over the data is used for the final assignment
of all transactions to cluster representatives.
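One way to draw all of these geometrically growing samples during the single first scan is to run one reservoir sampler per future iteration in parallel; the sketch below uses this standard technique as an illustration of the idea, not as the authors' actual implementation.

import random

def draw_all_samples(transaction_stream, init_size, alpha, num_iterations):
    # Maintain one reservoir per iteration while scanning the data once.
    # Sample i has size init_size * alpha**i (geometrically increasing).
    sizes = [int(init_size * alpha ** i) for i in range(num_iterations)]
    reservoirs = [[] for _ in sizes]
    seen = 0
    for t in transaction_stream:
        seen += 1
        for res, size in zip(reservoirs, sizes):
            if len(res) < size:
                res.append(t)
            else:
                j = random.randrange(seen)   # classic reservoir-sampling replacement
                if j < size:
                    res[j] = t
        # affinities between item pairs could also be updated here, in the same pass
    return reservoirs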
4 Empirical Results
The simulations were performed on a 200-MHz IBM RS/6000 computer with 256MB of
memory, running AIX 4.3. The data was stored on a 4.5GB SCSI drive. We report results
obtained for both real and synthetic data. In all cases, we are interested in evaluating the
extent to which our segmentation method helps us discover new correlation among items, as
well as the scalability of our algorithm. We first explain our data generation technique and
then report on our experiments.
We have also implemented the ROCK algorithm (see [16]) and tested it both on the synthetic
and real data sets. In this method, the distance between two transactions is proportional to
the number of common neighbors of the transactions, where two transactions T1 and T2 are
neighbors if |T1 ∩ T2| / |T1 ∪ T2| >= theta. The number of clusters k and the value theta are the main parameters
of the method.
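For reference, a minimal Python sketch of the neighbor and link notions used in this comparison is given below, following the description above; theta and the toy data are supplied by us, and the value of theta used in the experiments is not fixed here.

from itertools import combinations

def are_neighbors(t1, t2, theta):
    # Two transactions are neighbors if |T1 ∩ T2| / |T1 ∪ T2| >= theta.
    return len(t1 & t2) / len(t1 | t2) >= theta

def link_counts(transactions, theta):
    # link(T1, T2) = number of common neighbors of T1 and T2.
    n = len(transactions)
    neighbor_sets = [
        {j for j in range(n)
         if j != i and are_neighbors(transactions[i], transactions[j], theta)}
        for i in range(n)
    ]
    return {(i, j): len(neighbor_sets[i] & neighbor_sets[j])
            for i, j in combinations(range(n), 2)}

data = [{'a', 'b', 'c'}, {'a', 'b', 'd'}, {'c', 'd', 'e'}, {'x', 'y'}]
print(link_counts(data, theta=0.2))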
4.1 Synthetic data generation
The synthetic data sets were generated using a method similar to that discussed in Agrawal
et. al. [4]. Generating the data sets was a two stage process:
(1) Generating maximal potentially large itemsets: The first step was to generate
"potentially large itemsets". These potentially large itemsets capture
the consumer tendencies of buying certain items together. We first picked the size
of a maximal potentially large itemset as a random variable from a poisson distribution
with mean - L . Each successive itemset was generated by picking half of its items from
the current itemset, and generating the other half randomly. This method ensures that
large itemsets often have common items. Each itemset I has a weight w I associated
with it, which is chosen from an exponential distribution with unit mean.
(2) Generating the transaction data: The large itemsets were then used in order to
generate the transaction data. First, the size S_T of a transaction was chosen as a
Poisson random variable with mean mu_T. Each transaction was generated by assigning
maximal potentially large itemsets to it in succession. The itemset to be assigned to a
transaction was chosen by rolling an L-sided weighted die depending upon the weight
w_I assigned to the corresponding itemset I. If an itemset did not fit exactly, it was
assigned to the current transaction half the time, and moved to the next transaction
the rest of the time. In order to capture the fact that customers may not often buy all
the items in a potentially large itemset together, we added some noise to the process
by corrupting some of the added itemsets. For each itemset I, we decide a noise level
n_I in (0, 1). We generated a geometric random variable G with parameter n_I. While
adding a potentially large itemset to a transaction, we dropped min{G, |I|} random
items from the transaction. The noise level n_I for each itemset I was chosen from a
normal distribution with mean 0.5 and variance 0.1.
We shall also briefly describe the symbols that we have used in order to annotate the data.
The three primary factors which vary are the average transaction size mu_T, the size of an average
maximal potentially large itemset mu_L, and the number of transactions being considered.
For example, a data set having mu_T = 10, mu_L = 4, and 100K transactions is denoted by T10.I4.D100K.
The overall dimensionality of the generated data (i.e. the total number of items) was always
set to 1000.
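The sketch below captures the two generation steps in simplified Python form; the use of exponential variates in place of Poisson variates, the handling of itemsets at transaction boundaries, and all names are our simplifications of the description above rather than the exact generator.

import random

def gen_potential_itemsets(L, mu_L, num_items):
    # Step 1: L maximal potentially large itemsets, each with an exponential weight.
    itemsets, weights, prev = [], [], []
    for _ in range(L):
        size = max(1, int(random.expovariate(1.0 / mu_L)))       # stand-in for Poisson(mu_L)
        reuse = random.sample(prev, min(len(prev), size // 2))    # share items with the previous itemset
        fresh = random.sample(range(num_items), max(0, size - len(reuse)))
        cur = list(dict.fromkeys(reuse + fresh))                  # de-duplicate, keep order
        itemsets.append(cur)
        weights.append(random.expovariate(1.0))                   # exponential with unit mean
        prev = cur
    return itemsets, weights

def gen_transactions(num_tx, mu_T, itemsets, weights):
    # Step 2: fill each transaction with noisy copies of potentially large itemsets.
    noise = [min(max(random.gauss(0.5, 0.1), 0.01), 0.99) for _ in itemsets]
    transactions = []
    for _ in range(num_tx):
        target = max(1, int(random.expovariate(1.0 / mu_T)))      # stand-in for Poisson(mu_T)
        t, attempts = set(), 0
        while len(t) < target and attempts < 50 * target:
            attempts += 1
            idx = random.choices(range(len(itemsets)), weights=weights)[0]
            g = 0
            while random.random() < noise[idx]:
                g += 1                                            # geometric corruption count
            t.update(itemsets[idx][: max(0, len(itemsets[idx]) - g)])
        transactions.append(t)
    return transactions

itemsets, weights = gen_potential_itemsets(L=200, mu_L=6, num_items=1000)
data = gen_transactions(num_tx=1000, mu_T=20, itemsets=itemsets, weights=weights)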
4.2 Synthetic data results
For the remainder of this section, we will denote by AI(s) (aggregate itemsets) the set of
2-item-sets whose support relative to the aggregate database is at least s. We will also denote
by CI(s) (cluster partitioned itemsets) the set of 2-item-sets that have support s or more
relative to at least one of the clusters in the set C generated by the CLASD algorithm (i.e., for any
pair of items (i, j) in CI(s), there exists a cluster C_k in C so that the support of (i, j) relative to
C_k is at least s). We shall denote the cardinality of the above sets by AI(s) and CI(s),
respectively. It is easy to see that AI(s) is a subset of CI(s), because any itemset which satisfies the
minimum support requirement with respect to the entire database must also satisfy it with
respect to at least one of the partitions. It follows that AI(s) <= CI(s).
In order to evaluate the insight that we gain on item correlations from using our clustering
method, we have to consider the following issues.
1. The 2-item-sets in CI(s) are required to have smaller support, relative to the entire
data set, than the ones in AI(s), since |C_i| <= N for any cluster C_i in C. Thus, we should
compare our output not only with AI(s), but also with AI(s'), for values s' < s.
2. As mentioned above, AI(s) is a subset of CI(s). Thus, our method always discovers more correlations
among pairs of items, due to the relaxed support requirements. However, we
want to estimate how discriminative the process of finding these correlations is. In
other words, we want to know how much we gain from our clustering method, versus
a random assignment of transactions into k groups.
Figure 7: Comparison between clustered itemsets and aggregate itemsets (number of 2-item-sets as a function of the support s, for the aggregate data itemsets AI(s/k) and AI(s) and the partition itemsets CI(s)).
In all of the experiments reported below, the size of the random set of seeds for the clustering
algorithm was 30k.
In Figure 7, we used a T20.I6.D100K dataset, and the experiments were all run with a fixed value of k.
The value s' was chosen to be s/k. The reason is that, if all k output clusters had the same
size, then the required support for a 2-item-set reported by our clustering algorithm would
decrease by a factor of k. Hence, we want to know what would happen if we didn't do any
clustering and instead simply lowered the support by a factor of k. As expected, the
graph representing CI(s) as a function of s lies above AI(s) and below AI(s'). However, note
that the ratio AI(s')/CI(s) is large for all values of s considered.
This means that clustering helps us prune out between 75% and 82% of the 2-item-sets that
would be reported if we lowered the support. Thus, we get the benefit of discovering new
correlations among items, without being overwhelmed by a large output, which mixes both
useful and irrelevant information.
Next, we tested how the number of 2-item-sets reported at the end of our clustering procedure
compare to the number of 2-item-sets we would obtain if we randomly grouped the
transactions into k groups. We again used the T20.I6.D100K set from above and set
In the first experiment, we assigned the transactions uniformly at random to one of the k
groups. Let RI(s) denote the set of itemsets which have support at least s relative to at
least one of the partitions. The corresponding cardinality is denoted by RI(s). In the second
experiment we incorporated some of the knowledge gained from our clustering method.
More exactly, we created a partition of the data into k groups, so that the size of the ith
group is equal to the size of the ith cluster obtained by our algorithm, and the transactions
are randomly assigned to the groups (subject to the size condition). The corresponding
set is denoted by RI''(s). The results are shown in Figure 8.
Figure 8: Comparison between Clustering and Random Partitions (number of 2-item-sets as a function of the support s, for the partition itemsets CI(s), the cluster-size random partition itemsets RI''(s), and the equal-size random partition itemsets RI(s)).
Since RI(s) <= RI''(s) for
all values of s considered, we restrict our attention to RI''(s). We used the same dataset and observed
that the ratio CI(s)/RI''(s) is at least 2.7 over the range of s considered,
which shows that our clustering method significantly outperforms an
indiscriminate grouping of the data.
Finally, in Figure 9, we show how the number of clusters k influences the number of 2-item-
sets we report, for the T20.I6.D100K set used before. Again, we compare both with AI(s)
(which is constant, in this case) and with AI(s'), where s' = s/k; we use s = 0.0025. The
fact that CI(s) increases with k corresponds to the intuition that grouping into more clusters
implies lowering the support requirement.
The examples above illustrate that the itemsets found are very different depending upon
whether we use localized analysis or aggregate analysis. From the perspective of a target
marketing or customer segmentation application, such correlations may have much greater
value than those obtained by analysis of the full data set.
Figure 9: Comparison between clustered itemsets and aggregate itemsets (number of 2-item-sets as a function of the number of clusters k, for AI(s/k), CI(s), and AI(s)).
The next two figures illustrate how our method scales with the number of clusters k and
the size of the dataset N. In Figure 10 we fix N and vary k, while in Figure 11 we fix
k and vary N. We separately graph the running time spent on finding the cluster representatives
before performing the final partitioning procedure of the entire database into clusters using
these representatives. Clearly, the running time for the final phase requires O(k * N) distance
computations, something which we cannot hope to easily outperform for any algorithm which
attempts to put N points into k clusters. If we can show that the running time of the entire
algorithm before this phase is small compared to this, then our running times are close to
optimal. Our analysis of the previous section shows that this time is quadratic in the size
of the random sample, which in turn is linear in k. On the other hand, the time spent on
the last step of the method to assign all transactions to their respective clusters is linear in
both k and N. As can be noted from Figures 10 and 11, the last step clearly dominates the
computation, and so overall the method scales linearly with k and N.
Comparison with ROCK We ran ROCK on the synthetically generated T20.I6.D100K
dataset used above, setting the parameters k and theta (recall that theta is the minimum overlap
required for two transactions to be neighbors). We wanted to report all 2-item-sets with support
0.0025 in at least one of the generated clusters. ROCK obtained a clustering in which one
big cluster contained 86% of the transactions, a second cluster had 3% of the transactions,
and the sizes of the remaining clusters varied around 1% of the data. This result is not
surprising if we take into account the fact that the overall dimensionality of the data space
is 1000; while the average number of items in a transaction is 20: Thus, the likelihood of two
transactions sharing a significant percentage of items is low. This means that the number
of pairs of neighbors in the database is quite small, which in turn implies that the number
of links between two transactions is even smaller (recall that the number of links between
two transactions is the number of common neighbors).
Figure 10: Running Time Scalability (Number of Clusters). Running time in seconds as a function of the number of clusters k, showing the total time and the time spent on finding the cluster representatives.
Figure 11: Running Time Scalability (Number of Transactions). Running time in seconds as a function of the number of transactions N, showing the total time and the time spent on finding the cluster representatives.
Figure 12: Correspondence between output clusters and original labeling. Each output cluster is listed together with the number of edible and poisonous records it contains; for example, cluster 9 contains 150 edible and 124 poisonous records, whereas cluster 19 contains 232 edible and 0 poisonous records.
Moreover, the algorithm runs on
a random sample and computes the number of links between transactions relative only to
this sample, decreasing even further the likelihood of having a non-zero number of links
between two transactions. But since the distance between two clusters is proportional to
the number of links between all pairs of transactions in the two clusters, the result is that
cluster distances are almost always equal to zero. Hence, the data agglomerates in the first
cluster, because most of the time we discover that this cluster has distance zero to its nearest
neighbor, and thus we do not choose a different candidate for merging.
The quality of the results could be improved either by increasing the size of the random
sample (in the experiment above, the random sample had the same size as the random
sample on which CLASD was run), or by decreasing theta. Increasing the size of the random
sample is a limited option, however, since ROCK has quadratic storage requirements. As for
changing theta, note that for two transactions of size 20 and for the current value of theta, they
are only required to have 4 items in common in order to be considered neighbors. A smaller
value of theta does indeed increase the number of neighbor pairs, but the notion of neighbors itself
tends to lose its meaning, since the required overlap is insignificant. We ran two experiments,
as follows: in the first experiment we doubled the size of the random sample, while in the
second one we used a smaller value of theta. In both cases, the generated clusters achieved more balance in
sizes, yet the number of 2-item-sets reported was below that discovered by CLASD: 28983
in the first experiment, and 38626 in the second experiment, compared to 56861, discovered
by CLASD. Moreover, both methods described above are sensitive to changes in the overall
dimensionality of the data.
We conclude that, due to its implicit assumption that a large number of transactions in a
data set are reasonably well correlated, ROCK does not perform well on data for which these
assumptions do not hold. Our synthetically generated data, which was designed in [4] to
resemble the real market basket data, is an example of such input.
We tested both CLASD and ROCK on the mushroom and adult data sets from the ML
Repository 1 .
4.3 The mushroom data set
The mushroom data set contained a total of 8124 instances. Each entry has 22 categorical
attributes (e.g. cap-shape, odor etc.), and is labeled either "edible" or "poisonous". We
transformed each such record into a transaction in the same manner used by [16]: for each
attribute A and each value v in the domain of A, we introduce the item A.v. A record R is
transformed into a transaction T so that T contains item A.v if and only if R has value v
for attribute A.
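This transformation is straightforward; a small Python sketch with our own function names is shown below.

def record_to_transaction(record):
    # Map a categorical record {attribute: value} to the transaction {"A.v", ...}.
    return {f"{attr}.{value}" for attr, value in record.items()}

mushroom = {"cap-shape": "convex", "odor": "almond", "class": "edible"}
print(record_to_transaction(mushroom))
# {'cap-shape.convex', 'odor.almond', 'class.edible'}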
We ran ROCK with the parameters indicated in [16] and were able
to confirm the results reported by the authors on this data set, with minor differences. The
number of 2-item-sets discovered after clustering was 1897 at the support level we considered.
To test CLASD on this data set, we must take into account the fact that if attribute A
has values in the domain {v, w}, then items A.v and A.w will never appear in the same
transaction. We want to make a distinction between this situation and the situation when
two items do not appear together in any transaction in the database, yet do not exclude
one another. Hence, we define a negative affinity for every pair of items (A.v, A.w)
as above. We present the results in Figure 12. With the exception of cluster number 9,
which draws about half of its records from each of the two classes of mushrooms, the other
clusters clearly belong to either the "edible" or "poisonous" categories, although some have
a small percentage of records from the other category (e.g., clusters 1 and 2). We believe
that the latter is due to the fact that our distance function is a heuristic designed to maximize
the number of 2-itemsets with enough support in each cluster. This may induce the
"absorption" of some poisonous mushrooms into a group of edible mushrooms, for example,
if this increases the support of many pairs of items. The number of 2-item-sets discovered
was 2155, for support level which is superior to that reported by ROCK. We also
computed the number of 2-item-sets using the original labels to group the data into two
clusters ("edible" and "poisonous"), and found 1123 2-itemsets with enough support relative
to at least one such cluster. Hence, our segmentation method proves useful for discovering
interesting correlations between items even when a previous labeling exists.
Aside from this, we also found some interesting correlations which cannot be found by only
looking at the labels. For the support level we could not find a correlation
between convex caps and pink gills, since this pair of characteristics appears together in
750 species of mushrooms, or 9:2% of the data. Such a correlation between this pair is
useful enough, and it should be recognized and reported. We also could not discover this c
orrelation by treating the original labels as two clusters, because there are 406 edible species
with these two characteristics, or 9:6% of the edible entries, and 344 poisonous species, or
8:7% of all poisonous entries. However, our technique creates a structured segmentation in
which the correlation is found.
4.4 The adult data set
We also tested the algorithm on the adult data set in the UCI machine learning repository.
The data set was extracted from the 1994 census data and contained demographic
data about people. This data set had a total of 32562 instances. Among its
attributes, 8 were categorical valued. We first transformed the categorical attributes to binary
data. This resulted in a total of 99 binary attributes. One observation on this data set
was that there were a significant number of attributes in the data for which most records took a
particular value. For example, most records were from the United States, one of the fields
was "Private" a large fraction of the time, and the data largely consisted of whites. There
is not much information in these particular attributes, but the ROCK algorithm often built
linkages based on these attributes and resulted in random assignments of points to clusters.
We applied the CLASD algorithm to the data set and found several interesting
2-itemsets in the local clusters which could not be discovered in the aggregate data set.
For example, we found a few clusters which contained a disproportionate number of females
who were either "Unmarried" or in the status "Not-in-family". (In each of these clusters, the
support of (Female, Unmarried) and (Female, Not-in-family) was above 30%.) This behavior
was not reflected in any of the clusters which had a disproportionately high number of men.
For the case of men, the largest share tended to be "husbands". Thus, there was some asymmetry
between men and women in terms of the localized 2-itemsets which were discovered.
This asymmetry may be a function of how the data was picked from the census; specifically
the data corresponds to behavior of employed people. Our analysis indicates that in some
segments of the population, there are large fractions of employed women who are unmarried
or not in families. The correlation cannot be discovered by the aggregate support model,
since the required support level of 8.1% (which is designed to catch both of the 2-itemsets)
is so low that it results in an extraordinarily high number of meaningless 2-itemsets along
with the useful 2-itemsets. Such meaningless 2-itemsets create a difficulty in distinguishing
the useful information in the data from noise. Another interesting observation was a cluster
in which we found the following 2-itemsets: (Craft-Repair, Male), and (Craft-Repair,
HS-grad). This tended to expose a segment of the data containing males with relatively
limited education who were involved in the job of craft-repair. Similar segmented 2-itemsets
corresponding to males with limited educational level were observed for the professions of
Transport-Moving and Farming-Fishing. Another interesting 2-itemset which we discovered
from one of the segments of the data set was (Doctorate, Prof-specialty). This probably
corresponded to a segment of the population which was highly educated and was involved
in academic activities such as professorships. We also discovered a small cluster which had
the 2-itemset (Amer-Indian-Eskimo, HS-Grad) with a support of 50%. Also, most entries
in this segment corresponded to American-Indian Eskimos, and tended to have educational
level which ranged from 9th grade to high school graduate. This exposes an interesting
demographic segment of the people which shows a particular kind of association. Note that
the absolute support of this association relative to the entire data set is extremely low, since
there are only 311 instances of American Indian Eskimos in the data base of 32562 instances
(less than 1%). The 2-itemsets in most of the above cases had a support which was too low
to be discovered by aggregate analysis, but turned out to be quite dominant and interesting
for particular segments of the population.
5 Conclusions and Summary
In this paper we discussed a new technique for clustering market basket data. This technique
may be used for finding significant localized correlations in the data which cannot be found
from the aggregate data. Our empirical results illustrated that our algorithm was able to
find a significantly larger number of itemsets than a random partition of the transactions does. Such
information may prove to be very useful for target marketing applications. Our algorithm can
also be generalized easily to categorical data. We showed that in such cases, the algorithm
performs better than ROCK in finding localized associations.
--R
"Finding Generalized Projected Clusters in High Dimensional Spaces"
"A new framework for itemset generation."
"Mining association rules between sets of items in very large databases."
Fast Algorithms for Mining Association Rules in Large Databases.
"Automatic Subspace Clustering of High Dimensional Data for Data Mining Applications"
"OPTICS: Ordering Points To Identify the Clustering Structure."
"Mining surprising patterns using temporal description length."
"Beyond Market Baskets: Generalizing association rules to correlations."
"Incremental Clustering for Mining in a Data Warehousing Environment."
"A Database Interface for Clustering in Large Spatial Databases"
"A Density Based Algorithm for Discovering Clusters in Large Spatial Databases with Noise"
"From Data Mining to Knowledge Discovery: An Overview."
"CACTUS: Clustering Categorical Data Using Summaries"
"Clustering Categorical Data: An Approach Based on Dynamical Systems"
"CURE: An Efficient Clustering Algorithm for Large Databases"
"ROCK: a robust clustering algorithm for categorical attributes"
"Clustering based on association rule hypergraphs"
"Snakes and Sandwiches: Optimal Clustering Strategies for a Data Warehouse"
Algorithms for Clustering Data
"Finding Groups in Data- An Introduction to Cluster Anal- ysis."
"Finding Interesting Rules from Large Sets of discovered association rules."
"Efficient and Effective Clustering Methods for Spatial Data Mining"
Introduction to Modern Information Retrieval.
Local Dimensionality Reduction: A New Approach to Indexing High Dimensional Spaces.
"A Distribution-Based Clustering Algorithm for Mining in Large Spatial Databases"
"A Comparative Study of Clustering Methods"
"BIRCH: An Efficient Data Clustering Method for Very Large Databases"
--TR
--CTR
Antonin Rozsypal , Miroslav Kubat, Association mining in time-varying domains, Intelligent Data Analysis, v.9 n.3, p.273-288, May 2005
Hua Yan , Keke Chen , Ling Liu, Efficiently clustering transactional data with weighted coverage density, Proceedings of the 15th ACM international conference on Information and knowledge management, November 06-11, 2006, Arlington, Virginia, USA
Alexandros Nanopoulos , Apostolos N. Papadopoulos , Yannis Manolopoulos, Mining association rules in very large clustered domains, Information Systems, v.32 n.5, p.649-669, July, 2007 | market basket data;clustering;association rules |
628197 | A Comprehensive Analytical Performance Model for Disk Devices under Random Workloads. | AbstractOur goal with this paper is to contribute a common theoretical framework for studying the performance of disk-storage devices. Understanding the performance behavior of these devices will allow prediction of the I/O cost in modern applications. Current disk technologies differ in terms of the fundamental modeling characteristics, which include the magnetic/optical nature, angular and linear velocities, storage capacities, and transfer rates. Angular and linear velocities, storage capacities, and transfer rates are made constant or variable in different existing disk products. Related work in this area has studied Constant Angular Velocity (CAV) magnetic disks and Constant Linear Velocity (CLV) optical disks. In this work, we present a comprehensive analytical model, validated through simulations, for the random retrieval performance of disk devices which takes into account all the above-mentioned fundamental characteristics and includes, as special cases, all the known disk-storage devices. Such an analytical model can be used, for example, in the query optimizer of large traditional databases as well as in an admission controller of multimedia storage servers. Besides the known models for magnetic CAV and optical CLV disks, our unifying model is also reducible to a model for a more recent disk technology, called zoned disks, the retrieval performance of which has not been modeled in detail before. The model can also be used to study the performance retrieval of possible future technologies which combine a number of the above characteristics and in environments containing different types of disks (e.g., magnetic-disk-based secondary storage and optical-disk-based tertiary storage). Using our model, we contribute an analysis of the performance behavior of zoned disks and we compare it against that for the traditional CAV disks, as well as against that of some possible/future technologies. This allows us to gain insights into the fundamental performance trade-offs. | Introduction
The performance modeling of direct-access storage devices is concerned with expressing mathematically
the technology behind them and the way their components cooperate in order to perform
a task and is used to predict the performance of those devices. Such a modeling can be useful in
estimating the expected retrieval cost, in optimal data placement determination, in estimating the
average time spent on specific operations such as disk seek and rotational latency, etc.
In general, analytical as well as simulation-based modeling have been used in the literature.
The benefit of analytical models over simulation is their generality and ease of use, understanding, and
update. The drawback is that analytical models usually have to make simplifying assumptions about
the system which are not necessary when using simulation. In this paper, we contribute to the field
of analytical modeling for disk retrieval performance without having to resort to over-simplifying
assumptions.
1.1 Overview of Available Disk Technologies
In order to model the performance of storage devices, we need to fully understand their tech-
nology. We should be aware of the disk products currently available, the data placement strategies,
details regarding their functionality and features (i.e., the exact behavior of the disk when servicing
various requests etc).
There are two kinds of disk devices: magnetic and optical. The properties that affect mostly
the performance of these devices are the surface format and the operational characteristics of the
device. A magnetic disk drive typically consists of several disk platters, each of which has two
writable/readable surfaces. On the other hand, many optical disks typically have a single platter.
The geometry of a disk drive partitions each platter's surface into either a set of concentric tracks,
or a single spiral track, which are in turn partitioned into a number of sectors. Sectors are the
minimum unit of data that can be read and recorded from/onto a disk. Also, consecutive tracks
from all the platters of the drive are grouped to form cylinders. Information is read and written onto
the platter surfaces using a per-platter-surface read/write head. Each read/write head is attached
to a head-arm mechanical assembly which positions all the read/write heads onto the cylinder and
its tracks which will be accessed. Finally, the disk pack is constantly revolving with the help of a
spindle at a constant or variable velocity.
Older technology of magnetic disks is based on the Constant Angular Velocity (CAV) format.
In this format, data are read or written while keeping constant the angular velocity of the disk.
The tracks of a magnetic CAV disk are concentric and the revolution axis passes through the center
of the tracks and is perpendicular to the disk surface. The sectors become more elongated as we
move away from the revolution axis, resulting in a smaller recording density (bits/inch). Thus, the
outer tracks contain the same number of sectors as the inner ones do. This results in a waste of
storage space, since the potential for additional storage capacity of the longer outer tracks is not
exploited. Therefore, the storage capacity and the time to read a sector (i.e., the transfer rate) remain constant throughout the platter.
Recently, a new technology of magnetic disks emerged, which we call the Zoned CAV (ZCAV)
disk. In this format, the cylinders are divided into successive groups of cylinders, called zones.
Within each zone, the number of sectors per track as well as the transfer rate (i.e., time to read
a sector) is constant. However, a track in a given zone contains more sectors than a track in the neighboring zone closer to the platter center. As a result, since the angular velocity remains
constant, the transfer rate of the outer zones is higher than the transfer rate of the inner ones. This
happens because outer tracks observe higher linear velocities meaning that more sectors per unit
of time pass beneath the disk head in the case of the outer zones.
For optical disks, besides the CAV recording format, another recording format is popular. In
this format the recording density (bits/inch) remains constant throughout the disk platter. This
format is a Constant Linear Velocity (CLV) format and is used in CD-ROM devices. In CLV
disks, the constant linear velocity results in a constant transfer rate. This is achieved by adjusting
their angular velocity, decreasing it as we move to outer tracks. This adjustment impacts on the
speed of access, making these disks generally slower than the CAV ones. Some optical disks have
a single spiral track. However, even in this case, we can still define tracks. This can be done by
considering a radial line of the disk. A "track" then lies between two successive intersections of the spiral track with the radial line. The set of successive tracks which all have the same capacity (in number of sectors) can be viewed as forming a zone, similarly to ZCAV disks.
The ZCAV recording format tries to combine the benefits of CAV and CLV formats and form
disks with larger storage capacities and lower access times. There are many other possibilities for
disk products, each resulting from combinations of the above mentioned fundamental characteris-
tics. For example, there can be CLV magnetic disks, optical disks with no spiral tracks, disks which
are partitioned into CAV and CLV partitions, etc. Most of these possible technologies currently
remain in the sphere of theory. The following table summarizes some of the existing disk-device
technologies in terms of their fundamental characteristics presented earlier.
Technology | Type     | Constant                                           | Variable
CAV        | magnetic | angular velocity, storage capacity, transfer rate  | linear velocity, sector length
CLV        | optical  | linear velocity, transfer rate, sector length      | angular velocity, storage capacity
ZCAV       | magnetic | angular velocity                                   | linear velocity, storage capacity, sector length, transfer rate
1.2 Target Applications
The performance of the I/O subsystem is one of the most critical factors that will determine
the success of systems which are built to support the demanding workload of emerging applications.
We mention two "champion" applications that can benefit from the comprehensive model proposed
in this paper. The first application is the query optimizer of large databases of traditional data.
In these environments, storage hierarchies of secondary storage devices (magnetic disks) and near-line
tertiary storage devices (such as jukeboxes of optical disks) are typically employed, given the
great storage space requirements. A small portion of the data is kept on-line, i.e., in the secondary
storage devices which are mainly composed by magnetic disks, and the rest of them are stored
in the tertiary storage layer. When a user query is submitted to the system (often concurrently
with others), it is decomposed into a number of requests for blocks that need to be retrieved from
disk (secondary and/or tertiary). Typically, these blocks are randomly distributed over the various
storage devices ([6, 7]). Some of the requested blocks may already reside in primary memory. For
the rest of the cache-missed blocks, the query optimizer needs to know the expected retrieval cost. Such information will then be used to compute the best access plan for serving the query, which typically is the one that minimizes the overall I/O cost. Our model estimates the
expected I/O cost for retrieving a number of randomly distributed blocks from the disk surface,
and, therefore, it can be used by such query optimizers.
A second application involves the admission controller component of multimedia database
systems. Typically, in a multimedia (e.g., video) database server, several video blocks, one per each
video display, are being retrieved in parallel from the disks ([10, 15]). The admission controller is
unaware of low-level details, such as exact position of video blocks on the disk surface. Typically,
video streams are striped over the secondary storage disks, using a coarse-grained striping technique,
such as that suggested in [22, 21]. With such striping, a video's blocks are stored in round-robin
fashion at consecutive disks, distributing the video to all disks. With respect to the workload
received by a single disk, this striping has a randomizing effect. In addition, VCR-type user interactions also tend to create random workloads for each disk. Therefore, the distribution of requests to disk blocks appears to be random. Moreover, an admission controller should make a quick decision on whether a new request will be accepted by the system or not. One way of making such a decision is the probabilistic approach, which tries to predict the
available I/O bandwidth of the system based on stochastic models ([13]). The model presented in
this paper can be used as the basis of such a probabilistic admission controller when the disk access
pattern is assumed to be random.
The balance of the paper is organized as follows. In section 2, we present a brief overview of
related work in the modeling of the performance of retrieval operations of disk devices. In section
3, we present our comprehensive model, which combines all the essential features of current disk
technologies and characterizes the performance of the disk operations. In section 4, we derive the
formulas for calculating the expected cost of operations in the comprehensive disk technology model
derived earlier. In section 5, we show how our unifying model can be reduced to models for the
existing disk technologies and for possible future technologies. Section 6, first presents the validation
of the analytical results derived in the previous sections by comparing them with simulation-based
results. Subsequently, we show some performance results concentrating on the performance of
zoned magnetic CAV disks (a popular recent disk device technology, the performance of which has
not been analytically modeled before). In addition, this section also compares the performance
of ZCAV disks against CAV disks and some theoretical disks, such as CLV magnetic disks and
CLV/CAV magnetic disks. Finally, section 7 contains concluding remarks.
Related Work
In the literature, two kinds of disks have been studied extensively: magnetic CAV and optical CLV disks. In the following, a widely used notion will be that of the qualifying sector. We say that a sector qualifies if and only if that sector has been specifically asked to be retrieved by a user request.
2.1 CAV Magnetic Disks
For CAV magnetic disks, the researchers' attention has been focused on modeling the performance
of the fundamental disk operations and estimating the seek cost and rotational delay.
The seek operation (during which the disk head moves from the current track to the target track) depends on the distance d traveled by the head. The seek operation mainly consists of three phases, namely the acceleration, the linear, and the settle phases. In the acceleration phase, the head speeds up until it reaches a certain threshold velocity. In the linear phase, the head travels with this constant velocity. After the linear phase, the head slows down and settles on the destination track; the time spent in this final phase is called the settle cost. If the distance to be traveled is short enough, then there is no linear phase.
The seek cost has in the past been modeled as a square-root function or as a linear function of the distance ([24, 25]), or as a bi-linear equation (one line for each type of seek, short or long). Recently, the best modeling is based on a combination of the above: a square-root function of the distance d when the seek is short enough (d < Q), and a linear function of d when the seek is long enough (d ≥ Q) ([16, 26]). Experimental studies have also determined the value of Q for different disk products. For example, for the HP 97560 and HP C2200A models, Q = 383 and Q = 686, respectively. Notice that if the distance of the destination cylinder is such that d < Q, then there is no linear phase. So, the cost for a single seek access at distance d is:

Cost_seek(d) = a_1 · √d + b_1,   if 0 < d < Q
Cost_seek(d) = a_2 · d + b_2,    if d ≥ Q
Seeks at distance less than Q are termed short seeks and seeks at distance at least Q are termed long seeks. The a_1, a_2, b_1, b_2 and Q parameters are dependent on the characteristics of each
device. These parameters differ from product to product and can be very accurately estimated
experimentally.
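As an illustration, the two-branch model can be coded directly. The sketch below (in Python) is not part of the original analysis; its default parameter values are the HP 97560-like values used in the experiments of section 6 and are only illustrative of one specific drive.

import math

def seek_cost(d, a1=0.4, b1=3.24, a2=0.008, b2=8.0, q=383):
    # Two-branch seek model: a1*sqrt(d) + b1 for short seeks (d < q),
    # a2*d + b2 for long seeks (d >= q); d is the seek distance in cylinders.
    if d <= 0:
        return 0.0
    if d < q:
        return a1 * math.sqrt(d) + b1
    return a2 * d + b2

print(seek_cost(100))   # short seek, ~7.24 ms
print(seek_cost(1000))  # long seek,  ~16.0 ms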
After reaching the target track and paying a head settle cost, we must wait for the first qualifying sector of this track to pass beneath the head. The duration of this operation is called the rotational delay. The rotational delay has been modeled as one half of a full platter revolution because, on average, the targeted sector will be lying one half of a track away after the seek operation. Therefore, the rotational delay of a single access is given by

Delay_rot = (1/2) · s_t · h     (1)

where s_t is the track capacity (in number of sectors) and h is the read sector time.
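As a quick numerical illustration of equation (1) (not part of the original analysis): for a drive spinning at 7200 r.p.m., a full revolution takes about 8.33 ms, so the expected rotational delay of a single access is roughly 4.17 ms, regardless of the track capacity.

def avg_rotational_delay(s_t, h):
    # Equation (1): half a revolution on average, i.e. (s_t * h) / 2.
    return 0.5 * s_t * h

full_rev_ms = 60000.0 / 7200                         # ~8.33 ms per revolution
print(avg_rotational_delay(28, full_rev_ms / 28))    # ~4.17 ms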
2.2 CLV Optical Disks
The seek cost and the rotational delay models for the case of the CLV optical disks are the
same as the ones above 1 . There is also a qualitative difference, namely that unlike magnetic disks,
the (expensive) seek operation is not always necessary. In optical disk drives, the target track can
sometimes be accessed by simply diverting the laser beam directly on it. This diversion can be
done by tilting the objective lens. The time consumed for this operation is practically negligible.
Unfortunately, there is a maximum angle that the optic assembly can be diverted. Thus, there is a
maximum number of tracks that can be reached without a seek operation. These tracks occupy a
portion of the disk platter, on the left or on the right of the current head position. This portion is
called the proximal window. In the remainder of this section, we survey the analysis for proximal
window accesses as found in ([3, 4, 5, 8]).
1 However, there is a significant quantitative difference; although the models are the same, CLV optical disks have
smaller average angular velocities, which result in worse transfer and rotational delays, and more expensive seek
operations.
Because the storage densities of optical disks are very high, the laser diversion might miss the target track. This is why the drive must be resynchronized and the identity of the landing track verified. This resynchronization cost is denoted by SYNC and is measured in terms of the number of sectors which have passed under the head during the SYNC operation. Usually, the position is verified by scanning a single sector.
As soon as the laser beam finds the target track, the drive waits for the target sector to pass
beneath the laser focus spot. After reaching the qualifying sector, data transfer can begin. The
head starts moving (seeking) towards the target track as soon as the laser finds the target track.
In this way, the drive does not have to wait until the head is positioned to read a sector; instead
the laser finds the track, data transfer begins and the head moves in parallel to the data transfer.
Another feature that complicates accesses within the proximal window is the command processing
time. This is the time it takes for the drive to examine the next retrieval request. During
this time the disk platter keeps rotating, so the disk head has scanned a number of sectors before
the processing is over. Thus, since optical CLV disks have a single spiral track, the longer the command processing time is, the further along the spiral the head will have moved by the time the processing ends. Generally, the command processing time depends on the distance of the target sector.
The proximal window is divided into three regions. The regions are termed backward-access,
middle and forward-access (figure 1). Within each region the command processing time is different.
In the middle region, the command processing time is optimized (it is the minimum of the command
processing times of all regions). The limit sector between the middle and the forward-access region
is denoted by LIMIT_F, while the limit sector between the middle and the backward-access region is denoted by LIMIT_B. We assign index 0 to the sector just scanned at the moment the request arrived. The sectors lying to the left (right) of sector 0 are indexed with -1, -2 (+1, +2), etc. (figure 1).
Figure 1: Proximal window access regions.
The middle access region is also termed the jump back region. This is because the LIMIT F
sector is calculated based on the command processing time. After the command processing time,
the head will be lying at the starting track of the forward access region. Therefore, the access to
any target track in the middle access region, is made by diverting the laser focus one track back
each time. If it is determined that the track on which the laser landed is not the target one, then
the laser diverts once more jumping back one track and paying a synchronization and a verification
cost for a second time. If the target track is not in the middle region and is missed, then the laser
does not jump back, but diverts once more. In the following, it is assumed that laser diversions out of the middle region are accurate, so the resynchronization and verification cost is paid only once.
As stated previously, the number of sectors scanned during the command processing time depends on the offset index I of the next qualifying sector (expressed as the number of sectors that lie between the source and the target sectors) and is proportional to the command processing time. Let CS_F, CS_B and CS_M be the numbers of sectors scanned by the disk head during the command processing time when the next target sector lies in the forward-access, the backward-access and the middle region, respectively (CS_M is the smallest of the three, since the command processing time is minimized in the middle region).
Figure 2: Sectors scanned during command processing.
It can be shown that the proximal window cost when accessing a sector indexed by I is given by the following formula ([8]):

Cost_proxWin(I) = <# of sectors scanned until sector I is reached, starting from position CS_F + SYNC + 1> · h_i     (2)

where h_i is the time needed for a sector to be scanned in the i-th zone, C_i is the track capacity in the i-th zone, W is the proximal window size in number of tracks, and JumpBacks(I) is the number of jump backs in the case of accessing the middle region (each jump back incurs an additional synchronization and verification cost).
2.3 Motivations and Goals
We can see from the above overview of the related work in the area of disk-device performance
modeling that only a few possible technologies have been modeled. There are existing disk
technologies which are not covered by the above modeling efforts. As an example, we mention the
zoned magnetic, ZCAV, disks. These disks have dominated, due to their higher storage capacities
and transfer rates ([1, 9, 12, 13, 14, 19]). Another example, also recently produced, is the Hitachi 12-16X CD-ROM drive [2] (which behaves as a CAV optical disk when reading from the innermost half of the platter with a speed varying from 8X to 16X, and as a CLV disk when reading from the outermost part with a speed of 16X).
Thus, understanding the performance behavior of disk-based devices is important for our "champion" applications (see section 1.2). Furthermore, even if models for all currently available technologies were developed, new models would be needed for future products. This constituted the
driving force behind the work described in this paper. Therefore, we have developed a comprehensive
performance model for disk device technologies. Our comprehensive model can be reduced to
known models and to models for previously unmodeled existing disk technologies. In addition, because
our model is based on the fundamental features of disk technology, it can be used to model the
performance of future/possible disk technologies. This may enable manufacturers to measure the
performance of "theoretical" technologies and compare them against existing technologies before
their production.
Our unifying performance model accounts for the fundamental characteristics of disk-storage
device technologies, namely the optical/magnetic nature and their constant or variable angular
velocities, transfer rates and storage capacities. Thus, it constitutes a common framework for
analyzing the performance of all disk-storage technologies characterized by any combination of the
above features.
Moreover, one can observe that important factors in determining the disk access costs, either
have not been modeled at all, or have not been modeled in detail in related work. As an example of
the former, we mention the significant cost for head switching, which occurs when a different platter
surface of the same cylinder must be accessed. Although it has been found that head switching
contributes significantly to disk access costs ([16]), it has not been modeled analytically. As an
example of the latter we mention the simplistic modeling of rotational delays, as presented above.
Currently, disk controllers can operate on a batch of requests for sectors of the same cylinder (a
feature that is termed command queuing). This can have a beneficial impact in minimizing the
rotational delay, since the disk controller can retrieve the blocks in the optimal order given the
current position of the head. Thus, modeling the above features can help improve the performance
predictions based on our model considerably.
3 A Comprehensive Model
The device parameters that will be used in order to develop the comprehensive device model
are depicted in the following table:
number of platters | D
number of recordable surfaces per platter | S
number of cylinders | L
maximum number of cylinders in short seeks | Q
number of tracks | T
number of sectors | M
number of zones | Z
number of cylinders of zone Z_i | ||Z_i||
number of sectors per track (track capacity) of zone Z_i | C_i
time to read a sector of zone Z_i | h_i
block size (in number of sectors) | (set to 1 below)
proximal window size | W
settle constant | constant_settle
head switch constant | constant_headSwitch
The following equations generally hold:

L = Σ_{i=1}^{Z} ||Z_i||,    T = Σ_{i=1}^{Z} ||Z_i|| · S · D = L · S · D,    M = Σ_{i=1}^{Z} ||Z_i|| · S · D · C_i
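The sketch below (the names are ours, chosen only for illustration) gathers these parameters into a simple container and derives the totals L, T and M from the per-zone quantities.

from dataclasses import dataclass
from typing import List

@dataclass
class DiskModel:
    platters: int                  # D
    surfaces_per_platter: int      # S
    zone_cylinders: List[int]      # ||Z_i||
    zone_capacity: List[int]       # C_i, sectors per track in zone i
    zone_sector_time: List[float]  # h_i, ms to read one sector in zone i
    q: int                         # short/long seek threshold (cylinders)
    window: int = 0                # W, proximal window size in tracks (0 for magnetic disks)
    settle_ms: float = 2.0         # constant_settle
    head_switch_ms: float = 0.5    # constant_headSwitch

    @property
    def cylinders(self):           # L = sum_i ||Z_i||
        return sum(self.zone_cylinders)

    @property
    def tracks(self):              # T = L * S * D
        return self.cylinders * self.surfaces_per_platter * self.platters

    @property
    def sectors(self):             # M = sum_i ||Z_i|| * S * D * C_i
        sd = self.surfaces_per_platter * self.platters
        return sum(zc * sd * c for zc, c in zip(self.zone_cylinders, self.zone_capacity))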
Our unifying device model will account for all the features of magnetic and optical disks. It will
incorporate all the functionality which has been found to significantly influence the performance of
disks such as the seek cost, the rotational delay, the transfer cost, the settle cost, the head switch
cost and the proximal window cost.
In our comprehensive model we make the following conventions:
1. The time a CLV drive needs to adjust its rotational speed is encapsulated in the seek cost
component.
2. We believe that a cache cost analysis is beyond the scope of this paper, in which we mainly focus on the data not found in the cache memory, i.e., cache misses. The interested reader can find an analytical and validated cache model in [11, 17]. Also, we stress that for the random workloads typically found in our "champion" applications, caching would introduce only marginal benefits. In [17], all the crucial performance parameters, such as the hit ratio, are derived and validated.
3. We selected a block size equal to 1. The extension of the model to an arbitrary block size is straightforward, but it would make our equations much more complex and harder to read. On the other hand, a block size of 1 sector preserves the essence of our disk-technology modeling.
4. We selected the SCAN policy to schedule the disk arm, because this is a very widely used
(and well studied) policy.
3.1 Proximal Window Cost for Concentric Tracks
The proximal window cost in our comprehensive model is two-branched. The first branch
corresponds to those disks having a single spiral track and the second to those disks having a set
of concentric tracks. In the former case, the proximal window cost is given by equation (2). In the
latter case, we have to develop a new cost expression. The reason is that, in the single spiral track
model, after the command processing time, the head might be a few "tracks" away from the source
track. In the concentric track model, however, the head will always be lying on the departure track.
Recall that the proximal window cost is dependent only on the number of sectors which must
pass underneath the disk's head, after the laser beam has been diverted to the target track. (This
is so since the laser beam diversion has a negligible cost). Thus, in essence, the proximal window
cost is a "rotational delay" type cost. To compute the proximal window cost we first develop a
sector indexing which will allow us to determine the aforementioned rotational delay. Again we
index with a 0 the sector just scanned by the head when the next retrieval command arrived. The
departure track, as well as the tracks closer to the disk edge, are termed forward tracks, while the
rest are termed backward tracks. We draw a radial line to the left of sector 0, which can be used
to define the desired sector indexing. The first sector of the k-th forward track is on the right of the radial line, indexed by k · C_i, and the last sector is on the left of the radial line, indexed by (k + 1) · C_i − 1. Similarly, the first sector of the k-th backward track is on the right of the radial line, indexed by −k · C_i, and the last sector is on the left of the radial line, indexed by −((k + 1) · C_i − 1). The indexing scheme is illustrated in figure 3.
Figure 3: Index scheme for concentric tracks.
Suppose that the target sector has an index I (as modeled above) and is in some forward track of the i-th zone. The number of sectors scanned during the command processing time and during the synchronization and verification procedure is (CS_F + SYNC + 1). After the diversion (and synchronization/verification), we must wait until the target sector, which is lying on the landing track, is brought beneath the head. Given that we are focusing on the i-th zone, the relative position of the qualifying sector within its track is I_rel(I, C_i), while the relative position of the head (after synchronization and verification) is H_rel(CS + SYNC + 1, C_i). Therefore, for forward accesses, the number of sectors that need to be scanned before reaching the target sector is the distance, along the direction of rotation, from H_rel to I_rel(I, C_i). The number of sectors scanned in the case of a backward access is derived from the previous expression by considering the absolute value of the index of the target sector, i.e., I_rel(|I|, C_i). Let CS be equal to CS_F in the case of forward accesses and equal to CS_B in the case of backward accesses.
Finally, the proximal window cost for the concentric track model is obtained, as in equation (2), by charging the read-sector time h_i of the zone for each sector that must be scanned. As an example, suppose that a forward access is about to be performed; then the number of sectors given by the forward-access expression above must be scanned before the target sector is reached.
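The following sketch (ours, not taken from [8]) illustrates one plausible way to compute the number of sectors scanned for a forward access on concentric tracks, under the assumption that the relative positions are obtained by reducing the sector indices modulo the track capacity C_i.

def forward_scan_count(target_index, cs_f, sync, c_i):
    # Assumption: relative positions are taken modulo the track capacity C_i.
    target_rel = target_index % c_i         # I_rel
    head_rel = (cs_f + sync + 1) % c_i      # head position after processing + sync + verification
    return (target_rel - head_rel) % c_i    # circular distance to the target sector

print(forward_scan_count(target_index=75, cs_f=10, sync=3, c_i=32))  # e.g. 29 sectors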
3.2 Seek Cost
A unifying seek cost function has the form shown in the graphs of figure 4.

Figure 4: Seek cost as a function of seek distance (short and long seeks).

Notice that, in the general case, seeks at distance less than the proximal window size are never performed, since a proximal window access is invoked instead. So, the cost for a single seek access at distance d is:

Cost_seek(d) = a_1 · √d + b_1,   if W ≤ d < Q
Cost_seek(d) = a_2 · d + b_2,    if d ≥ Q

Note that we set W = 0 in the case of magnetic disks 2 .
3.3 Settle Cost
When the head reaches the destination cylinder, a procedure is triggered in order to verify the
current cylinder and correct any head mispositioning. The time paid for this settling operation is
called the settle cost. In our model, the settle cost is constant:
Cost_settle = constant_settle
Typical values of the settle cost range from 1 ms to 3 ms.
3.4 Rotational Delay
Normally, the rotational delay depends on the rotational speed and on the number of the
sectors that must be passed and is a fraction of the time of a full platter revolution. Since the
rotational speed depends on the zone index, the rotational delay for scanning s sectors in i th zone
before the first qualifying sector is brought beneath the head, is

Delay_rot(s, i) = s · h_i     (3)
3.5 Transfer Cost
The transfer of data can begin at the moment the first sector of the block has been brought
beneath the head. The transfer cost is determined by the time to read a sector and is proportional to the block size for the underlying zone Z_i:

Cost_transfer(i) = <block size> · h_i     (4)

2 For CLV disks, the rotational speed adjustment time is enclosed in the seek cost.
3.6 Head Switch Cost
Magnetic disks are composed of a number of vertically-aligned platters forming cylinders.
Usually, the platters are two sided, meaning that data can be read and written on both platter
surfaces. For each readable/recordable surface there is a single head for reading and writing this surface. All heads are moved together and at any time all reside in a single cylinder only. One
head is active each time sending/receiving data to/from the disk's read/write channel. When data
have been accessed from a platter surface and some other data are next to be accessed on another
surface, a head switch must take place. Due to the fact that the tracks of modern disks are almost-
concentric (as opposed to exactly-concentric tracks) the newly activated head may not be above
the track of the cylinder. In this case, a repositioning cost is paid. The time for a head switch is
termed head switch cost and is considered constant:
Cost headSwitch = constant headSwitch
Typical values of the head switch cost constant range from 0.5 ms to 1.5 ms.
4 Average Cost Analysis
In this section, we will model the performance of retrieving N cache-missed sectors. These
sectors are randomly distributed on the disk and are termed qualifying sectors. The tracks/cylinders
that contain at least one qualifying sector are termed qualifying tracks/cylinders. Using the above,
we will derive analytical models for the average retrieval cost of the N sectors.
The average retrieval cost of the N cache-missed sectors randomly distributed on the disk, is
the sum of the average costs previously described. Therefore, the total expected retrieval cost, is:
Cost_disk(N) = Cost_proxWin(N) + Cost_seek(N) + Cost_settle(N) + Delay_rot(N) + Cost_transfer(N) + Cost_headSwitch(N)
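In code, the total expected cost is just the sum of the component estimates; the trivial sketch below assumes each component has already been computed with the formulas of the following subsections.

def total_disk_cost(prox_win, seek, settle, rotational, transfer, head_switch):
    # Cost_disk(N) = Cost_proxWin(N) + Cost_seek(N) + Cost_settle(N)
    #              + Delay_rot(N) + Cost_transfer(N) + Cost_headSwitch(N)
    return prox_win + seek + settle + rotational + transfer + head_switch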
4.1 Average Seek Cost
To estimate the average seek cost of the N sectors, we determine the number of seeks at each
possible distance and multiply it with the seek cost for that distance. Thus, we get:
Cost_seek(N) = Σ_j <expected # of seeks at distance j> · <seek cost at distance j>
             = Q_c · Σ_j <probability of a seek at distance j> · <seek cost at distance j>
where Q_c is the expected number of qualifying cylinders.
The expected number of seeks at distance j is equal to the expected number of qualifying
cylinders times the probability that a seek at distance j will occur. The expected number of qualifying
cylinders is equal to the total number of cylinders times the probability that a cylinder will
qualify 3 .
In turn, the probability that a cylinder qualifies depends on the zone to which the cylinder belongs, since larger cylinders have a higher access probability. Thus,

Q_c(M, N, C, Z, S, D) = Σ_{i=1}^{Z} ||Z_i|| · (1 − Prob(Y does not qualify | Y ∈ Z_i))     (5)

where

Prob(Y does not qualify | Y ∈ Z_i) = <# ways we can pick the N sectors from the M − S·D·C_i sectors outside cylinder Y> / <# ways we can pick N sectors from M ones> = (M − S·D·C_i choose N) / (M choose N)

and ||Z_i|| is the number of cylinders contained in the i-th zone.
The probability of a seek at distance j depends on the number of qualifying cylinders Q_c and the total number of cylinders L:

P_gap(j; L, Q_c) = <# ways we can pick a pair of successive qualifying cylinders with j cylinders between them> / <# ways we can pick a pair of successive qualifying cylinders>

Consider a pair of successive qualifying cylinders with a gap of j cylinders between them. This pair can be found in (L − j − 1) different positions. Given this pair, there are (L − j − 2 choose Q_c − 2) combinations of the remaining Q_c − 2 qualifying cylinders. Therefore, the number of possible pairs of successive qualifying cylinders with distance j is (L − j − 1) · (L − j − 2 choose Q_c − 2). On the other hand, the number of ways we can choose the Q_c cylinders is (L choose Q_c), and for each of these we can pick a pair of successive qualifying cylinders in (Q_c − 1) ways. Thus, the number of pairs of successive qualifying cylinders is (L choose Q_c) · (Q_c − 1). Finally, the probability P_gap(j; L, Q_c) takes the form:

P_gap(j; L, Q_c) = (L − j − 1) · (L − j − 2 choose Q_c − 2) / ( (Q_c − 1) · (L choose Q_c) )     (6)
3 If {E_1, E_2, . . . , E_n} is a sampling space with n mutually exclusive events E_i, then the occurrence probability of an event A is P(A) = Σ_{i=1}^{n} P(A | E_i) · P(E_i).
The average seek cost is

Cost_seek(N) = Q_c(M, N, C, Z, S, D) · Σ_j P_gap(j; L, Q_c) · Cost_seek(j)
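The sketch below implements equations (5) and (6) and the resulting average seek-cost estimate using Python's math.comb; the usage example plugs in the ZCAV layout of section 6 together with the seek-cost function of section 2.1 and is only illustrative (it is not optimized, and, following the text, the gap j is used directly as the seek distance).

import math

def expected_qualifying_cylinders(n, zone_cylinders, zone_capacity, s, d, m):
    # Equation (5): Q_c = sum_i ||Z_i|| * (1 - (M - S*D*C_i choose N) / (M choose N))
    denom = math.comb(m, n)
    q_c = 0.0
    for zc, c_i in zip(zone_cylinders, zone_capacity):
        sectors_per_cyl = s * d * c_i
        p_not_qualify = math.comb(m - sectors_per_cyl, n) / denom
        q_c += zc * (1.0 - p_not_qualify)
    return q_c

def p_gap(j, l, q_c):
    # Equation (6): probability that two successive qualifying cylinders have
    # exactly j cylinders between them; q_c is rounded to an integer here.
    q_c = max(2, round(q_c))
    if j < 0 or j > l - 2:
        return 0.0
    return (l - j - 1) * math.comb(l - j - 2, q_c - 2) / ((q_c - 1) * math.comb(l, q_c))

def expected_seek_cost(l, q_c, seek_cost):
    # Cost_seek(N) ~= Q_c * sum_j P_gap(j; L, Q_c) * Cost_seek(j)
    # (j = 0, i.e. adjacent qualifying cylinders, is skipped for simplicity)
    return q_c * sum(p_gap(j, l, q_c) * seek_cost(j) for j in range(1, l - 1))

# Illustrative usage with the ZCAV layout of section 6 (S = 1, D = 13):
zone_cyl = [252, 268, 318, 138, 144, 136, 192, 533]
zone_cap = [28, 32, 36, 40, 42, 44, 46, 48]
S, D = 1, 13
M = sum(zc * S * D * c for zc, c in zip(zone_cyl, zone_cap))
L = sum(zone_cyl)
QC = expected_qualifying_cylinders(1000, zone_cyl, zone_cap, S, D, M)
print(QC, expected_seek_cost(L, QC,
      lambda dist: 0.4 * math.sqrt(dist) + 3.24 if dist < 383 else 0.008 * dist + 8.0))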
4.2 Average Proximal Window Cost
The proximal window cost is dependent upon the direction of the head travel. For this reason
we will develop two expressions for the average proximal window cost; one for forward accesses and
one for backward accesses.
The average cost for a single forward access in the proximal window, given that the window
is lying in zone Z i , is the sum, over every position in the window, of the product of the access
probability at a position of the window and the cost for that access:

<average cost of a single forward proximal window access in Z_i> = Σ_j P_gap(j; M, N) · Cost_proxWin(j)

Note that for every j ≥ W · C_i, the proximal window access cost is zero, because a seek access is invoked instead. P_gap(j; M, N) is the probability that j non-qualifying sectors are lying between two successive qualifying sectors (equation (6)). Although the P_gap expression was developed to estimate the probability that two successive qualifying cylinders are lying at a certain distance, the
same expression holds for the probability that two successive qualifying sectors are lying at a certain
distance. Also note that when the access is made at sector j then the gap between the current
and the next sector to be retrieved is j. The total average proximal window cost is:
Cost_proxWin_forwards(N) = Σ_{i=1}^{Z} <expected # of accesses in Z_i> · <access cost for Z_i>

The expected number of proximal window accesses in zone Z_i is the product of the probability that a proximal window access will occur and the expected number of the sectors accessed in zone Z_i. For the former, note that only Q_c(M, N, C, Z, S, D) of the N accesses involve a seek (equation (5)), the rest being proximal window accesses. The latter is proportional to the number of tracks of zone Z_i and equals <# of sectors of Z_i> · Prob(<a sector qualifies>). So, the expression for the expected proximal window cost for the forward accesses combines the above sum with the per-access cost Σ_j P_gap(j; M, N) · Cost_proxWin(j) developed earlier.
The expression for backward accesses is almost identical with the one just developed. The
difference stems from the fact that sectors are read forwards and the retrieval is made backwards.
So, after reading one sector forwards, the head has to travel backwards and to bypass the sector
just read, the gap j, and finally the sector to be retrieved next (figure 5).
Figure 5: Backward access in the proximal window.
The total average cost for backward retrievals is:

Cost_proxWin_backwards(N) = Σ_{i=1}^{Z} <expected # of accesses in Z_i> · <access cost for Z_i>

where the per-access cost now uses Cost_proxWin(−(j + 2)), since the head must bypass the sector it has just read, the gap of j sectors, and the sector to be retrieved next.
4.3 Average Settle Cost
The settle cost is paid every time a seek operation is performed. Thus, to calculate the average
settle time, we only need to know the expected number of qualifying cylinders (equation (5)). So, the total expected settle cost for all N sectors, paid once for each of the Q_c(M, N, C, Z, S, D) qualifying cylinders, is:

Cost_settle(N) = Q_c(M, N, C, Z, S, D) · constant_settle
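Given Q_c, the expected settle cost is a one-liner; the 2 ms default below is only one point in the typical 1-3 ms range quoted earlier.

def expected_settle_cost(q_c, settle_const_ms=2.0):
    # Cost_settle(N) = Q_c * constant_settle  (one settle per seek)
    return q_c * settle_const_ms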
4.4 Average Rotational Delay
With current technology (e.g., SCSI), controllers can operate on a queue of requests. Thus,
requests for the same track can be queued in the controller and it is up to the controller to determine
the best way of retrieving the appropriate sectors. This way, detailed information about the disk's
rotation can be used in order to reduce the rotational delay. One way to achieve this goal is for
the controller to arrange the requests in an order consistent with the order with which the desired
sectors will pass underneath the head.
In the case of magnetic disks consisting of a set of concentric tracks, the total rotational delay
paid for all sectors on the same track is the sum of the delay to reach the first sector plus the delays
to bypass the sectors between the qualifying sectors. Considering the expected positions of the j
qualifying sectors of a track of the i th zone, the disk head could be lying anywhere at the end of the
seek operation. The expected position of the head is the middle of the beginnings of two successive
qualifying sectors. Because the number of the sectors between the beginnings of two successive
qualifying sectors is C_i / j, the expected number of sectors scanned before reaching the first sector that qualifies is, on average, C_i / (2j) (figure 6b). The expected positions of the j qualifying sectors on the same track are such that the distance, in number of sectors, between any two sectors that qualify is the same. This means that if the number of sectors per track is C_i, then the above intersector distance is (C_i − j) / j (figure 6a) 4 .

Figure 6: Expected sector and head positions: (a) intersector distance; (b) expected head position.

Summing up, the number of sectors bypassed, for a single track of the i-th zone which has j qualifying sectors randomly distributed over it, is

<reach cost for 1st sector> + <reach cost for 2nd sector> + · · · + <reach cost for j-th sector> = C_i / (2j) + (j − 1) · (C_i − j) / j
For the case of optical disks, we assume that the first qualifying sector is reached by a seek operation
(as opposed to a proximal window access). Therefore, we only need to calculate the rotational delay
required to reach the first of the qualifying sectors of the track. The rotational delay for the rest of
the sectors of the track will be taken into account in the average proximal window cost (as these
will be accessed via a proximal window access). The expected number of sectors scanned until the
first qualifying sector is brought under the head, if we are in zone Z_i, is C_i / (2j). So the number of non-qualifying sectors passed by the disk head on a track with j qualifying sectors is C_i / (2j) + (j − 1) · (C_i − j) / j for magnetic disks, and only C_i / (2j) for optical disks (the delays for the remaining sectors of the track are accounted for in the proximal window cost). The average rotational latency for that track is obtained by charging this number of sectors to Delay_rot(s, i), the rotational cost for scanning s sectors in zone Z_i (equation (3)), weighted by the probability that the track contains j qualifying sectors:

Prob(a track of Z_i contains j qualifying sectors) = <# ways to pick the j sectors among the C_i sectors in the track> · <# ways to pick the remaining N − j sectors among the M − C_i sectors of the other tracks> / <# ways we can distribute the N sectors> = (C_i choose j) · (M − C_i choose N − j) / (M choose N)

Finally, the expected rotational delay is the summation over all zones of the partial rotational delays and depends on the total number of tracks per zone:

Delay_rot(N) = Σ_{i=1}^{Z} <# of tracks in zone Z_i> · Σ_j Prob(a track of Z_i contains j qualifying sectors) · <average rotational latency of such a track>

where the number of tracks in zone Z_i is ||Z_i|| · S · D; for optical disks, the probability that an access is a proximal window access determines the share of the per-track delay charged here rather than to the proximal window cost.

4 If C_i mod j ≠ 0, then the intersector distance cannot be equal to (C_i − j) / j for every successive pair of qualifying sectors. However, in our analysis we assume, without any noticeable impact, that C_i mod j = 0.
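The sketch below illustrates the magnetic-disk branch of this estimate, combining the hypergeometric track-occupancy probability with the per-track bypassed-sector count derived above; it is our own illustration of the formulas, not an excerpt from a reference implementation.

import math

def p_track_has_j(j, c_i, m, n, denom):
    # Probability that a given track (C_i sectors) holds exactly j of the N qualifying sectors.
    return math.comb(c_i, j) * math.comb(m - c_i, n - j) / denom

def track_delay_magnetic(j, c_i, h_i):
    # Sectors bypassed on a track with j qualifying sectors:
    # C_i/(2j) to reach the first one, plus (j-1)*(C_i - j)/j between the rest.
    bypassed = c_i / (2 * j) + (j - 1) * (c_i - j) / j
    return bypassed * h_i                   # Delay_rot(s, i) = s * h_i

def expected_rot_delay_magnetic(n, zone_cylinders, zone_capacity, zone_sector_time, s, d, m):
    denom = math.comb(m, n)
    total = 0.0
    for zc, c_i, h_i in zip(zone_cylinders, zone_capacity, zone_sector_time):
        tracks_in_zone = zc * s * d
        per_track = sum(p_track_has_j(j, c_i, m, n, denom) * track_delay_magnetic(j, c_i, h_i)
                        for j in range(1, min(c_i, n) + 1))
        total += tracks_in_zone * per_track
    return total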
4.5 Average Transfer Cost
Every operating system adopts the notion of a block to deal with data transfers from and to the disk. The size of a block may vary, but it is always a multiple of the smallest physical storage unit, which is the sector. The cost depends on the zones and is the product of the expected number of block accesses and a weighted cost sum over the set of the zones:

Cost_transfer(N) = N · Σ_{i=1}^{Z} Prob(Z_i is accessed) · <transfer cost in Z_i> = N · Σ_{i=1}^{Z} ( <# of sectors in Z_i> / M ) · Cost_transfer(i)

where <# of sectors in Z_i> / M = ||Z_i|| · S · D · C_i / M is the probability of accessing the i-th zone and Cost_transfer(i) is the cost for one block transfer (equation (4)).
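A sketch of the expected transfer cost for a block size of one sector, in which case the per-block cost in zone Z_i is simply h_i:

def expected_transfer_cost(n, zone_cylinders, zone_capacity, zone_sector_time, s, d, m):
    # Cost_transfer(N) = N * sum_i (sectors in Z_i / M) * h_i
    total = 0.0
    for zc, c_i, h_i in zip(zone_cylinders, zone_capacity, zone_sector_time):
        sectors_in_zone = zc * s * d * c_i
        total += (sectors_in_zone / m) * h_i
    return n * total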
4.6 Average Head Switch Cost
This cost is equal to the average number of head switches times the cost paid for each such
switch. The average number of head switches is equal to the expected number of qualifying tracks
minus the expected number of qualifying cylinders. This is because once the set of heads reaches
a cylinder, a head switch is performed for all the qualifying tracks of the cylinder except the first
one. So, the total average head switch cost is:

Cost_headSwitch(N) = ( Q_t(M, N, C, Z, S, D) − Q_c(M, N, C, Z, S, D) ) · constant_headSwitch

where Q_c(M, N, C, Z, S, D) is the expected number of qualifying cylinders (see equation (5)) and Q_t(M, N, C, Z, S, D) is the expected number of qualifying tracks. The expected number of qualifying tracks is the sum of the expected numbers of qualifying tracks in each zone:

Q_t(M, N, C, Z, S, D) = Σ_{i=1}^{Z} <# of tracks of Z_i> · Prob(a track of Z_i qualifies)
                      = Σ_{i=1}^{Z} ||Z_i|| · S · D · (1 − Prob(a track of Z_i does not qualify))
                      = Σ_{i=1}^{Z} ||Z_i|| · S · D · (1 − <# ways one can pick N sectors from the M − C_i sectors of the other tracks> / <# ways one can pick N sectors from M ones>)
                      = Σ_{i=1}^{Z} ||Z_i|| · S · D · (1 − (M − C_i choose N) / (M choose N))

Remember that all the qualifying tracks of the same cylinder are accessed before moving to another cylinder (since we assume a SCAN-like scheduling algorithm).
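A sketch of the expected number of qualifying tracks and of the resulting head-switch cost; the expected number of qualifying cylinders is taken as an argument (it can be computed as in the sketch after section 4.1).

import math

def expected_qualifying_tracks(n, zone_cylinders, zone_capacity, s, d, m):
    # Q_t = sum_i ||Z_i||*S*D * (1 - (M - C_i choose N) / (M choose N))
    denom = math.comb(m, n)
    q_t = 0.0
    for zc, c_i in zip(zone_cylinders, zone_capacity):
        p_track_empty = math.comb(m - c_i, n) / denom
        q_t += zc * s * d * (1.0 - p_track_empty)
    return q_t

def expected_head_switch_cost(q_tracks, q_cylinders, head_switch_ms=0.5):
    # Cost_headSwitch(N) = (Q_t - Q_c) * constant_headSwitch
    return (q_tracks - q_cylinders) * head_switch_ms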
5 Reductions of the Comprehensive Model
In this section, we show how one can derive the performance models for the currently known
disk technologies from our comprehensive performance model. Additionally, we show how one can
derive the performance models for some disk technologies that might appear in the future (see
figure 7).
Figure 7: Reductions of the comprehensive disk model.
- CAV model
We can derive a model for CAV optical disks from the comprehensive model. This can be done if we set the number of zones to one (Z = 1). If we, additionally, consider that there is no proximal window cost (i.e., W = 0), then we get the model for CAV magnetic disks.
- CLV model
The difference between the CLV optical disk model and the unifying model stems from the fact that the read sector time is constant throughout a CLV disk, i.e., h_1 = h_2 = · · · = h_Z = constant. The track capacity (per zone) increases as we move towards the outer disk edge.
Additionally, there is no head switch cost since optical disks are single plattered 5 . If we set
W =0, then the comprehensive model is further reduced to that for CLV magnetic disks.
- ZCAV model
In the model for ZCAV disks, the read sector time decreases as we get closer to the disk edge, thus increasing the transfer rate. A full disk revolution takes constant time, independently of the zone in which the head is lying. This means that the product C_i · h_i remains constant throughout the disk.
- ZCLV model
In the zoned CLV disk model, the transfer rate increases near the outer edge, as is the case with the ZCAV disks. However, the product C_i · h_i (i.e., the full-revolution time and, hence, the angular velocity) does not necessarily remain constant across the disk.
- CLV/CAV and CAV/CLV model
When we started the development of this unifying disk performance model both of CLV/CAV
and CAV/CLV did not exist. Since then, optical CAV/CLV appeared as a product (Hitachi
12-16X CD-ROM drive [2]). They are divided into two major parts: the CLV (CAV) part is
the innermost and the CAV (CLV) part is the outermost. The reason for their existence could
be to strike a balance between low rotational delays and high storage capacities. Having a part
as CLV results in large storage capacities. At the same time, a CAV part ensures low access
times (since there are no angular velocity adjustments) and maximum angular velocity which
implies low rotational delays. We obtain the CAV/CLV model from the comprehensive model
as follows: if the innermost part has Z_1 zones, then we simply set h_1 = h_2 = · · · = h_{Z_1} = constant and C_1 = C_2 = · · · = C_{Z_1} = constant. On the other hand, if the outermost part has Z_2 zones, we should set h_{Z_1+1} = · · · = h_{Z_1+Z_2} = constant, with the capacities C_{Z_1+1}, . . . , C_{Z_1+Z_2} increasing towards the outer edge. The CLV/CAV model can be obtained similarly.
- Optical multi-spiral model
This is also an optical model that already exists. In this disk, n spirals are wrapped up in
parallel as shown in figure 8. The single head of the drive splits the laser beam into n distinct
beams, each headed on a separate spiral. This way the transfer rate is multiplied by a factor
of n since n spirals can be read simultaneously and, thus, the transfer cost is down scaled by
a factor of n. Our model reduces to the optical multi-spiral model, if we replace our transfer
cost expression of equation (4) with a new one:

Cost′_transfer(i) = Cost_transfer(i) / n
For all the above models, we can derive the corresponding magnetic model if we set W = 0. Otherwise, if we assign a nonzero value to the proximal window size, we get the corresponding optical model.
5 Even if the optical has two readable surfaces, there is only one optical head.
Figure 8: Multi-spiral disk.
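To make the reductions concrete, the sketch below shows how the per-zone parameters could be instantiated for three of the special cases; the helper names are ours and any numbers a caller would pass are purely illustrative.

def cav_zones(cylinders, sectors_per_track, rpm=7200):
    # CAV: a single zone, constant capacity and read-sector time, W = 0.
    rev_ms = 60000.0 / rpm
    return dict(zone_cylinders=[cylinders],
                zone_capacity=[sectors_per_track],
                zone_sector_time=[rev_ms / sectors_per_track],
                window=0)

def zcav_zones(zone_cylinders, zone_capacity, rpm=7200):
    # ZCAV: constant angular velocity, so C_i * h_i is the same in every zone.
    rev_ms = 60000.0 / rpm
    return dict(zone_cylinders=list(zone_cylinders),
                zone_capacity=list(zone_capacity),
                zone_sector_time=[rev_ms / c for c in zone_capacity],
                window=0)

def clv_zones(zone_cylinders, zone_capacity, sector_time_ms, window_tracks):
    # CLV optical: the read-sector time h_i is the same in every zone, W > 0.
    return dict(zone_cylinders=list(zone_cylinders),
                zone_capacity=list(zone_capacity),
                zone_sector_time=[sector_time_ms] * len(zone_capacity),
                window=window_tracks)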
6 Performance Results
The purpose of this section is threefold. First, we show the validation of our analytical results
by comparing them with results we obtained from simulations of disk accesses. Second, we wish
to present the performance behavior we obtain from our comprehensive model after reducing it to
the model for the ZCAV disk technology (whose performance has not been analytically modeled
previously). Third, we show the performance behavior of some possible future disk technologies.
The reader can compare the performance of CAV and CLV disks against that of ZCAV disks
and CLV/CAV disks, with respect to the total seek, rotational, transfer and head switch cost of
retrieving N qualifying sectors (which have missed the cache). The qualifying sectors are randomly
distributed and retrieved using the SCAN scheduling algorithm. For our performance comparisons
we choose to include the models for two existent products (magnetic CAV and ZCAV disks). In
addition, we included the models for two theoretical disks (magnetic CLV and CLV/CAV) in order
to corroborate our intuitive explanations of (and gain insights into) the fundamental performance
behavior.
The ZCAV disk has 8 zones. The numbers of cylinders in the zones (moving towards the disk edge) are: 252, 268, 318, 138, 144, 136, 192 and 533, respectively. Using a sector size of 1 KB,
the number of sectors per track for these zones are: 28, 32, 36, 40, 42, 44, 46 and 48 respectively.
For all disks, the track capacity of the innermost tracks is 28 sectors, each with a size of 1 KB, so as to correspond to real disk products, such as the HP C2240, which have a total of 56 KB in the innermost track. For the CLV disk, the track capacity increases at a constant rate so as to reach
the maximum track capacity of 48 sectors/track in order to meaningfully compare it against the
ZCAV disk. The CAV disk has 28 sectors per track for all its tracks. The CLV part of the hybrid
disk (CLV/CAV) starts with innermost track capacity of 28 sectors, increasing uniformly to the
maximum capacity of the outermost track of 38 sectors. The CAV part of the same disk has 39
sectors per track and 1150 cylinders.
All disks with constant angular velocity rotate at the same speed of 7200 r.p.m. In addition, the maximum angular velocity of the CLV and the CLV/CAV disks is also equal to 7200 r.p.m. The average
angular velocity of the CLV disk is 11.3 ms per revolution, which amounts to 5305 r.p.m. The
average angular velocity for the CLV part of the hybrid disk is 6109 r.p.m. and for the CAV part
is 5169 r.p.m.
This rotational speed for the ZCAV disk results in a read sector time of 8.33/C_i ms, where 8.33 ms is the full revolution time and C_i is the number of sectors of a track in the i-th zone. The read sector time for the CAV disk is constant and equal to 8.33/28 ms. For the CLV disk and the CLV part of the hybrid disk the read sector time is constant and equal to 8.33/28 ms. (Note that although the track capacity in CLV disks increases, the angular velocity decreases in a way that results in a constant read sector time.) Finally, the read sector time for the CAV part of the hybrid disk is 11.6/39 ms, where 11.6 ms is the full revolution time. Observe that the read sector
times for the CLV, CLV/CAV and CAV disks are equal by construction.
All disks have a single recordable surface per platter (S = 1), the same number of platters (D = 13) and the same seek cost parameters (a_1 = 0.4, b_1 = 3.24, a_2 = 0.008, b_2 = 8, Q = 383), and they have (almost) the same storage capacity, e.g., 1052090 KB for the ZCAV disk, 1052100 KB for the CAV disk, 1051830 KB
for the CLV disk and 1051973 KB for the CLV/CAV disk. The head switch constant is 0.5 ms for
all disks. The ZCAV disk has a total of 1981 cylinders, the CAV disk has 2890 cylinders, the CLV
disk has 2145 cylinders and the CLV/CAV disk has 2232 cylinders.
6.1 Model Validation
In this section, we report on the results of our efforts to validate the analytical performance
model derived earlier. Specifically, we simulated accesses to a ZCAV disk and validated the empirical
results against those obtained analytically for ZCAV disks. We focused on ZCAV disks since they
incorporate most of the additional essential features of our model (e.g., variable transfer rates and
storage capacities) and can validate all of the mathematical expressions derived earlier (except for
that of the proximal window access cost).
Our simulation model uses the same disk configuration as the one of the ZCAV disk described in
the previous section. This means that the disk layout (i.e., Z, C i , S, D), the seek cost parameters
(i.e., a_1, b_1, a_2, b_2, Q), the rotational speed (i.e., 8.33 ms per full revolution), the block size (i.e., 1 sector), the settle cost constant (i.e., constant settle ) and the head switch cost constant (i.e.,
constant headSwitch ) are the same for both simulation and analytical models. Note that the above
disk configuration is a mixture of the products HP C2240 and HP 97560.
Our simulation model is based on the simulation model developed in [16], which has been shown to be accurate and validated. We use the same seek cost model, which is a two-branched function of the distance (a square-root function for short seeks and a linear function for long seeks). The rotational delay is calculated by keeping track of the head position while the platter rotates and by optimizing the retrieval of the qualifying sectors when more than one reside in the same track (so that they are retrieved in at most a full revolution). Additionally, the head switch and the settle costs have been incorporated in our simulator in the same way they have been embedded in the simulation model of [16]. Thus, we managed, first, to match our analytical results with the results of our simulator and, second, to match our simulator's basic characteristics (i.e., it takes into account the settle phase, head switches, command queuing, etc.) with those of [16].
In figure 9, we plot the empirical and analytical measurements for the total average seek cost,
rotational delay, transfer cost and head switch cost. In the figure, we can see that there is a very
close match between the analytical and the simulation results for all four metrics. The following
table shows the offness, which is defined to be the percentage deviation between our simulation
measurements and the analytical expressions.
Figure 9: Validation — simulated versus analytical total average seek cost, rotational delay, head switch cost and transfer cost for the ZCAV disk, as functions of the number of cache-missed qualifying sectors (N).
Metric | Offness %
Rotational | 0.852259
Head switch | 0.544137
Transfer | 0.396091
6.2 Comparison of Different Technologies
Figure 10: Total average costs for the ZCAV, CAV, CLV and CLV/CAV disks as functions of the number of cache-missed qualifying sectors (N): (a) seek cost, (b) rotational delay, (c) head switch cost, (d) transfer cost.

In figure 10a, we can see the total expected seek cost for the ZCAV, CAV, CLV and CLV/CAV disks with respect to the number of the qualifying sectors. The seek cost for the CAV disk is always higher than the seek cost of the ZCAV disk. This occurs because the CAV disk has more cylinders than the ZCAV disk, which in turn is attributed to the fact that both disks have the same storage
capacity while the ZCAV disk has a higher average track capacity. The more the cylinders, the more the qualifying cylinders and the greater the expected distance between two successive qualifying cylinders. The longer this distance, the higher the seek cost. There is an upper bound on the total seek cost for both disks, and this is reached when all the disk cylinders are qualifying. The same explanation holds for the performance differences between the other disk models.
The total average rotational delay of the ZCAV disk is lower than the rotational delay of the
CAV disk (see figure 10b). The reason is that the expected number of the qualifying sectors per
track is greater in the case of the ZCAV disk (because ZCAV has fewer but "larger" tracks (i.e.,
with higher average storage capacities)). The more the qualifying sectors per track, the lower is
the rotational delay. The CLV and CLV/CAV disk models have higher rotational latencies (despite
the fewer and larger tracks) because of the decreasing angular velocity when moving towards the
outer edge. This is evidenced by the better rotational delay of the CLV/CAV disk compared to
that of the CLV disk.
In figure 10c, we can see how the total head switch cost is affected by the number of the
qualifying sectors (N ). For small values of N , a qualifying cylinder has very few qualifying sectors.
For the cylinders with one qualifying sector we pay no head switch cost and, therefore, the CAV
head switch cost is lower (although not shown clearly in the figure) since it has more cylinders.
When the number of the sectors to be retrieved becomes large enough, the ZCAV disk provides
lower head switch cost. This happens because, for large values of N , all the cylinders are qualifying
for both disks and the key point becomes the expected number of the qualifying tracks. The greater
the number of the qualifying tracks, the greater the number of head switches. And since the CAV
disk has more and "smaller" tracks than the ZCAV disk, it has also more qualifying tracks. For
the same reason, CLV/CAV disk performs worse than the CLV disk.
Finally, in figure 10d, we can see that the total transfer cost of the ZCAV disk is lower than
the one of the CAV disk. The explanation is based on the "larger" tracks of the ZCAV disk, which
in turn imply shorter sectors (in inches). This coupled with the fact that the two disks have the
same angular velocity implies shorter average sector transfer time for the ZCAV disk. Notice also
that the transfer cost is the same for the CLV and the CLV/CAV disks. Recall that, by their
construction, these disks have the same transfer rate.
7 Contribution and Concluding Remarks
This paper presents an attempt to provide a common framework for studying the performance
behavior of retrieval operations in disk-storage device technologies under random workloads found
in several applications. To do this, we identify the fundamental characteristics of disk-storage device
technology which are the optical/magnetic nature and the constant or variable angular/linear
velocity, transfer rate and storage capacity. Based on these characteristics, we derived a comprehensive
analytical model which can be used in important system components such as the query
optimizer in a large-scale traditional database and in an admission controller in a multimedia storage
server. The comprehensive nature of our model makes it useful for studying the performance
of existing and possible future disk-drive products and is important for estimating the retrieval
cost in large database systems containing different types of disk drives (such as magnetic disks in
secondary storage and optical disk jukeboxes in tertiary storage).
For optical disks our model accounts for proximal window accesses. In doing this, we augmented
related work by providing an expected cost formula for proximal window accesses in the
case of optical disks with concentric tracks.
For magnetic disks, we derived expected costs for the traditionally-studied operations (e.g.,
seek and rotational delays). Our expected cost formulas accounted for the possibility of constant
or variable angular/linear velocities, transfer rates and storage capacities. In addition, we included
a more detailed analytical characterization of modern disk behavior, such as command queuing
(which allows disk controllers to minimize rotational delay). Also, we contributed an expected cost analysis of disk functionalities such as head switching, which has not, to our knowledge, been modeled before, although it has been found to contribute to disk access costs.
After deriving the unifying formulas for the expected disk retrieval cost, we reduced our formulas
to model costs for existing disk products and for theoretical disks. This allowed us to gain
insights into the fundamental performance trade-offs. It also gave us the opportunity to contribute
an analysis of the performance of the recently-produced zoned magnetic disks.
Our analytical results were validated against detailed simulation studies. The models for our
simulations are similar to models in the literature which have been found to very closely approximate
the performance of real disks [16], with the exception of cache modeling (which is not needed for
our target applications) and of detailed modeling of the controller's internals and bus contention
which are dependent on particular system configurations (such as the number of drives attached to
I/O buses, the size of drive-level speed-matching buffers and caches) and which change with time.
The same model can form a basis to develop a model to predict the performance of write disk
operations as well. Write operations, in general, are costlier. The exact cost model depends on
whether a written block can be read and checked for consistency without having to wait for an
additional revolution (a feature typically unavailable in optical disks), whether the disk block needs
to be erased before written (as is typically the case in optical disks) and depends also on the disk's
reliability (i.e., how many write operations must on average be repeated). Finally, the cost model
for write operations must also take into account the fact that many disks require data blocks to be
written on the disk's surface, whereas reads can be satisfied directly from the controller's cache. In
the future, we plan to extend this model to account for write operations as well.
--R
"Track-Pairing: a Novel Data Layout for VOD Servers with Multi-Zone- Recording Disks"
"Comparison: All 12X CD-ROM Drives Are Not Equal"
"Analysis of Retrieval Performance for Records and Objects Using Optical Disk Technology"
"Performance Analysis and Fundamental Trade Offs for Optical Disks"
"Optical Mass Storage Systems and their Performance"
"Estimating Block Selectivities for Physical Database Design"
"Estimating Block Accesses in Database Organizations"
"Performance Optimizations for Optical Discs Architectures"
"Placement of Continuous Media in Multi-Zone Disks"
"Tutorial on Multimedia Storage Servers"
"Hit-Ratio of Caching Disk Buffer"
"Magnetic Disk Drives Evaluation"
"Stochastic Service Guarantees for Continuous Data on Multi-Zone Disks"
"Observing the Effects of Multi-Zone Disks"
"Disk Scheduling for Mixed-Media Workloads in Multimedia Servers"
"An Introduction to Disk Drive Modeling"
"Performance Modeling for Realistic Storage Devices"
"Multimedia: Computing, Communications and Appli- cations"
"Placement of Multimedia Blocks on Zoned Disks"
"Optimal Data Placement on Disks: A Comprehensive Solution for Different Disk Technologies"
"Overlay striping and optimal parallel I/O for modern applications"
"Video striping in video server environments"
"An Exact Analysis of Expected Seeks in Shadowed Disks"
"Minimizing Expected Head Movement in One-Dimensional and Two-Dimensional Mass Storage Systems"
"Algorithmic Studies In Mass Storage Systems"
"A Tight Upper Bound of the Lumped Disk Seek Time for the SCAN Disk Scheduling Policy"
--TR
--CTR
Zhenghong Deng , Wei Zheng , Fang Wang , Zhengguo Hu, Research and implementation of management software for SAN based on clustering technology, Proceedings of the 2004 ACM SIGGRAPH international conference on Virtual Reality continuum and its applications in industry, June 16-18, 2004, Singapore
Daniel Ellard , Margo Seltzer, NFS tricks and benchmarking traps, Proceedings of the USENIX Annual Technical Conference on USENIX Annual Technical Conference, p.16-16, June 09-14, 2003, San Antonio, Texas
Athena Vakali , Evimaria Terzi , Elisa Bertino , Ahmed Elmagarmid, Hierarchical data placement for navigational multimedia applications, Data & Knowledge Engineering, v.44 n.1, p.49-80, January | performance modeling and prediction;disk technologies;zoned disks |
628198 | High-Dimensional Similarity Joins. | AbstractMany emerging data mining applications require a similarity join between points in a high-dimensional domain. We present a new algorithm that utilizes a new index structure, called the $\epsilon$ tree, for fast spatial similarity joins on high-dimensional points. This index structure reduces the number of neighboring leaf nodes that are considered for the join test, as well as the traversal cost of finding appropriate branches in the internal nodes. The storage cost for internal nodes is independent of the number of dimensions. Hence, the proposed index structure scales to high-dimensional data. We analyze the cost of the join for the $\epsilon$ tree and the R-tree family, and show that the $\epsilon$ tree will perform better for high-dimensional joins. Empirical evaluation, using synthetic and real-life data sets, shows that similarity join using the $\epsilon$ tree is twice to an order of magnitude faster than the $R^+$ tree, with the performance gap increasing with the number of dimensions. We also discuss how some of the ideas of the $\epsilon$ tree can be applied to the R-tree family. These biased R-trees perform better than the corresponding traditional R-trees for high-dimensional similarity joins, but do not match the performance of the $\epsilon$ tree. | Introduction
Many emerging data mining applications require efficient
processing of similarity joins on high-dimensional
points. Examples include applications in time-series
databases[1, 2], multimedia databases [9, 14, 13], medical
databases [3, 21], and scientific databases[22]. Some typical
queries in these applications include: (1) discover all
stocks with similar price movements; (2) find all pairs of
similar images; (3) retrieve music scores similar to a target
music score. These queries are often a prelude to clustering
the objects. For example, given all pairs of similar images,
the images can be clustered into groups such that the images
in each group are similar.
To motivate the need for multidimensional indices in
such applications, consider the problem of finding all pairs
of similar time-sequences. The technique in [2] solves this
problem by breaking each time-sequences into a set of contiguous
subsequences, and finding all subsequences similar
to each other. If two sequences have "enough" similar
subsequences, they are considered similar. To find similar
subsequences, each subsequence is mapped to a point in a
multi-dimensional space. Typically, the dimensionality of
this space is quite high. The problem of finding similar sub-sequences
is now reduced to the problem of finding points
that are close to the given point in the multi-dimensional
space. A pair of points are considered "close" if they are
within ffl distance of each other with some distance metric
(such as L 2 or L1 norms) that involves all dimensions,
where ffl is specified by the user. A multi-dimensional index
structure (the R + tree) was used for finding all pairs of
close points.
This approach holds for other domains, such as image
data. In this case, the image is broken into a grid of sub-
images, key attributes of each sub-image mapped to a point
in a multi-dimensional space, and all pair of similar sub-images
are found. If "enough" sub-images of two images
match, a more complex matching algorithm is applied to
the images.
A closely related problem is to find all objects similar to
a given object. This translates to finding all points close to
a query point.
Even if there is no direct mapping from an object to a
point in a multi-dimensional space, this paradigm can still
be used if a distance function between objects is available.
An algorithm is presented in [7] for generating a mapping
from an object to a multi-dimensional point, given a set of
objects and a distance function.
Current spatial access methods (see [18, 8] for an
overview) have mainly concentrated on storing map infor-
mation, which is a 2-dimensional or 3-dimensional space.
While they work well with low dimensional data points, the
time and space for these indices grow rapidly with dimen-
sionality. Moreover, while CPU cost is high for similarity
joins, existing indices have been designed with the reduction
of I/O cost as their primary goal. We discuss these
points further later in the paper, after reviewing current multidimensional
indices.
To overcome the shortcomings of current indices for
high-dimensional similarity joins, we propose a structure
called the ffl-kdB tree. This is a main-memory data structure
optimized for performing similarity joins. The ffl-kdB tree
also has a very small build time. This lets the ffl-kdB tree
use the similarity distance limit ffl as a parameter in building
the tree. Empirical evaluation shows that the build plus join
time for the ffl-kdB tree is typically 3 to 35 times less than the
join time for the R + tree [19], 1 with the performance gap
increasing with the number of dimensions. A pure main-memory
data structure would not be very useful, since the
data in many applications will not fit in memory. We extend
the join algorithm to handle large amount of data while still
using the ffl-kdB tree.
Problem Definition We will consider two versions of the
spatial similarity join problem:
• Self-join: Given a set of N high-dimensional points and a distance metric, find all pairs of points that are within ε distance of each other.
• Non-self-join: Given two sets S1 and S2 of high-dimensional points and a distance metric, find pairs of points, one each from S1 and S2, that are within ε distance of each other.
The distance metric for two n-dimensional points $\vec{X}$ and $\vec{Y}$ that we consider is
$L_p(\vec{X}, \vec{Y}) = (\sum_{i=1}^{n} |x_i - y_i|^p)^{1/p}$,
where L2 is the familiar Euclidean distance, L1 the Manhattan distance, and L∞ corresponds to the maximum distance in any dimension.
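For concreteness, the following short sketch (Python, ours and not from the paper) computes the L1, L2 and L∞ distances between two equal-length points and the ε-closeness test used throughout; the function names are our own.

# Sketch (not from the paper): Lp distances and the epsilon-closeness test.
def lp_distance(x, y, p):
    """L_p distance between two equal-length point tuples; p = float('inf') gives L_infinity."""
    diffs = [abs(a - b) for a, b in zip(x, y)]
    if p == float('inf'):
        return max(diffs)
    return sum(d ** p for d in diffs) ** (1.0 / p)

def within_eps(x, y, eps, p=2):
    """True if the two points join, i.e. their L_p distance is at most eps."""
    return lp_distance(x, y, p) <= eps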
Paper Organization. In Section 2, we give an overview
of existing spatial indices, and describe their shortcomings
when used for high-dimensional similarity joins. Section 3
describes the ffl-kdB tree and the algorithm for similarity
joins. We give a performance evaluation in Section 4 and
conclude in Section 5.
2 Current Multidimensional Index Structures
We first discuss the R-tree family of indices, which are
the most popular multi-dimensional indices, and describe
how to use them for similarity joins. We also give a brief
overview of other indices. We then discuss inadequacies of
the current index structures.
Our experiments indicated that the R+ tree was better than the R-tree
[8] or the R* tree [4] for high-dimensional similarity joins.
Figure 1. Example of an R-tree: (a) Space Covered by Bounding Rectangles; (b) R-tree.
2.1 The R-tree family
R-tree [8] is a balanced tree in which each node represents
a rectangular region. Each internal node in a R-tree
stores a minimum bounding rectangle (MBR) for each of
its children. The MBR covers the space of the points in the
child node. The MBRs of siblings can overlap. The decision
whether to traverse a subtree in an internal node depends on
whether its MBR overlaps with the space covered by the query. When a node becomes full, it is split; the total area of the two MBRs resulting from the split is minimized.
Figure 1 shows an example of an R-tree. This tree consists
of 4 leaf nodes and 3 internal nodes. The MBRs are
N1,N2,L1,L2,L3 and L4. The root node has two children
whose MBRs are N1 and N2.
The R* tree [4] added two major enhancements to the R-tree.
First, rather than just considering the area, the node splitting
heuristic in the R* tree also minimizes the perimeter and
overlap of the bounding regions. Second, the R* tree introduced
the notion of forced reinsert to make the shape of the
tree less dependent on the order of insertion. When a
node becomes full, it is not split immediately, but a portion
of the node is reinserted from the top level. With these two
enhancements, the R* tree generally outperforms the R-tree.
The R+ tree [19] imposes the constraint that no two bounding
regions of a non-leaf node overlap. Thus, except for the
boundary surfaces, there will be only one path to every leaf
region, which can reduce search and join costs.
The X-tree [6] avoids splits that could result in a high degree of
overlap of bounding regions for the R*-tree. Their experiments
show that the overlap of bounding regions increases significantly
for high-dimensional data, resulting in performance
deterioration in the R*-tree. Instead of allowing splits that
produce a high degree of overlap, the nodes in the X-tree are
extended to more than the usual block size, resulting in
so-called super-nodes. Experiments show that the X-tree improves
the performance of point queries and nearest-neighbor
queries compared to the R*-tree and TV-tree (described below).
No comparison with the R+ tree is given in [6] for point data.
However, since the R+ tree does not have any overlap, and
the gains for the X-tree are obtained by avoiding overlap,
one would not expect the X-tree to be better than the R+
tree for point data.
Figure 2. Screening points for join test: (a) Overlapping MBRs (R-tree and R* tree); (b) Non-overlapping MBRs (R+ tree).
Similarity Join The join algorithm using R-tree considers
each leaf node, extends its MBR with ffl-distance, and
finds all leaf nodes whose MBR intersects with this extended
MBR. The algorithm then performs a nested-loop
join or sort-merge join for the points in those leaf nodes, the
join condition being that the distance between the points is
at most ffl. (For the sort-merge join, the points are first sorted
on one of the dimensions.)
To reduce redundant comparisons between points when
joining two leaf nodes, we could first screen points. The
boundary of each leaf node is extended by ffl, and only points
that lie within the intersection of the two extended regions
need be joined. Figure 2 shows an example, where the
rectangles with solid lines represent the MBRs of two leaf
nodes and the dotted lines illustrate the extended bound-
aries. The shaded area contains screened points.
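The screening-and-join step just described can be sketched as follows (Python, ours, not the R+ tree implementation used in the paper). MBRs are assumed to be (low-corner, high-corner) tuples and dist is any Lp distance function; extending every side by ε is the rectangular (L∞-style) extension, which the paper later notes is also the one used when traversing the index for L1 and L2.

# Sketch: joining two leaf nodes after extending their MBRs by eps and screening points.
def extend_mbr(mbr, eps):
    lo, hi = mbr
    return tuple(v - eps for v in lo), tuple(v + eps for v in hi)

def intersects(m1, m2):
    # per dimension: lo1 <= hi2 and lo2 <= hi1
    return all(l1 <= h2 and l2 <= h1
               for (l1, l2, h1, h2) in zip(m1[0], m2[0], m1[1], m2[1]))

def screen(points, region):
    lo, hi = region
    return [p for p in points if all(l <= v <= h for v, l, h in zip(p, lo, hi))]

def join_leaves(mbr1, pts1, mbr2, pts2, eps, dist):
    e1, e2 = extend_mbr(mbr1, eps), extend_mbr(mbr2, eps)
    if not intersects(e1, e2):
        return []
    cand1 = screen(pts1, e2)   # keep only points inside the other node's extended MBR
    cand2 = screen(pts2, e1)
    return [(p, q) for p in cand1 for q in cand2 if dist(p, q) <= eps]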
The kdB tree [17] is similar to the R+ tree. The main difference is that the bounding regions of a kdB tree node are disjoint and together cover the entire space, unlike the MBRs of the R+ tree.
hB-tree [12] is similar to the kdB tree except that bounding
rectangles of the children of an internal node are organized
as a K-D tree [5] rather than as a list of MBRs.
(The K-D-tree is a binary tree for multi-dimensional points.
In each level of the K-D-tree, only one dimension, chosen
cyclically, is used to decide the subtree for traversal.) Fur-
ther, the bounding regions may have rectangular holes in
Figure 3. Number of neighboring leaf nodes.
them. This reduces the cost of splitting a node compared to
the kdB tree.
TV-tree [10] uses a variable number of dimensions for
indexing. TV-tree has a design parameter ff ("active dimen-
sion") which is typically a small integer (1 or 2). For any
node, only ff dimensions are used to represent bounding regions
and to split nodes. For the nodes close to the root, the
first ff dimensions are used to define bounding rectangles.
As the tree grows, some nodes may consist of points that all
have the same value on their first, say, k dimensions. Since
the first k dimensions can no longer distinguish the points in
those nodes, the next ff dimensions (after the k dimensions)
are used to store bounding regions and for splitting. This
reduces the storage and traversal cost for internal nodes.
Grid-file [15] partitions the k-dimensional space as a
grid; multiple grid buckets may be placed in a single disk
page. A directory structure keeps track of the mapping from
grid buckets to disk pages. A grid bucket must fit within a
leaf page. If a bucket overflows, the grid is split on one of
the dimensions.
2.3 Problems with Current Indices
The index structures described above suffer from following
inadequacies for performing similarity joins with high-dimensional
points:
Number of Neighboring Leaf Nodes. The splitting algorithms
in the R-tree variants utilize every dimension equally
for splitting in order to minimize the volume of hyper-
rectangles. This causes the number of neighboring leaf
nodes within ffl-distance of a given leaf node to increase dramatically
with the number of dimensions. To see why this
happens, assume that a R-tree has partitioned the space so
that there is no "dead region" between bounding rectangles.
Then, with a uniform distribution in a 3-dimensional space,
we may get 8 leaf nodes as shown in Figure 3. Notice that
every leaf node is within ffl-distance of every other leaf node.
In an n-dimensional space, there may be O(2^n) leaf nodes
within ε-distance of every leaf node. The problem is somewhat
mitigated because of the use of MBRs. However, the
number of neighbors within ffl-distance still increases dramatically
with the number of dimensions.
This problem also holds for other multi-dimensional
structures, except perhaps the TV-tree. However, the TV-
tree suffers from a different problem - it will only use the
first k dimensions for splitting, and does not consider any
of the others (unless many points have the same value in the
first k dimensions). With enough data points, this leads to
the same problem as for the R-tree, though for the opposite
reason. Since the TV-tree uses only the first k dimensions
for splitting, each leaf node will have many neighboring leaf
nodes within ffl-distance.
Note that the problem affects both the CPU and I/O cost.
The CPU cost is affected because of the traversal time as
well as time to screen all the neighboring pages. I/O cost
is affected because we have to access all the neighboring
pages.
Storage Utilization. The kdB tree and R-tree family, including
the X-tree, represent the bounding regions of each
node by rectangles. The bounding rectangles are represented
by "min" and "max" points of the hyper-rectangle.
Thus, the space needed to store the representation of bounding
rectangles increases linearly with the number of dimen-
sions. This is not a problem for the hB-tree (which does not
store MBRs), the TV-tree (which only uses a few dimensions
at a time), or the grid file.
Traversal Cost. When traversing a R-tree or kdB tree,
we have to examine the bounding regions of children in the
node to determine whether to traverse the subtree. This step
requires checking the ranges of every dimension in the representation
of bounding rectangles. Thus, the CPU cost of
examining bounding rectangles increases proportionally to
the number of dimensions of data points. This problem is
mitigated for the hB-tree or the TV-tree. This is not a problem
for the grid-file.
Build Time. The set of objects participating in a spatial
join may often be pruned by selection predicates [11] (e.g.
find all similar international funds). In those cases, it may
be faster to perform the non-spatial selection predicate first
(select international funds) and then perform spatial join on
the result. Thus it is sometimes necessary to build a spatial
index on-the-fly. Current indices are designed to be built
once; the cost of building them can be more than the cost of
the join [16].
Skewed Data. Handling skewed data is not a problem for
most current indices except the grid-file. In a k-dimensional
space, a single data page overflow may result in a
dimensional slice being added to the grid-file directory. If
the grid-file had n buckets before the split, and the splitting
dimension had m partitions, n=m new cells are added to the
grid after the split. Thus, the size of the directory structure
can grow rapidly for skewed high-dimensional points.
Summary
. Each index has good and bad features for similarity
join of high-dimensional points. It would be difficult
to design a general-purpose multi-dimensional index which
does not have any of the shortcomings listed above. How-
ever, by designing a special-purpose index, we can attack
these problems. The problem of high-dimensional similarity
joins with some distance metric and ffl parameter has the
following properties:
• The feature vector chosen for similarity comparison has a high dimension.
• Every dimension of the feature vector is mapped into a numeric value.
• The distance function is computed considering every dimension of the feature vector.
• The similarity distance limit ε is not large, since indices are not effective when the selectivity of the similarity join is large (i.e. when every point matches with every other point).
We now describe a new index structure, the ε-kdB tree, which is a special-purpose index designed for this task.
3 The ε-kdB tree
We introduce the ε-kdB tree in Section 3.1 and then discuss its design rationale in Section 3.2.
3.1 ffl-kdB tree definition
We first define the ε-kdB tree 2. We then describe how to perform similarity joins using the ε-kdB tree, first for the case where the data fits in memory, and then for the case where it does not.
ε-kdB tree. We assume, without loss of generality, that the co-ordinates of the points in each dimension lie between 0 and +1. We start with a single leaf node. For better space utilization, pointers to the data points are stored in leaf nodes. Whenever the number of points in a leaf node exceeds a threshold, the leaf node is split and converted to an interior node. If the leaf node was at level i, the ith dimension is used for splitting the node. The node is split into ⌊1/ε⌋ parts, such that the width of each new leaf node in the ith dimension is either ε or slightly greater than ε. (In the rest of this section, we assume without loss of generality that ε is an exact divisor of 1.) An example of an ε-kdB tree for a two-dimensional space is shown in Figure 4.
Note that for any interior node x, the points in a child y
of x will not join with any points in any of the other children
of x, except for the 2 children adjacent to y. This holds for
any of the L p distance metrics. Thus the same join code can
be used for these metrics, with only the final test between a
pair of points being metric-dependent.
2 It is really a trie, but we call it a tree since it is conceptually similar to the kdB tree.
Figure 4. ε-kdB tree.
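A minimal sketch (ours) of the structure just defined: a leaf that overflows at level i is split at once into 1/ε children along dimension i, and point coordinates are assumed to lie in [0, 1) as in the definition above. The class name, threshold default and helper names are our own.

# Sketch: epsilon-kdB node with at-once, eps-sized splitting along one dimension per level.
class EpsKdbNode:
    def __init__(self, level, eps, threshold=100):
        self.level, self.eps, self.threshold = level, eps, threshold
        self.points = []          # leaf: point references
        self.children = None      # interior: list of 1/eps children

    def is_leaf(self):
        return self.children is None

    def insert(self, point, num_dims):
        if self.is_leaf():
            self.points.append(point)
            # split only if the leaf overflows and a free dimension remains
            if len(self.points) > self.threshold and self.level < num_dims:
                self._split(num_dims)
        else:
            self._child_for(point).insert(point, num_dims)

    def _child_for(self, point):
        fanout = len(self.children)
        i = min(int(point[self.level] * fanout), fanout - 1)
        return self.children[i]

    def _split(self, num_dims):
        fanout = int(round(1.0 / self.eps))
        self.children = [EpsKdbNode(self.level + 1, self.eps, self.threshold)
                         for _ in range(fanout)]
        for p in self.points:
            self._child_for(p).insert(p, num_dims)
        self.points = []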
Similarity Join using the ε-kdB tree. Let x be an internal node in the ε-kdB tree. We use x[i] to denote the ith child of x. Let f be the fanout of the tree; note that f = 1/ε. Figure 5 describes the join algorithm. The algorithm initially calls self-join(root), for the self-join version, or join(root1, root2), for the non-self-join version. The procedures leaf-join(x, y) and leaf-self-join(x) perform a sort-merge join on leaf nodes.
For high-dimensional data, the ε-kdB tree will rarely use all the dimensions for splitting. (For instance, with 10 dimensions and an ε of 0.1, there would have to be more than 10^10 points before all dimensions are used.) Thus we can usually use one of the free (unsplit) dimensions as a common "sort dimension". The points in every leaf node are kept sorted on this dimension, rather than being sorted repeatedly during the join. When joining two leaf nodes, the algorithm does a sort-merge using this dimension.
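The leaf-level sort-merge step can be sketched as follows (ours), assuming each leaf keeps its points sorted on the common sort dimension s; the final distance test is metric-dependent and passed in as dist.

# Sketch: leaf-join and leaf-self-join via sort-merge on the common sort dimension s.
def leaf_join(pts1, pts2, s, eps, dist, out):
    j = 0
    for p in pts1:                                  # both lists sorted on dimension s
        while j < len(pts2) and pts2[j][s] < p[s] - eps:
            j += 1                                  # points below the band can never join with p
        k = j
        while k < len(pts2) and pts2[k][s] <= p[s] + eps:
            if dist(p, pts2[k]) <= eps:
                out.append((p, pts2[k]))
            k += 1

def leaf_self_join(pts, s, eps, dist, out):
    for i, p in enumerate(pts):
        k = i + 1
        while k < len(pts) and pts[k][s] <= p[s] + eps:
            if dist(p, pts[k]) <= eps:
                out.append((p, pts[k]))
            k += 1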
Memory Management. The value of ε is often given at run-time. Thus, since the value of ε is a parameter for building the index, it may not be possible to build a disk-based version of the index in advance. Instead, we sort the multi-dimensional points on the first splitting dimension and keep them as an external file.
We first describe the join algorithm, assuming that main memory can hold all points within a 2ε distance on the first dimension, and then generalize it. The join algorithm first reads points whose values in the sorted dimension lie between 0 and 2ε, builds the ε-kdB tree for those points in main memory, and performs the similarity join in memory. The algorithm then deallocates the space used for the points whose values in the sorted dimension are between 0 and ε, reads points whose values are between 2ε and 3ε, builds the ε-kdB tree for these points, and performs the join procedure again. This procedure is continued until all the points have been processed. Note that we only read each
procedure join(x, y)
begin
  if leaf-node(x) and leaf-node(y) then
    leaf-join(x, y)
  else if leaf-node(x) then begin
    for i := 1 to f do
      join(x, y[i])
  end
  else if leaf-node(y) then begin
    for i := 1 to f do
      join(x[i], y)
  end
  else begin
    for i := 1 to f-1 do begin
      join(x[i], y[i]);
      join(x[i], y[i+1]);
      join(x[i+1], y[i])
    end;
    join(x[f], y[f])
  end
end

procedure self-join(x)
begin
  if leaf-node(x) then
    leaf-self-join(x)
  else begin
    for i := 1 to f-1 do begin
      self-join(x[i]);
      join(x[i], x[i+1])
    end;
    self-join(x[f])
  end
end

Figure 5. Join algorithm
point off the disk once.
This procedure works because the build time for the ε-kdB tree is extremely small. It can be generalized to the case where a 2ε chunk of the data does not fit in memory. The basic idea is to partition the data into ε² chunks using an additional dimension. Then, the join procedure (i.e. read points into memory, build the ε-kdB tree, perform the join, and so on) is instead repeated for each 4ε² chunk of the data using the additional dimension.
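A compact sketch (ours) of the basic 2ε-window loop over the externally sorted file: read_range(lo, hi) is an assumed helper returning the points whose sort-dimension value falls in [lo, hi), and join_window(points) stands for "build an in-memory ε-kdB tree over these points and self-join it" as described above.

# Sketch: stream the file (sorted on dimension 0) through a sliding 2*eps window.
def external_self_join(read_range, join_window, eps):
    lo = 0.0
    window = read_range(0.0, 2 * eps)              # sort-dimension values in [0, 2*eps)
    while window:
        join_window(window)                        # build an eps-kdB tree for the window and self-join it
        lo += eps
        nxt = read_range(lo + eps, lo + 2 * eps)   # next eps-wide slab from the sorted file
        if not nxt:
            break                                  # nothing new: remaining points were already joined
        window = [p for p in window if p[0] >= lo] + nxt
    # Note: pairs lying entirely inside the eps-wide overlap of consecutive windows are
    # reported by both windows in this naive sketch; duplicate suppression is omitted.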
3.2 Design Rationale
Two distinguishing features of the ε-kdB tree are:
• Biased Splitting: The dimension used in the previous split is selected again for splitting as long as the length of that dimension in the bounding rectangle of each resulting leaf node is at least ε.
• ε-sized Splitting: When we split a node, we split the node into ε-sized chunks.
We discuss below how these features help ffl-kdB tree solve
the problems with current indices outlined in Section 2.
Number of Neighboring Leaf Nodes. Recall that with
current indices, the number of neighboring leaf pages may
increase exponentially with the number of dimensions. The
ffl-kdB solves this problem because of the biased splitting.
When the length of the bounding rectangle of each leaf
nodes in the split dimension is at least ffl, at most two neighboring
leaf nodes need to be considered for the join test.
However, as the length of the bounding rectangle in the split
dimension becomes less than ffl, the number of neighbor leaf
nodes for join test increases. Hence we split in one dimension
as long as the length of the bounding rectangle of each
resulting children is at least ffl, and then start splitting in the
next dimension. When a leaf node becomes full, we split
the node into several children, each of size ffl in the split dimension
at once, rather than gradually, in order to reduce
the build time.
We have two alternatives for choosing the next splitting
dimension: global ordering and local ordering. Global ordering
uses the same split dimension for all the nodes in
the same level, while local ordering chooses the split dimension
based on the distribution of points in each node.
Examples of these two cases are shown in Figure 6, for a 3-
dimensional space. For both orderings, the dimension D0
is used for splitting in the root node (i.e. level 0). For global
ordering, only D1 is used for splitting in level 1. However,
for local ordering, both D1 and D2 are chosen alternatively
for neighboring nodes in level 1. Consider the leaf node labeled
X. With global ordering, it has 5 neighbor leaf nodes
(shaded in the figure). The number of neighbors increases
to 9 for local ordering. Notice that the space covered by the
neighbors for global order is a proper subset of that covered
by the neighbors for local ordering. The difference in the
space covered by the two orderings increases as ffl decreases.
Hence we chose global ordering for splitting dimensions,
rather than local ordering.
When the number of points is so large that the ε-kdB tree is forced to split every dimension, the number of neighbors will be comparable to that of other indices. Until that limit, however, the number of neighbors depends on the number of points (and their distribution) and on ε, and is independent of the number of dimensions.
The order in which dimensions are chosen for splitting
can significantly affect the space utilization and join cost
if correlations exist between some of the dimensions. This
problem can be solved by statistically analyzing a sample of
the data, and choosing for the next split the dimension that
has the least correlation with the dimensions already used
for splitting.
Figure 6. Global and Local Ordering of Splitting Dimensions: (a) Choosing splitting dimension globally; (b) Choosing splitting dimension locally.
Space Requirements. For each internal node, we simply
need an array of pointers to its children. We do not need
to store minimum bounding rectangles because they can be
computed. Hence the space required depends only on the
number of points (and their distribution), and is independent
of the number of dimensions.
Traversal Cost. Since we split nodes in ffl sized chunks,
traversal cost is extremely small. The join procedure
never has to check bounding rectangles of nodes to decide
whether or not they may contain points within ffl distance.
Build time. The build time is small because we do not
have complex splitting algorithms, or splits that propagate
upwards.
Skewed data. Since splitting a node does not affect other
nodes, the ffl-kdB tree will handle skewed data reasonably.
4 Performance Evaluation
We empirically compared the performance of the ffl-kdB
tree with both R+ tree and a sort-merge algorithm. The experiments
were performed on an IBM RS/6000 250 workstation
with a CPU clock rate of 66 MHz, 128 MB of main
memory, and running AIX 3.2.5. Data was stored on a local
disk, with measured throughput of about 1.5 MB/sec.
We first describe the algorithms compared in Section 4.1,
and the datasets used in experiments in Section 4.2. Next,
we show the performance of the algorithms on synthetic and
real-life datasets in Sections 4.3 and 4.4 respectively.
4.1 Algorithms
ffl-kdB tree. We implemented the ffl-kdB tree algorithm described
in Section 3.1. A leaf node was converted to an internal
node (i.e. split) if its memory usage exceeded 4096
bytes. However, if there were no dimensions left for split-
ting, the leaf node was allowed to exceed this limit. The
execution times for the ffl-kdB tree include the I/O cost of
reading an external sorted file containing the data points, as
well as the cost of building the index. Since the external
file can be generated once and reused for different value of
ffl, the execution times do not include the time to sort the
external file.
R+ tree. Our experiments indicated that the R+ tree was faster than the R* tree for similarity joins on a set of high-dimensional points. (Recall that the difference between the R+ tree and the R* tree is that the R+ tree does not allow overlap between minimum bounding rectangles. Hence it reduces the number of overlapping leaf nodes to be considered for the spatial similarity join, resulting in faster execution time.) We therefore used the R+ tree for our experiments. We used a page size of 4096 bytes. In our experiments, we ensured that the R+ tree always fit in memory and that a built R+ tree was available in memory before the join execution began. Thus, the execution time for the R+ tree does not include any build time - it only includes the CPU time for the main-memory join. (Although this gives the R+ tree an unfair advantage, we err on the conservative side.)
2-level Sort-Merge. Consider a simple sort-merge algorithm, which reads the data from a file sorted on one of the dimensions and performs the join test on all pairs of points whose values in the sort dimension are closer than ε. We implemented a more sophisticated version of this algorithm, which reads a 2ε chunk of the sorted data into memory, further sorts this data in memory on a second dimension, and then performs the join test on pairs of points whose values in the second sort dimension are closer than ε. The algorithm then drops the first ε chunk from memory and reads the next ε chunk, and so on. The execution times reported for this algorithm also do not include the external sort time.
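The in-memory part of this scheme, for one 2ε chunk already delimited on the first sort dimension, can be sketched as follows (ours); d2 is the second sort dimension, and successive chunks are read from the sorted file as in the windowed loop shown earlier.

# Sketch: 2-level sort-merge within one 2*eps chunk already filtered on the first dimension.
def two_level_chunk_join(chunk, d2, eps, dist, out):
    pts = sorted(chunk, key=lambda p: p[d2])    # second-level sort
    for i, p in enumerate(pts):
        k = i + 1
        while k < len(pts) and pts[k][d2] - p[d2] <= eps:
            if dist(p, pts[k]) <= eps:
                out.append((p, pts[k]))
            k += 1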
Table 1 summarizes the costs included in the execution times for each algorithm.
4.2 Data Sets and Performance Metrics
Synthetic Datasets. We generated two types of synthetic
datasets: uniform and gaussian. The values in each dimen-
                         ε-kdB tree   R+ tree   2-level Sort-Merge
Join Cost                    Yes        Yes            Yes
Build Cost                   Yes        No              -
Sort Cost (first dim.)       No         No             No
Table 1. Costs included in the execution times.
Parameter              Default Value   Range of Values
Number of Points       100,000         10,000 to 1 million
Number of Dimensions   10              4 to 28
ε (join distance)      0.1             0.01 to 0.2
Range of Points        -1 to +1        -same-
Distance Metric        L2 norm         L1, L2, L∞ norms
Table 2. Synthetic Data Parameters
sion were randomly generated in the range -1.0 to 1.0 with
either a uniform or Gaussian distribution. For the Gaussian
distribution, the mean and the standard deviation were 0
and 0.25 respectively. Table 2 shows the parameters for
the datasets, along with their default values and the range
of values for which we conducted experiments.
Distance Functions. We used L1, L2 and L∞ as distance functions in our experiments. The extended bounding rectangles obtained by extending MBRs by ε differ slightly in the R+ tree depending on the distance function. Figure 7 shows the extended bounding regions for the L1, L2 and L∞ norms. The rectangles with solid lines represent the MBR of a leaf node and the dashed lines the extended bounding regions. This difference in the regions covered by the extended regions may result in a slightly different number of intersecting leaf nodes for a given leaf node. However, in the R-tree family of spatial indices, the selection query is usually represented by rectangles to reduce the cost of traversing the index. Thus, the extended bounding rectangles used to traverse the index for both L1 and L2 become the same as that for L∞.
4.3 Results on Synthetic Data
Distance Metric. We first experimented with varying ε for the L1, L2 and L∞ norms. The relative performance of the algorithms is almost identical for the three distance metrics (see [20]). We only show the results for the L2 norm in the remaining experiments.
Figure 7. Bounding Regions extended by ε: (a) L1; (b) L2; (c) L∞.
Figure 8. Performance on Synthetic Data: ε Value. (Execution time vs. ε for the 2-level Sort-merge, R+ tree and ε-kdB tree, on uniform and Gaussian distributions.)
ffl value. Figure 8 shows the results of varying ffl from 0.01
to 0.2, for both uniform and gaussian data distributions. L 2
is used as distance metric. We did not explore the behavior
of the algorithms for ffl greater than 0.2 since the join result
becomes too large to be meaningful. Note that the execution
times are shown on a log scale. The ffl-kdB tree algorithm
is typically around 2 to 20 times faster than the other algo-
rithms. For low values of ffl (0.01), the 2-level sort-merge
algorithm is quite effective. In fact, the sort-merge algorithm
and the ffl-kdB algorithm do almost the same actions,
since the ffl-kdB will only have around 2 levels (excluding
the root). For the gaussian distribution, the performance
gap between the ffl-kdB tree and the R + tree narrows for
high values of ffl because the join result is very large.
Figure 9. Performance on Synthetic Data: Number of Dimensions. (Execution time vs. number of dimensions for the 2-level Sort-merge, R+ tree and ε-kdB tree, on uniform and Gaussian distributions.)
Number of Dimensions. Figure 9 shows the results of increasing
the number of dimensions from 4 to 28. Again,
the execution times are shown using a log scale. The ffl-kdB
algorithm is around 5 to 19 times faster than the sort-merge
algorithm. For 8 dimensions or higher, it is around 3 to 47
times faster than the R + tree, the performance gap increasing
with the number of dimensions. For 4 dimensions, it
is only slightly faster, since there are enough points for the
ffl-kdB tree to be filled in all dimensions.
For the R + tree, increasing the number of dimensions
increases the overhead of traversing the index, as well as the
number of neighboring leaf nodes and the cost of screening
them. Hence the time increases dramatically when going
from 4 to 28 dimensions. 3 Even the sort-merge algorithm
performs better than the R + tree at higher dimensions. In
3 The dip in the R + tree execution time when going from 4 to 8 dimension
for the gaussian distribution is because of the decrease in join result
size. This effect is also noticeable for the ffl-kdB tree, for both distributions.
Figure 10. Performance on Synthetic Data: Number of Points. (Execution time vs. number of points for the 2-level Sort-merge, R+ tree and ε-kdB tree, on uniform and Gaussian distributions.)
contrast, the execution time for the ffl-kdB remains roughly
constant as the number of dimensions increases.
Number of Points. To study the scale-up of the ε-kdB tree, we varied the number of points from 10,000 to 1,000,000. The results are shown in Figure 10. For the R+ tree, we do not show results for 1,000,000 points because the tree no longer fit in main memory. None of the algorithms has linear scale-up, but the sort-merge algorithm has somewhat worse scale-up than the other two algorithms. For the Gaussian distribution,
the performance advantage of the ffl-kdB tree compared to
the R + tree remains fairly constant (as a percentage). For
the uniform distribution, the relative performance advantage
of the ffl-kdB tree varies since the average depth of the ffl-kdB
tree does not increase gradually as the number of points in-
creases. Rather, it jumps suddenly, from around 3 to around
4, etc. These transitions occur between 20,000 and 50,000
points, and between 500,000 and 750,000 points.
Figure 11. Non-self-joins. (Execution time vs. the ratio of the sizes of the two data sets, for the R+ tree and ε-kdB tree, on uniform and Gaussian distributions.)
Non-self-joins. Figure 11 shows the execution times for
a similarity join between two different datasets (generated
with different random seeds). The size of one of the datasets
was fixed at 100,000 points, and the size of the other dataset
was varied from 100,000 points down to 5,000 points. For
experiments where the second dataset had 10,000 points or
fewer, each experiment was run 5 times with different random
seeds for the second dataset and the results averaged.
With both datasets at 100,000 points, the performance gap
between the R + tree and the ffl-kdB tree is similar to that on
a self-join with 200,000 points. As the size of the second
dataset decreases, the performance gap also decreases. The
reason is that the time to build the index is included for the
ffl-kdB tree, but not for the R + tree.
4.4 Experiment with a Real-life Data Set
We experimented with the following real-life dataset.
Similar Time Sequences Consider the problem of finding
similar time sequences. The algorithm proposed in [2]
first finds similar "atomic" subsequences, and then stitches
together the atomic subsequence matches to get similar sub-sequences
or similar sequences. Each sequence is broken
into atomic subsequences by using a sliding window of size
w. The atomic subsequences are then mapped to points
in a w-dimensional space. The problem of finding similar
atomic subsequences now corresponds to the problem of
finding pairs of w-dimensional points within ffl distance of
each other, using the L∞ norm. (See [2] for the rationale behind this approach.)
Figure 12. Performance on Mutual Fund Data. (Execution time vs. ε and vs. window size for the 2-level Sort-merge, R+ tree and ε-kdB tree.)
The time sequences in our experiment were the daily
closing prices of 795 U.S. mutual funds, from Jan 4, 1993
to March 3, 1995. There were around 400,000 points
for the experiment (since each sequence is broken using
a sliding window). The data was obtained from the MIT
AI Laboratories' Experimental Stock Market Data Server
(http://www.ai.mit.edu/stocks/mf.html). We varied the window
size (i.e. dimension) from 8 to 16 and ffl from 0.05
to 0.2.
Figure 12 shows the resulting execution times for the three algorithms. The results are quite similar to those obtained on the synthetic dataset, with the ε-kdB tree outperforming the other two algorithms.
4.5 Summary
The ε-kdB tree was typically 2 to 47 times faster than the R+ tree for self-joins, with the performance gap increasing with the number of dimensions. It was typically 2 to 20 times faster than the sort-merge. The 2-level sort-merge was usually slower than the R+ tree, but for high dimensions or low values of ε (0.01) it was faster than the R+ tree.
For non-self-joins, the results were similar when the
datasets being joined were not of very different sizes. For
datasets with different sizes (e.g. 1:10 ratio), the ffl-kdB tree
was still faster than the R + tree. But the performance gap
narrowed since we include the build time for the ffl-kdB tree,
but not for the R + tree.
The distance metric did not significantly affect the re-
sults: the relative performance of the algorithms was almost
identical for the L 1 , L 2 and L1 norms.
5 Conclusions
We presented a new algorithm and an index structure,
called the ffl-kdB tree, for fast spatial similarity joins on
high-dimensional points. Such similarity joins are needed
in many emerging data mining applications. The new index
structure reduces the number of neighbor leaf nodes that are
considered for the join test, as well as the traversal cost of
finding appropriate branches in the internal nodes. The storage
cost for internal nodes is independent of the number of
dimensions. Hence it scales to high-dimensional data.
We studied the performance of ffl-kdB tree using both
synthetic and real-life datasets. The join time for the ffl-kdB
tree was 2 to an order of magnitude less than the join time
for the R + tree on these datasets, with the performance gap
increasing with the number of dimensions. We have also
analyzed the number of join and screen tests for the ffl-kdB
tree and the R + tree. The analysis showed that the ffl-kdB
tree will perform considerably better for high-dimensional
points. This analysis can be found in [20].
Given the popularity of the R-tree family of index struc-
tures, we have also studied how the ideas of the ffl-kdB tree
can be grafted to the R-tree family. We found that the resulting
"biased R-tree" performs much better than the R-tree for
high-dimensional similarity joins, but the ffl-kdB tree still
did better. The details of this study can be found in [20].
--R
Efficient similarity search in sequence databases.
Fast similarity search in the presence of noise
QBISM: A prototype 3-d medical image database system
Multidimensional binary search trees used for associative searching.
Fastmap: A fast algorithm for indexing
R-trees: a dynamic index structure for spatial searching.
A retrieval technique for similar shapes.
Spatial joins using seeded trees.
Multimedia information systems: the unfolding of a reality.
The qbic project: Querying images by content using color
The grid file: an adaptable
Partition Based Spatial-Merge Join
The Design and Analysis of Spatial Data Structures.
The ffl-kdb tree: A fast index structure for high-dimensional similarity joins
Warping 3d models for interbrain comparisons.
The input-state space approach to the prediction of auroral geomagnetic activity from solar wind variables
--TR | similar time sequences;data mining;similarity join |
628213 | Production Systems with Negation as Failure. | We study action rule-based systems with two forms of negation, namely classical negation and negation as failure to find a course of actions. We show by several examples that adding negation as failure to such systems increases their expressiveness in the sense that real life problems can be represented in a natural and simple way. Then, we address the problem of providing a formal declarative semantics to these extended systems by adopting an argumentation-based approach which has been shown to be a simple unifying framework for understanding the declarative semantics of various nonmonotonic formalisms. In this way, we naturally define the grounded (well-founded), stable, and preferred semantics for production systems with negation as failure. Next, we characterize the class of stratified production systems, which enjoy the properties that the above mentioned semantics coincide and that negation as failure to find a course of actions can be computed by a simple bottom-up operator. Stratified production systems can be implemented on top of conventional production systems in two ways. The first way corresponds to the understanding of stratification as a form of priority assignment between rules. We show that this implementation, though sound, is not complete in the general case. Hence, we propose a second implementation by means of an algorithm which transforms a finite stratified production system into a classical one. This is a sound and complete implementation, though computationally hard, as shown in the paper. | This is a sound and complete implementation, though computationally
hard as shown in the paper.
Keywords: rule-based systems, knowledge-based systems, rule-based process-
ing, expert systems, knowledge representation
Note: This paper is a revised and extended version of [9].
1 Introduction and Motivations
In this section we first give examples to motivate the extension of the production
systems paradigm [17] by the introduction of negation as failure (to find a course
of actions). We then discuss its role as a specification mechanism for reactive
systems.
1.1 On the need for negation as failure in production systems
Example 1.1
Imagine the situation of a person doing his household work. Clothes have to
be washed and the person has two options, either hand washing or machine
washing. If there is machine powder in house, then machine washing can take
place. This is represented by the production rule
If no machine powder is in house, then it can be acquired by either buying it
in the shop (provided the shops are open) or by borrowing it from the neighbor
(if he is in). The rules for acquiring powder can be represented by the following
two classical production rules
Neighbor-In then borrow.
Of course, hand washing is undesirable and will be taken up if there is no way
to acquire machine powder . The naive representation of this rule using classical
negation
is clearly not correct, since the meaning of such a rule is that if there is no
machine powder in house at the current state, then the clothes should be hand
washed, while the intuitive meaning of \there is no way to acquire machine pow-
der" is that there is no course of actions starting from the current state leading
to acquiring machine powder. Hence in a state where there is no machine powder
in house and the neighbor is in, the above naive representation would allow
hand washing though there is a way to acquire machine powder by borrowing
it from the neighbor. Hence it fails to capture the intuitive understanding of
the problem.
Here we need to use a different kind of negation, called negation as failure
(to find a course of actions) and denoted by the operator not. The previous
naive representation is now replaced by
if not Powder then hand-wash.
Clearly, there are other ways of representing this situation which do not use
negation as failure at all, as we will see in the next section. We will argue,
however, that the representation using negation as failure provides a better
specification for the problem at hand. 2
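To fix intuition, such a rule can be written down as plain data, keeping the classical conditions separate from the negation-as-failure conditions. The encoding below (ours, not the paper's notation) shows rule r4 only; the other rules of the example are left unspecified.

# Sketch: a general production rule as a (positive, negative, naf, action) quadruple.
# Rule r4 of Example 1.1: "if not Powder then hand-wash".
r4 = (frozenset(),             # classical positive conditions
      frozenset(),             # classical negative conditions (atoms required absent)
      frozenset({"Powder"}),   # naf-conditions: "not Powder" (no course of actions yields Powder)
      "hand-wash")             # action to fire
# The classical rules of the example (machine washing, buying, borrowing powder)
# would use the same shape with an empty naf component.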
It is not difficult to find other real life situations governed by rules with negation
as failure.
Example 1.2
Consider the rules for reviewing the work of faculty at the end of each academic
year in a university. The first rule specifies the conditions for offering tenure to
assistant professors. It states that assistant professors with good publications
and with a working experience of at least five years should be offered tenure.
This rule could be formalized by:
if Assistant-Prof(X), Good-Pub(X), Work-at-least-5(X) then offer-tenure(X).
The second rule states that if an assistant professor has no prospect of getting
tenure then fire him. Though the intuitive meaning of this rule is clear, it
is not possible to represent it as a classical production rule, since the premises
of a classical production rule represent conditions which must be satisfied in
the current state of the world, while the premises of the second rule represent a
projection into the future. It says that if there is no possibility for an assistant
professor to get tenure in the future then sack him now. In other words, the
rule says that if an assistant professor will fail in all possible courses of actions in the future
to get tenure, then fire him. To represent this rule, we use again negation as
failure to find a course of actions. The second rule can then be represented as
follows:
if Assistant-Prof(X), not Getting-Tenure(X) then fire(X) 2
In real life, we often find ourselves in situations where we have to deal with risky
or undesirable actions. For example, a doctor may have to take the decision of
cutting off the foot of his patient due to some severe frostbite. This is a very risky,
undesirable decision, and the common-sense rule specifying the conditions for
taking this action is that the doctor is allowed to cut only if there is no other way to
save the foot of the patient. This can be represented, using negation as failure,
as follows:
if not Save then cut.
Finally, we can expect that in real life, intelligent systems could be employed
to satisfy multiple goals. These goals can have different priorities, and negation
as failure (to find a course of actions) can be used to represent these priorities,
as in the following example.
Example 1.3
Consider a robot fire fighter that should be sent into a fire to save lives and
property. The priority here is certainly saving lives first. Imagine now that
the robot is standing before a valuable artifact. Should it take it and get out
of the fire? The answer should be yes only if the robot is certain that there is
nothing it can do to save any life. The rule can be represented as follows
if Artifact(X), In-Danger(X), not Human-Found then save(X)
Note that not Human-Found here means that no human being could be
found in the current and all other possible states of the world reachable by
firing a sequence of actions the robot is enabled to perform. 2
1.2 Negation as failure as a specification mechanism for reactive systems
Let us consider again example 1.1. Checking whether the conditions of the rule
if not Powder then hand-wash
are satisfied in the current state involves checking whether there is any way to
acquire machine powder from the current state, a process which could be time
consuming and expensive. In a concrete application, as in our example where
there are only two ways to get powder (buying it in the shop or borrowing it
from the neighbor), negation as failure can be "compiled" into classical negation
to produce a more efficient rule:
Neighbor-In then hand-wash.
However, the environment in which a production system with the above rule is
applied can change. For example, you may get a new neighbor who may not
have any interest in good relations with other people, and so you will not be able
to borrow anything from him. Hence the rule for borrowing must be dropped.
Consequently the above production rule must be revised to
It is clear that the rule r4 with negation as failure is still correct and serves as
a specification for checking the correctness of the new rule.
The point we want to make here is that in many cases, though negation as
failure is not employed directly, it can be used as a specification mechanism for
a classical production system. This situation is encountered quite often in
real life. Imagine the work of a physician in an emergency, dealing with a
patient who is severely injured in a road accident. In such cases, where time
is crucial, what a doctor would do is to follow certain treatments he has been
taught to apply in such situations. He more or less simply reacts depending
on the physical condition of the patient. The treatment may even suggest a
fateful decision to operate on the patient and cut some of his organs.
Now it is clear that such treatments change according to the progress of
medical science. One treatment which was correct yesterday may be wrong
today. So what decides the correctness of such treatments? We can think of
such treatments in a simplified way as a set of production rules telling the
doctors what to do in a concrete state of a patient. The correctness of such
rules is determined by common-sense principles like: operate and cut an
organ only if there is no other way to save the patient. And such a principle
can be expressed using negation as failure.
As we already pointed out, negation as failure can be seen as a mechanism to
specify priorities between different goals. For example, in the robot fire-fighter
example 1.3, negation as failure is used to give the goal of saving humans a
higher priority than the goal of saving artifacts.
Explicit priorities between rules are often used in production systems and active
database systems [2, 16, 11] to influence the way rules are executed. In such
systems, whenever different rules can be triggered in a state, the rules which
have higher priority are triggered first. Clearly, the notion of priority that
negation as failure induces is different from the one used in classical production
and active database systems, as the former has to do with goals whereas the
latter has to do with rules which are employed to implement goals. Moreover,
it is often difficult to understand declaratively why a rule should have higher
priority than another rule. We believe that in many cases negation as failure
can be used as a high-level tool to specify the (implicit) priority between goals,
which could then be implemented by defining explicit priorities between rules.
1.3 Aim of this work
We have seen in the examples that using negation as failure in production
systems allows one to naturally and correctly represent many real life problems.
The main aim of this paper, which is an extended and revised version of [9],
is to provide a declarative semantics to production systems where two kinds of
negation are used: classical negation and negation as failure (to find a course
of actions). In this respect, we show that the argumentation-based approach
[8], which has been successfully adopted to understand logic programming with
negation as failure as well as many other non-monotonic formalisms, can also
be adopted to provide a natural and simple declarative semantics to production
systems with two kinds of negation. The basic idea is that negation as failure
literals, such as "not Powder" in example 1.1, represent assumptions underlying
potential computations of a production system. The intuitive meaning of such
an assumption is that the computation goes on by assuming that there is no
course of actions (i.e. computation) from the current state of the world leading
to a state which defeats the assumption itself. Referring back to example 1.1,
assuming not Powder corresponds to assuming that from the current state there
is no course of actions leading to a state where machine powder is in house.
A computation which is supported by a sequence of assumptions is plausible
(acceptable) if its underlying assumptions cannot be defeated by actually finding
a course of actions which defeats them.
These informal, intuitive notions can be formalized by viewing a production
system as an argumentation system along the lines of [8]. This provides us with
many natural semantics, such as the grounded (well-founded), the preferred
and the stable semantics [8]. These semantics are arguably the most popular
and widely accepted semantics for non-monotonic and common-sense reasoning
in the literature [15, 4, 19, 27].
Moreover, we address the problem of actually computing negation as failure.
In this respect, we introduce the class of stratified production systems, where
negation as failure can be computed using a simple bottom-up operator. As for
the case of general stratified argumentation systems, stratified production systems
enjoy the property that all the previously mentioned semantics (grounded,
preferred and stable) coincide.
We show that classical production systems with explicit priorities between rules
can be used to obtain a straightforward sound implementation of stratified production
systems with negation as failure. We will also show that a complete
implementation requires a more sophisticated method of compiling away negation
as failure, even for the class of stratified production systems. This method
yields a classical production system, but its complexity, in the worst case, is
not polynomial in the number of atoms occurring in the production system.
The rest of the paper is structured as follows. In Section 2 we introduce the
basic notations and terminology we use for classical production rule systems.
In Section 3 we extend production systems to general production systems where
negation as failure can be used in the condition part of the rules, and we provide
them with an argumentation-based semantics (argumentation systems are
briefly reviewed in Section 3.1). In Section 4 we address the problem of computing
with negation as failure and introduce the class of stratified production
systems, the semantics of which can be characterized by a simple bottom-up
operator. In Section 5 we discuss two ways of implementing stratified production
systems, along the lines mentioned above. Finally, in Section 6 we address
some open issues and future work.
2 Preliminaries: Classical production rule systems
We introduce here the notations and basic terminology we are going to use in
the following. The production systems language we use is similar to classical
ones (see, e.g. [12]). We assume a first order language L representing the ontology
used to describe the domain of interest. A state of the world is interpreted
as a snapshot of this world, hence is represented as a Herbrand interpretation
of L, i.e. as a set of ground atoms of L. The set of states is denoted by Stat.
Further we assume that a set of primitive actions A is given. The semantics
(effect) of actions is described by the function
effect : A × Stat → Stat.
A production rule is a rule of the form
if l1, ..., ln then a
where l1, ..., ln are ground literals of L and a is an action in A. The conditions
(resp. action) of a rule r will be referred to by cond(r) (resp. action(r)). A
production system P is a set of production rules.
A production rule if l1, ..., ln then a is said to be applicable in a state S iff
the conditions l1, ..., ln are true in S, i.e. S ⊨ l1 ∧ ... ∧ ln.
Definition 2.1 (Computations)
A computation C of a production system P is a sequence
S0 →r1 S1 →r2 ... →rn Sn, with n ≥ 0,
such that the Si's are states, the ri's are production rules in P and, for each i ≥ 1, ri is
applicable in Si-1 and Si = effect(action(ri), Si-1). S0 will be
referred to as initial(C) and Sn as final(C). 2
Note that, if n = 0, then the computation is an empty computation.
Definition 2.2 (Complete Computations)
A computation C = S0 →r1 ... →rn Sn
is called a complete computation if there is no production rule in P which is
applicable in Sn. 2
The behavior of a production system P can be defined as the set of pairs of
states (S, S') such that S (resp. S') is the initial (resp. final) state of some
complete computation of P. This is formalized in the next definition.
Definition 2.3 (Input-Output semantics)
For a production system P, the input-output semantics of P is defined by
IO(P) = {(initial(C), final(C)) | C is a complete computation of P}. 2
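For a finite ground production system, Definitions 2.1-2.3 can be executed directly. The sketch below (ours, not from the paper) represents a rule as a (positive, negative, naf, action) quadruple as in the earlier sketch (classical rules have an empty naf component), takes the effect function as a parameter, and enumerates the final states of complete computations by a memoized search; cycles are simply cut off, which is only an approximation when infinite computations exist.

# Sketch: IO semantics of a finite ground production system.
def applicable(rule, state):
    pos, neg, _naf, _action = rule
    return pos <= state and not (neg & state)       # classical conditions hold in the state

def complete_finals(rules, effect, state, cache=None):
    """Final states of complete computations starting from `state`."""
    cache = {} if cache is None else cache
    key = frozenset(state)
    if key in cache:
        return cache[key]
    cache[key] = set()                              # provisional entry: cycles contribute no finals
    fired = [r for r in rules if applicable(r, state)]
    if not fired:
        result = {key}                              # no applicable rule: the computation is complete
    else:
        result = set()
        for r in fired:                             # nondeterministic choice among applicable rules
            result |= complete_finals(rules, effect, effect(r[3], set(key)), cache)
    cache[key] = result
    return result

def io_semantics(rules, effect, initial_states):
    return {(frozenset(s), f)
            for s in initial_states
            for f in complete_finals(rules, effect, set(s))}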
Even though we have considered only ground production rules, our approach can
be easily adapted to production systems in which rules may contain variables.
Given such a collection of rules, we consider all their possible ground instances,
obtained by replacing the variables occurring in them by any ground term. This
approach is usually adopted in the study of the semantics of logic programming
[15, 27, 7].
It is worth noting that the above semantics reflects the inherent nondeterministic
nature of production systems. It is indeed natural to expect that from an
initial state S there may exist many different computations leading to possibly
different final states. This means that, for a given S, there might be many different
pairs (S, S') in the IO semantics. This nondeterminism arises naturally
in the representation of real life problems through production rules. Referring
back to example 1.1, if both the shops are open and the neighbor is in, there
are two possible ways of acquiring machine powder, which are both plausible,
i.e. buying it from the shop or borrowing it from the neighbor. Clearly the
corresponding computations of the production system are both plausible and
there is no intuitive reason to prefer one to the other.
The need for nondeterministic rule based languages has been pointed out by
many authors (see, e.g. [1]). Indeed, nondeterministic rule based languages have
been mainly studied with respect to their expressive power and computational
complexity as opposed to purely deterministic languages. In this paper, we
argue that nondeterminism is needed to naturally represent real life problems.
Some work in the literature (see, e.g. [13, 24]) has also been devoted to defining
(operational) semantics for classical production systems, in such a way that
nondeterminism is avoided by adopting an ad hoc computational mechanism
which (either implicitly or explicitly) assigns a sort of priority to the production
rules which can be fired in the same state. But the lack of an intuitive
motivation makes a full understanding of their technical results difficult.
3 Production rules with NAF
We introduce a new form of negation into the language L, denoted by not. A
general literal is now either a (classical) literal l or a naf-literal not l, where l
is a classical literal. For each classical literal l, the intuition of not l is that it
is not possible to find a course of actions to achieve l.
Definition 3.1 (General production rules)
A general production rule has the form
if l_1, ..., l_n then a
where each l_i is a ground general literal. 2
Given a general production rule r, the set of classical literals in r will be
referred to as cl-cond(r), and the set of naf-literals will be referred to as hyp(r).
A general production system (GPS) P is a set of general production rules.
A general production rule r is possibly applicable in a state S if S |= cl-cond(r).
Definition 3.2 (Possible Computations)
Given a GPS P, a possible computation in P is a sequence
S_0 --r_1--> S_1 --r_2--> ... --r_n--> S_n, n >= 0,
where the S_i's are states, the r_i's are general production rules in P and, for each
i >= 1, r_i is possibly applicable in S_{i-1} and S_i = effect(action(r_i), S_{i-1}).
We denote by C(P) the set of all possible computations of a GPS P. 2
Given a possible computation as above, the sequence <hyp(r_1), ..., hyp(r_n)> is
referred to as the sequence of hypotheses underlying the computation. The
basic idea in understanding the meaning of naf-conditions not l in general
production rules is to view them as hypotheses which can be assumed if there
is no possible course of actions to achieve l. So, intuitively we can say that a rule
is applicable in a state S if it is possibly applicable and each of its hypotheses
could be assumed. A computation is then an acceptable computation if each of
its rules is applicable. The whole problem here is to understand formally what
it means that there is no possible course of actions starting from a state S
to achieve some result l.
Let us consider again Example 1.1.
1 Notice that a rule r satisfying the condition S |= cl-cond(r) in a state S is not necessarily
applicable in S, since it is not clear whether its naf-conditions are satisfied in S.
Example 3.3
Let us first recall the production rules:
r_1: if not Machine-Powder then hand-wash
r_2: if Machine-Powder then machine-wash
r_3: if Shop-Open then buy-powder
r_4: if Neighbor-In then borrow-powder
The effects of the actions are specified below:
effect(hand-wash, S) = S ∪ {Clothes-Clean}
effect(machine-wash, S) = S ∪ {Clothes-Clean}
effect(buy-powder, S) = S ∪ {Machine-Powder}
effect(borrow-powder, S) = S ∪ {Machine-Powder}
Assume that in the initial state we have no powder, shops are closed and the
neighbor is in. This state is represented by the interpretation S_0 = {Neighbor-In}.
From this state there are three possible nonempty computations starting from
S_0, namely
C_1: S_0 --r_1--> {Neighbor-In, Clothes-Clean}
C_2: S_0 --r_4--> {Neighbor-In, Machine-Powder} --r_2--> {Neighbor-In, Machine-Powder, Clothes-Clean}
C_3: S_0 --r_4--> {Neighbor-In, Machine-Powder}
First, notice that both C_2 and C_3 are not based on any assumption. Our
common sense dictates that C_2 and C_3 represent acceptable courses of actions
from the initial state which lead to the common-sense result that clothes are
machine washed. Hence they both must be accepted as possible courses of actions.
On the other hand, C_1 is based on the assumption "not Machine-Powder",
meaning that C_1 assumes that there is no possible way to acquire the machine
powder. However, C_3 represents just one such possible way. Hence, C_3 represents
an attack against the assumption "not Machine-Powder". So C_3 can also
be viewed as an attack against the acceptability of C_1 as a legitimate computation.
On the other hand, both C_2 and C_3 are not based on any assumption,
hence there is no way they can be attacked. 2
This example points out that the semantics of GPSs is a form of argumentation
reasoning, where arguments are represented by possible computations. In the
following, we first recall the general notion of argumentation systems from [8]
and then we show that the natural semantics of GPSs can be defined using the
theory of argumentation.
3.1 Argumentation systems
Argumentation has been recognized lately as an important and natural approach
to nonmonotonic reasoning [5, 14, 18, 21, 22, 23, 26, 28]. It has been
shown [8] that many major nonmonotonic logics [20, 19, 25] represent in fact
different forms of a simple system of argumentation reasoning. Based on the
results in [8], a simple logic-based argumentation system has been developed
in [4] which captures well known nonmonotonic logics like autoepistemic logics,
Reiter's default logics and logic programming as special cases. In [14], argumentation
has been employed to give a proof procedure for conditional logics.
Argumentation has also been applied to give an elegant semantics for reasoning
with specificity in [10].
We review here the basic notions and definitions of argumentation systems
(the reader can refer to [8] for more details and for a discussion of the role of
argumentation systems in many fields of Artificial Intelligence).
An argumentation system is a pair <AR, attacks> where AR is the set of all
possible arguments and attacks ⊆ AR × AR, representing the attack relationship
between arguments. If the pair (A, B) ∈ attacks, then we say that A attacks B
or B is attacked by A. Moreover, A attacks a set of arguments H if A attacks
an argument B ∈ H.
A set H of arguments is conflict-free if no argument in H attacks H.
An argument A is defended by a set of arguments H if H attacks any attack
against A. We also say that H defends A if A is defended by H.
The basic notion which underlies all the semantics for argumentation systems
that we are going to review in the rest of this section is the following, intuitive
notion of acceptability of a set of arguments. A set H of arguments is acceptable
if it is conflict-free and it can defend each argument in it.
Let H be a set of arguments and let Def(H) be the set of all arguments which
are defended by H. It is not difficult to see that H is acceptable iff H ⊆ Def(H)
and H is conflict-free. Further it is easy to see that the operator Def
is monotonic. Hence the equation H = Def(H) has a least solution which is
also acceptable (following from the fact that Def(∅) is acceptable and if H is
acceptable then so is H ∪ Def(H)).
The various semantics for argumentation systems are basically solutions of
the above equation H = Def(H). In particular, the grounded (well-founded)
semantics of an argumentation system is the least solution of the equation
H = Def(H).
Another semantics for argumentation systems, called the preferred semantics,
is defined by the maximal acceptable sets of arguments. It is not difficult
to see that these sets are the maximal conflict-free solutions of the equation
H = Def(H). In general, preferred sets contain the grounded semantics, but
do not coincide with it. In the next section, we will give an example for this.
Finally, a popular semantics of non-monotonic reasoning and argumentation
systems is the stable semantics, defined as follows. A conflict-free set of arguments
H is said to be stable if it attacks each argument not belonging to it. It is
not difficult to see that each stable set of arguments is acceptable. Furthermore,
it is also easy to see that each stable set is preferred, hence it is a maximal,
conflict-free solution of the equation H = Def(H), but not vice versa.
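To make the fixpoint characterization concrete, here is a small Python sketch that computes the grounded semantics of a finite argumentation system by iterating Def from the empty set. It is only an illustration of the definitions above, with names of our own choosing.

```python
from typing import Set, Tuple

Arg = str
Attacks = Set[Tuple[Arg, Arg]]   # (a, b) means argument a attacks argument b

def defends(H: Set[Arg], a: Arg, attacks: Attacks) -> bool:
    """H defends a iff every attacker of a is attacked by some member of H."""
    attackers = {b for (b, x) in attacks if x == a}
    return all(any((h, b) in attacks for h in H) for b in attackers)

def Def(H: Set[Arg], args: Set[Arg], attacks: Attacks) -> Set[Arg]:
    """Def(H): all arguments defended by H."""
    return {a for a in args if defends(H, a, attacks)}

def grounded(args: Set[Arg], attacks: Attacks) -> Set[Arg]:
    """Least fixpoint of Def, i.e. the grounded (well-founded) semantics."""
    H: Set[Arg] = set()
    while True:
        nxt = Def(H, args, attacks)
        if nxt == H:
            return H
        H = nxt

# Example 3.10 in miniature: C1 and C2 attack each other, so neither is grounded.
print(grounded({"C1", "C2"}, {("C1", "C2"), ("C2", "C1")}))   # -> set()
```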
In [8] it has been shown that logic programming with negation as failure can
be seen as a form of argumentation systems. In this view, the various semantics
of argumentation systems presented above capture in a unifying framework
various well-known semantics for logic programming with negation as failure.
For instance, the grounded semantics corresponds to the well-founded semantics
in logic programming [27] and the stable semantics corresponds to stable
semantics of logic programming [15].
In the following we show that general production systems with two kinds of
negation are also a form of argumentation systems. A philosophical explanation
of this result can be seen in the fact that the computations of production systems
represent also a form of common-sense reasoning.
3.2 Computations as arguments
The semantics of a GPS P is defined by viewing it as an argumentation framework
<C(P), attacks>, where C(P) is the set of all possible computations of
P and the relation attacks is defined as follows.
Definition 3.4 (Attacks)
Let C be a possible computation S_0 --r_1--> S_1 --r_2--> ... --r_n--> S_n.
An attack against C is a possible computation C' such that initial(C') = S_i
for some i, and there exists an underlying assumption not l in hyp(r_{i+1}) such
that l holds in final(C'). 2
Remark 3.5 Empty computations cannot be attacked. Hence, empty computations
are contained in any semantics. 2
Notice that, in the above definition, the initial state of the attack C', which
defeats the assumption not l underlying C, has to be the actual state S_i in which
such an assumption was made. In other words, whether an assumption not l
can be defeated or not depends on the state in which this assumption is made
and on whether or not this state can lead by a computation to a state in which
l holds. Referring back to Example 3.3, it is easy to see that C_3 attacks C_1, by
defeating the assumption "not Machine-Powder" on which C_1 is based. This is
because such an assumption is made in the state S_0 and C_3 shows an alternative
course of actions leading S_0 to a state where Machine-Powder actually holds.
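The attack relation of Definition 3.4 can be checked mechanically once a possible computation records, for every step, the state in which each hypothesis was made. The following sketch (reusing the State type from the earlier sketch and, for simplicity, restricting hypotheses to positive atoms) is one way to phrase that check; the names are ours.

```python
from dataclasses import dataclass
from typing import FrozenSet, Tuple

@dataclass(frozen=True)
class PossibleComputation:
    states: Tuple[State, ...]           # S_0, S_1, ..., S_n
    hyps: Tuple[FrozenSet[str], ...]    # hyp(r_1), ..., hyp(r_n): atoms assumed unachievable

def attacks(c_prime: "PossibleComputation", c: "PossibleComputation") -> bool:
    """C' attacks C iff initial(C') = S_i for some i and some hypothesis not l
    made at step i+1 of C is defeated because l holds in final(C')."""
    final_cp = c_prime.states[-1]
    for i, hyp in enumerate(c.hyps):    # the hypotheses of step i+1 were made in state S_i
        if c_prime.states[0] != c.states[i]:
            continue
        if any(l in final_cp for l in hyp):
            return True
    return False
```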
Consider now an initial state S 0
among others, the fact that
there is no machine powder in house, shops are closed and the neighbor is not
in. It is clear that there is only one possible computation (apart from the empty
one) leading S 0
0 to the nal state fClothes-Cleang. This state represents the
fact that clothes have been hand washed. This computation is also based on
the assumption \not Machine-Powder" which is made in the state S 0
0 and hence
cannot be defeated (the powder cannot be bought since shops are closed neither
it can be borrowed since the neighbor is not in).
The view of a GPS as an argumentation system allows us to provide it with
three different semantics: grounded (well-founded), preferred and stable semantics.
Recall that, given a set H of arguments (i.e. possible computations),
Def(H) is the set of all arguments which are defended by H (see Section 3.1).
Definition 3.6 Let P be a GPS and K be a set of possible computations.
Then:
K is grounded if K is the least solution of the equation K = Def(K);
K is preferred if K is a maximal conflict-free solution of the equation K = Def(K);
K is stable if K is conflict-free and it attacks every possible computation
not in K. 2
Let us elaborate in detail on Example 3.3 to compute its grounded semantics.
Example 3.7
We compute the grounded semantics by iterating the operator Def. We use the
following abbreviations: MP for Machine-Powder, NI for Neighbor-In, CC for
Clothes-Clean, SO for Shop-Open, hw for the action hand-wash, mw for the
action machine-wash, bp for the action buy-powder and bop for the action
borrow-powder. Hence the rules of Example 3.3 become:
r_1: if not MP then hw
r_2: if MP then mw
r_3: if SO then bp
r_4: if NI then bop
As mentioned in the previous section, the grounded semantics is the least solution
of the equation K = Def(K). Let H_0 be the empty set of arguments
(computations). H_1 = Def(H_0) contains all the computations which
can never be attacked (H_0 cannot defend any computation). These are clearly
all the empty computations and all the non-empty computations which are not
based on any assumptions. Let S_so be any state such that S_so |= SO. Then the
following computations
(a) S_so --r_3--> S_so ∪ {MP}
(b) S_so --r_3--> S_so ∪ {MP} --r_2--> S_so ∪ {MP, CC}
belong to H_1, since both hyp(r_3) and hyp(r_2) are empty. Similarly, let S_ni be any
state such that S_ni |= NI. Then the following computations
(c) S_ni --r_4--> S_ni ∪ {MP}
(d) S_ni --r_4--> S_ni ∪ {MP} --r_2--> S_ni ∪ {MP, CC}
belong to H_1. Let us move now to H_2 = Def(H_1), and let
us consider now the possible computations with some underlying assumptions.
All such computations must use rule r_1, which is the only rule containing a
naf-literal in its premises. Let S_1 be any state such that S_1 |= cl-cond(r_1). Consider
then a computation C of the form
(e) S_1 --r_1--> S_1 ∪ {CC}
which is based on the assumption not MP. Now, if S_1 |= SO, there is
a computation of the form (a) above, which attacks this computation (since it
leads to a final state where MP holds) and belongs to H_1. Since H_1 is conflict-free,
it cannot defend the computation C. Similarly, if S_1 |= NI, H_1
cannot defend C. So, we conclude that the only computations in H_2 \ H_1 are
the computations of the form (e) such that S_1 |= ¬SO and S_1 |= ¬NI.
It is easy to see that H_2 is indeed a solution, hence the least solution, of the
equation K = Def(K) in this case. Moreover it is not difficult to see that, in
this example, the grounded, preferred and stable semantics coincide, and hence
H_2 is also stable and preferred. This is also a consequence of the fact that this
production system is stratified in the sense of Section 4, and we will show that
for stratified production systems all different semantics coincide. 2
It is easy to see that the following propositions hold.
Proposition 3.8
If a computation C attacks a computation C' and C' is a prefix of C'', then C
also attacks C''. 2
Proposition 3.9
Let H be a set of computations which is either the grounded, or a preferred, or
a stable set. For any C ∈ H, any prefix of C also belongs to H. 2
We give now some examples of general production systems, where the grounded,
preferred and stable semantics do not coincide.
Example 3.10
Let P_0 be the following GPS:
r_1: if ¬b, not a then assert(b)
r_2: if ¬a, not b then assert(a)
where the effect of assert(p) on a state S is adding p to S. Let S_0 = {}. It is
easy to see that the nonempty possible computations starting from S_0 are:
C_1 = S_0 --r_1--> {b} with underlying assumptions <{not a}>
C_1' = S_0 --r_1--> {b} --r_2--> {a, b} with underlying assumptions <{not a}, {not b}>
C_2 = S_0 --r_2--> {a} with underlying assumptions <{not b}>
C_2' = S_0 --r_2--> {a} --r_1--> {a, b} with underlying assumptions <{not b}, {not a}>
The empty computation starting from {b} attacks C_1' and the empty computation
starting from {a} attacks C_2'. On the other hand, an empty computation
cannot be attacked. Hence, C_1' and C_2' are not contained in any acceptable
set of computations. Furthermore, it is also clear that C_2 attacks C_1, since
C_1 is based on the assumption not a, meaning that there is no way to achieve
a starting from S_0. For similar reasons, C_1 also attacks C_2. Hence, the only
computation starting from S_0 which is contained in the grounded semantics is
the empty computation.
But there are two stable sets of computations, one containing C_2 and the other
containing C_1. This is because C_1 (resp. C_2) can defend itself against all
attacks. In this example preferred and stable semantics coincide.
Notice that rules similar to r_1 and r_2 may be needed to represent real world
situations. Imagine the situation of a team leader who needs to hire a person for
an important position and he has two applications for it, say Mary and Ann.
The leader asks his advisors to express their opinion. The first advisor likes
Ann, hence he says: "if there is no way you can hire Ann, then hire Mary".
This can be expressed by the rule
if ¬Mary hired, not Ann hired then Hire(Mary)
Notice that using negation as failure we capture here the intuition that there is
no possible course of actions to hire Ann. Imagine now that the second advisor
has exactly the opposite view, i.e. his opinion can be expressed as:
if ¬Ann hired, not Mary hired then Hire(Ann)
The team leader's reasoning is represented by two computations which correspond
to computations C_1 and C_2 above. 2
The next example shows that preferred sets may not be stable.
Example 3.11
Let P_1 be the following general production system:
r: if ¬a, not a then assert(a)
The intuition of this rule is that if a is not true in the current state and there
is no way to achieve a, then add it to the state. This is clearly a paradox, since
if there is no way to achieve a then the rule itself allows to achieve it. The only
nonempty possible computation starting from S_0 = {} is
C = S_0 --r--> {a}.
It is clear that C attacks itself. Hence, in this example the grounded and
the preferred semantics coincide, and contain only the empty computations.
Clearly, there is no stable set of computations. 2
We can now define the set of complete acceptable computations, with respect
to a selected semantics.
Definition 3.12 (Complete Computations)
Let P be a GPS and Sem be a selected semantics of P, i.e. Sem is either
the grounded, or a preferred, or a stable, set of possible computations. A
computation C ∈ Sem is called a Sem-complete computation if there exists no
other computation C' ∈ Sem such that C is a prefix of C'. 2
If all the semantics of a GPS coincide, we simply talk about complete computations
instead of Sem-complete computations. Referring back to Example
3.3, the only complete computation starting from S_0 is C_2.
The input-output semantics of classical production systems can be extended
to general production systems with respect to a selected (grounded, preferred,
stable) semantics.
Definition 3.13 (Input-Output semantics for GPS)
Let P be a GPS.
The grounded input-output semantics of P is defined by
IO_grounded(P) = {(initial(C), final(C)) | C is a grounded complete computation}.
Let Sem be a set of arguments which is preferred or stable. Then
IO_Sem(P) = {(initial(C), final(C)) | C is a Sem-complete computation of P}. 2

4 Stratified Production Systems
In this section we consider only special kinds of GPS, where the actions are
of two types, assert(p) and retract(p), where p is an atom. The effect of
assert(p) (resp. retract(p)) on a state S is adding (resp. removing) p to S
(resp. from S). Moreover, the rules have the following structure:
if l_1, ..., l_h, not l_{h+1}, ..., not l_k then assert(p)
if l_1, ..., l_h, not l_{h+1}, ..., not l_k then retract(p)
where the l_i's are classical literals. Rules of the first kind are called assert rules,
and rules of the second kind are called retract rules. If it is not important to
distinguish between assert and retract rules, we will simply write a rule as
if l_1, ..., l_h, not l_{h+1}, ..., not l_k then a.
Further, we define stratified GPSs in such a way that negation as failure can
be computed bottom-up.
In the following, given a classical literal l, we refer to the atom of l as l itself if it is
a positive atom, and as p if l is ¬p.
Definition 4.1 (Stratified Production Systems)
A GPS P is stratified if there exists a partition P_1, ..., P_n of its rules such
that the following conditions are satisfied. Let
if l_1, ..., l_h, not l_{h+1}, ..., not l_k then a
be a rule in P_j. Then
(i) for each l_i, 1 <= i <= h, each rule containing the atom of l_i in the head
must belong to the union of P_1, ..., P_j;
(ii) for each l_i, h < i <= k, each rule containing the atom of l_i in the
head must belong to the union of P_1, ..., P_{j-1}. 2
It is worth noting that the intuition underlying the above definition of stratified
GPSs is similar to the one underlying the definition of stratification in logic
programming [3].
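Under the reading of Definition 4.1 reconstructed above (where the "head" of a rule is the atom it asserts or retracts), a candidate stratification can be checked mechanically. The following Python sketch uses names of our own choosing and is only an illustration of the two conditions.

```python
from dataclasses import dataclass
from typing import List, Set, Tuple

@dataclass(frozen=True)
class StratRule:
    head: str                    # atom p of assert(p) / retract(p)
    pos_atoms: Tuple[str, ...]   # atoms of the classical conditions l_1, ..., l_h
    naf_atoms: Tuple[str, ...]   # atoms of the naf conditions not l_{h+1}, ..., not l_k

def is_stratification(strata: List[Set[StratRule]]) -> bool:
    """Check conditions (i) and (ii) of Definition 4.1 for a partition P_1, ..., P_n."""
    def defining_stratum(atom: str) -> int:
        # highest stratum index containing a rule whose head is `atom` (0 if none)
        return max((j + 1 for j, p in enumerate(strata)
                    for r in p if r.head == atom), default=0)

    for j, p_j in enumerate(strata, start=1):
        for r in p_j:
            if any(defining_stratum(a) > j for a in r.pos_atoms):    # (i): same or lower stratum
                return False
            if any(defining_stratum(a) >= j for a in r.naf_atoms):   # (ii): strictly lower stratum
                return False
    return True
```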
For stratified GPSs, the grounded, preferred and stable semantics coincide (see
Theorem 4.3). Moreover, this semantics can be computed in a bottom-up way,
by a simple operator S, which we are going to define next.
Let C be a possible computation S_0 --r_1--> S_1 --r_2--> ... --r_n--> S_n.
Then for any h, k such that 0 <= h < k <= n, the sequence S_h --r_{h+1}--> ... --r_k--> S_k
is called a subcomputation of C. Further, we denote by rules(C) the sequence r_1, ..., r_n.
Definition 4.2 Let P = P_1 ∪ ... ∪ P_n be a stratified GPS. Let S be the operator
defined as follows.
{ for each subcomputation C 0 of C if C
{ for each subcomputation of C of the form S
for each not l 2 hyp(r j ), there is no computation
Roughly speaking, the operator S formalizes the intuition that the acceptability
of a possible computation using rules in P_1 ∪ ... ∪ P_{j+1} depends only on
computations in P_1 ∪ ... ∪ P_j. Thus, the semantics of a stratified GPS P can
be computed bottom-up by iterating the operator S on the strata of P.
Theorem 4.3
Let P be a stratified GPS. Then the set of computations obtained by iterating
the operator S bottom-up over the strata of P is the grounded semantics of P,
is the unique preferred set of computations, and is the unique stable set of
computations of P.
Proof. See appendix. 2
5 Implementing Stratified PS
In this section we address the issue of implementing stratified PS by translating
stratified PS into classical PS. From now on, we restrict ourselves to production
systems where the collection of (the ground instances of) rules is finite.
Let us first introduce the notion of implementation of a production system. Let
P, P' be two production systems. We say that P' is a sound implementation
of P if the following two conditions hold:
(i) for each computation C' of P' there exists a computation C of P such
that initial(C') = initial(C) and final(C') = final(C);
(ii) for each complete computation C' of P' there exists a complete computation
C of P such that initial(C') = initial(C) and final(C') = final(C).
If P' is a sound implementation of P, we say it is also complete if the reverse
of conditions (i) and (ii) above hold, namely:
(iii) for each computation C of P there exists a computation C' of P' such
that initial(C) = initial(C') and final(C) = final(C');
(iv) for each complete computation C of P there exists a complete computation
C' of P' such that initial(C) = initial(C') and final(C) = final(C').
In the next section we introduce the class of prioritized production systems (PPS),
where rules can have different priorities, and define accordingly a suitable notion
of computation. We show that viewing stratification as a priority assignment
to rules (the lower the stratum, the higher the priority) yields a sound implementation
of a stratified GPS through a PPS. However, we will see by means of a
simple example that this implementation is not complete in the general case.
This means that the priorities induced by stratification are not powerful enough
to completely capture the implicit priorities induced by the negation as failure
mechanism. In order to obtain a sound and complete implementation, we need
a more sophisticated method which compiles away negation as failure from a
stratified GPS and yields a classical production system with no priorities at
all. We show that the classical PS obtained by this transformation is indeed a
sound and complete implementation of the original stratified PS. However, the
transformation, in the worst case, is not polynomial in the number of atoms of
the production system.
5.1 Implementing negation as failure with priorities
Let us introduce the class of PPS, namely classical production systems where
rules can be assigned different priorities. This type of system has been extensively
studied in the literature [11].
Definition 5.1 A prioritized production system (PPS) is a pair <P, ≺> where
P is a classical PS and ≺ is a partial order relation between rules (a partial
order is an irreflexive, transitive and asymmetric relation). 2
In the sequel, if r ≺ r', we will say that r has higher priority than r'. Notice
that a classical PS is a PPS with ≺ being the empty relation.
The priority relation between rules affects the applicability of rules.
Definition 5.2 Let <P, ≺> be a PPS. A production rule if l_1, ..., l_n then a is
applicable in a state S iff S |= l_1 and ... and l_n and there is no rule r' ≺ r such that
r' is applicable in S. 2
The notion of computations in a PPS is the same as the one given in Definition 2.1,
provided that applicability of rules is understood as in the previous definition.
Example 5.3 Let us formulate the washing machine Example 3.3 as a PPS.
Let us first recall the production rules, where negation as failure in the first rule
has been replaced by classical negation:
r_1: if ¬Machine-Powder then hand-wash
r_2: if Machine-Powder then machine-wash
r_3: if Shop-Open then buy-powder
r_4: if Neighbor-In then borrow-powder
To correctly represent the fact that hand washing has lower priority
than machine washing we can add the following priority relation between rules:
r_2 ≺ r_1, r_3 ≺ r_1, r_4 ≺ r_1.
In this way, rule r_1 cannot be applied in states where there is no machine powder
but either the shops are open or the neighbor is in, thus achieving the desired
behavior. 2
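A prioritized applicability test in the sense of Definition 5.2 can be sketched as follows, reusing the plain Rule/State helpers from the earlier sketch; `higher` is an assumed mapping from each rule name to the names of the rules with strictly higher priority, and all names here are our own.

```python
from typing import Dict, List, Set

def pps_applicable(rule: Rule, state: State,
                   rules: List[Rule], higher: Dict[str, Set[str]]) -> bool:
    """Definition 5.2: r is applicable iff its conditions hold and no
    strictly higher-priority rule is (classically) applicable in the same state."""
    if not applicable(rule, state):
        return False
    return not any(applicable(r, state)
                   for r in rules if r.name in higher.get(rule.name, set()))
```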
Let now P be a stratified GPS. The idea is to view stratification as an implicit
assignment of priorities between rules: making this assignment explicit allows
us to get rid of negation as failure and translate it directly into classical negation.
We first need to introduce some useful notation.
Let l be a naf literal. Then we denote by l* the classical literal
¬a, if l = not a
a, if l = not ¬a.
Moreover, given a general production rule r, we denote by r* the rule obtained
from it by replacing each naf-literal l by l*.
Definition 5.4 Let P = P_1 ∪ ... ∪ P_n be a stratified GPS. Its prioritized
form is the PPS <P*, ≺> obtained by replacing each rule r of P by r* and by
defining ≺ as follows: r* ≺ r'* iff r ∈ P_i, r' ∈ P_j and i < j. 2
The following theorem shows that, given a stratified GPS P, <P*, ≺> is a sound
implementation of it.
Theorem 5.5 Let P be a stratified GPS and <P*, ≺> its prioritized form. Then
<P*, ≺> is a sound implementation of P, i.e.
(i) for each computation C in <P*, ≺> there exists a computation C' in P
such that initial(C) = initial(C') and final(C) = final(C');
(ii) for each complete computation C in <P*, ≺> there exists a complete computation
C' in P such that initial(C) = initial(C') and
final(C) = final(C').
The following example shows that the implementation we have just given is not
complete in the general case.
Example 5.6 Consider the following
and its stratication g.
Consider the following complete computation
The prioritized form of P is given by the following PPS hP ; i:
r
r
r
r
where ≺ is defined by r*_i ≺ r*_1, for each i = 2, ..., 4. It is not difficult to see that
there is no complete computation in <P*, ≺> starting from the empty state and
ending in the state {a, c, d}. This is due to the fact that r*_3 and r*_4 are both rules
with highest priority and hence they are fired before r*_1. Since the effect of r*_4 is
adding {d} to the state, this will prevent r*_1 from being applicable.
The point here is that the naf literal not b in r_1 induces a priority between
rules which is different from the naive one obtained by the given stratification.
Indeed, not b intuitively means that there is no way to achieve b and, given
a state where all classical conditions of r_1 are satisfied, this can be ensured
by enforcing c to be true, i.e. by making r_2, the only rule for asserting b, not
applicable. This can be obtained in two ways: either by enforcing a suitable
additional priority over r*_1 or by transforming r_1 into a classical rule in which
the naf literal not b is replaced by the condition c.
We thus need a deeper understanding of stratification, generalizing the intuition
sketched in the above example which allowed us to compile away negation as
failure in rule r_1. In the next section we propose such a transformation, which,
given a stratified GPS P, yields a classical PS P' which is a sound and complete
implementation of P. However, in order to get a complete implementation we
have to pay a high price in terms of the complexity of the transformation, as
shown in Section 5.3.
5.2 Compiling away negation as failure
Let us rst introduce the notion of incomplete state and computations with
respect to incomplete states.
Definition 5.7 An incomplete state is a consistent set of ground literals of L.
A production rule if l_1, ..., l_n then a is said to be applicable in an incomplete
state I iff the conditions l_1, ..., l_n are true in I, i.e. I |= l_1 and ... and l_n. 2
Often it is not necessary to have complete information about the state of the
world to carry out some computations. Indeed, if a computation relies only on
classical production rules, the following notion of computation, which we refer
to as partial computation, is sufficient.
Definition 5.8 (Partial Computation)
Let I_0, ..., I_n be incomplete states, and the r_i's be production rules in a
classical production system P. Then a sequence
I_0 --r_1--> I_1 --r_2--> ... --r_n--> I_n
is a partial computation C of P if for each i >= 1, r_i is applicable in I_{i-1}, and
I_i = (I_{i-1} ∪ {p}) \ {¬p}   if action(r_i) = assert(p)
I_i = (I_{i-1} ∪ {¬p}) \ {p}   if action(r_i) = retract(p)
Notice that a computation is also a partial computation. Moreover, any partial
computation C starting from an incomplete state I can be viewed as a collection
of all computations C' starting from a state S such that S |= I and using the
same sequence of rules as C.
Denition 5.9 Let P be a classical PS, I be an incomplete state and l be a
ground literal. Then we say that l is achievable from I in P if there is a partial
computation C starting from I such that l 2 final(C). 2
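Achievability in the sense of Definition 5.9 can be decided, for finite ground systems, by a straightforward search over partial computations. The sketch below assumes assert/retract rules only and encodes incomplete states as sets of signed atoms ("p" and "-p"); all names are our own.

```python
from typing import FrozenSet, List, Tuple

IState = FrozenSet[str]   # incomplete state: consistent set of signed atoms, e.g. {"p", "-q"}

def apply_rule(i_state: IState, kind: str, atom: str) -> IState:
    """assert(p) adds p and drops -p; retract(p) adds -p and drops p."""
    if kind == "assert":
        return (i_state - {f"-{atom}"}) | {atom}
    return (i_state - {atom}) | {f"-{atom}"}

def achievable(literal: str, start: IState,
               rules: List[Tuple[FrozenSet[str], str, str]]) -> bool:
    """rules are (conditions, kind, atom) triples with conditions as signed atoms.
    Search over partial computations; it terminates because the set of reachable
    incomplete states over finitely many atoms is finite."""
    seen, frontier = {start}, [start]
    while frontier:
        state = frontier.pop()
        if literal in state:                      # l is in the final state of some partial computation
            return True
        for cond, kind, atom in rules:
            if cond <= state:                     # rule applicable in the incomplete state
                nxt = apply_rule(state, kind, atom)
                if nxt not in seen:
                    seen.add(nxt)
                    frontier.append(nxt)
    return False
```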
Let us first consider the problem of compiling away negation as failure for a
stratified PS with only two strata, P_0 and P_1.
Let r be a rule in P_1 of the following form
if l_1, ..., l_h, not l_0, ... then a
where not l_0 is the naf-literal to be compiled away. Let I_1, ..., I_n be the collection
of all incomplete states such that, for each i = 1, ..., n, the following conditions
are satisfied:
- cl-cond(r) ∪ I_i is consistent,
- l_0 is achievable from I_i in P_0.
For each i = 1, ..., n, let J_i = I_i \ cl-cond(r). There are two cases here:
- some J_i is empty. This means that there exists a (classical) computation
in P_0 starting from a state satisfying cl-cond(r) and leading to a state in which l_0 holds.
Hence, the rule r can never be applied in any computation in S(P) and
so it can be simply dropped;
- no J_i is empty.
Let R_{l_0,r} be the empty set if one of the J_i is empty. Otherwise, R_{l_0,r} is the
following set:
R_{l_0,r} = {R | R is a set of literals such that R implies ¬J_i for each i = 1, ..., n, and R is minimal}.
Notice that each non-empty J_i is viewed as a conjunction of literals. Also, by
minimality, we mean that there exists no proper subset of R which implies ¬J_i
for each i = 1, ..., n.
Each set R ∈ R_{l_0,r} is a set of conditions that, if satisfied by a state, make l_0 not
achievable from this state, and the set R_{l_0,r} covers all such conditions. This is
formalized in the following lemma.
Lemma 5.10
Let S be a state such that S |= cl-cond(r). Then S |= R for some R ∈ R_{l_0,r}
if and only if there exists no computation C in C(P_0) such that initial(C) = S
and final(C) |= l_0.
Proof. From the denition of R l 0 ;r it is clear that:
if and only if
for all I such that l 0 is achievable from I and I is consistent with cl-cond(r),
cl-cond (r)).
It is clear now that if the above condition is satised, then there is no computation
C in C(P 0 ) such that
Suppose now that there is no computation C in C(P 0 ) such that
and final(C) Assume now that there exists an incomplete state I such
that l 0 is achievable from I, I is consistent with cl-cond (r) and S 6j= :(I n
cl-cond (r)). From the assumption that S j= cl-cond(r), it follows that S
and hence there exists a computation C in C(P 0 ) such that
For each R ∈ R_{l_0,r}, we define a new production rule r_R as follows:
cl-cond(r_R) = cl-cond(r) ∪ R,
hyp(r_R) = hyp(r) \ {not l_0},
action(r_R) = action(r).
We can now define P_{l_0,r} = {r_R | R ∈ R_{l_0,r}}.
Example 5.11
The machine wash example 3.7 is translated into the following stratified PS:
P_0 = {r_2, r_3, r_4} and P_1 = {r_1}. Then, the set of all
incomplete states from which MP is achievable is characterized by the base
{MP}, {¬MP, SO}, {¬MP, NI}.
Hence, P_{MP,r_1} consists only of the following rule:
if ¬MP, ¬SO, ¬NI then hw.
The resulting production system is clearly still stratified, and it is equivalent
to the original one, as shown by the following results.
Lemma 5.12
only if there exists r ;r such that
Proof. Let us consider the following assertion:
(*) there is no computation C 2 S(P 0 ) such that
(*) holds
if and only if
for each consistent set I of literals such that l 0 is achievable
from I in P 0 , S
if and only if (from Lemma 5.10)
there exists R 2 R l 0 ;r such that S
By denition of the operator S:
if and only if
there is no computation C 2 S(P 0 ) such that
if and only if
(*) and 8l there is no computation C 2 S(P 0 )
such that
if and only if
there exists R 2 R l 0 ;r such that S
there is no computation
such that
if and only if
S r R
Theorem 5.13
r h
only if S 0
r
Proof. Obvious, by taking r 0
as in the above
The procedure for compiling away negation as failure from a two-strata production
system P = P_0 ∪ P_1 is then the following:
Step 1. Select r ∈ P_1 and not l ∈ hyp(r). If no such r exists then stop.
Step 2. If P_{l,r} is empty
then P_1 := P_1 \ {r}
else P_1 := (P_1 \ {r}) ∪ P_{l,r}.
Step 3. Goto Step 1.
The generalization of the above procedure to the general case of stratified
production systems with more than two strata is obvious.
We first compile away naf from P_1 and obtain a stratified production system
with one stratum fewer. By continuing this process, we eventually obtain a classical
production system.
Coming back to Example 5.11, the result of applying the above transformation
is the classical PS consisting of the rules in the original P_0
together with the new rule
if ¬MP, ¬SO, ¬NI then hw.
It should be obvious that the new production system is equivalent to the original
one.
In the above procedure, we assume that P_{l,r} is given. We define now a method
to compute it.
Definition 5.14 A set {I_1, ..., I_n} of incomplete states is called a base for l
with respect to P_0 if the following conditions hold:
- l is achievable from each I_i in P_0;
- for any incomplete state I such that l is achievable from I in P_0, I ⊇ I_i
for some i = 1, ..., n. 2
For each r ∈ P_1 and l ∈ hyp(r), it is easy to see that the following lemma holds.
Lemma 5.15
Let I_1, ..., I_n be a base for l with respect to P_0, and J_i = I_i \ cl-cond(r)
for each i = 1, ..., n. Suppose that no J_i is empty. Then
R_{l,r} = {R | R implies ¬J_i for each i = 1, ..., n, and R is minimal}. 2
Constructing P l;r consists of computing a base for l with respect to P 0 and
computing R l;r . The set R l;r can be computed from a base by applying standard
methods in propositional logic for computing disjunctive normal forms. In
the following we give an algorithm for computing a base for l. We do so by
constructing a tree satisfying the following properties:
- Each node N of the tree is labeled with an incomplete state label(N).
- The root of the tree is labeled with the set {l}.
- Each link is labeled by a rule r' ∈ P_0.
- For any node M, a node N is a child of M with a link labeled by a rule
r' iff M has no ancestor with the same label, label(N) is consistent,
and
  - if action(r') = assert(p), then p ∈ label(M) and
    label(N) = (label(M) \ {p}) ∪ cl-cond(r') ∪ {¬p};
  - if action(r') = retract(p), then ¬p ∈ label(M) and
    label(N) = (label(M) \ {¬p}) ∪ cl-cond(r') ∪ {p}.
It is clear from the above construction that if a node N has some ancestor with
the same label then N is a leaf node. Hence, since P_0 is finite, the tree is also
finite. Further, it should be clear from the construction of the tree that for
each P_0 and l there is exactly one tree satisfying the above conditions. Let
us refer to this tree as T_{P_0,l}.
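Under the reconstruction of the tree conditions given above, T_{P_0,l} can be built by simple backward chaining over assert/retract rules. The following sketch reuses the signed-atom convention from the achievability sketch; the names and the exact child-label formula are our own reading of the construction, and by Theorem 5.16 the collected labels would form a base for l.

```python
from typing import FrozenSet, List, Set, Tuple

PRule = Tuple[FrozenSet[str], str, str]   # (cl-cond as signed atoms, "assert"/"retract", atom)

def neg(lit: str) -> str:
    return lit[1:] if lit.startswith("-") else f"-{lit}"

def consistent(label: FrozenSet[str]) -> bool:
    return not any(neg(l) in label for l in label)

def base_labels(l: str, p0: List[PRule]) -> Set[FrozenSet[str]]:
    """Collect the node labels of T_{P0,l}; a label repeating an ancestor stops expansion."""
    labels: Set[FrozenSet[str]] = set()

    def expand(label: FrozenSet[str], ancestors: Set[FrozenSet[str]]) -> None:
        labels.add(label)
        if label in ancestors:                      # node repeats an ancestor: leaf
            return
        for cond, kind, atom in p0:
            target = atom if kind == "assert" else neg(atom)
            if target in label:
                child = frozenset((label - {target}) | cond | {neg(target)})
                if consistent(child):
                    expand(child, ancestors | {label})

    expand(frozenset({l}), set())
    return labels

# For Example 5.11, base_labels("MP", p0) should yield {{MP}, {-MP, SO}, {-MP, NI}}.
```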
As an example, consider the production system of Example 5.11. Then T_{P_0,MP}
is the following tree: the root is labeled {MP} and it has two children, labeled
{¬MP, SO} and {¬MP, NI}.
Theorem 5.16
{label(N) | N is a node in T_{P_0,l}} is a base for l with respect to P_0. 2
Proof. See appendix.
5.3 Complexity of the transformation
In this section we show that compiling away NAF is, in general, not polynomial
in the number of atoms of the production system. The method we have given
to compile away a naf literal not l occurring in a rule r basically consists of
computing a base for l and then computing the set R l;r . The set R l;r can be
computed from a base by applying standard methods in propositional logic for
computing disjunctive normal forms. We show next that computing the set
R l;r is not polynomial by taking into account a restricted class of stratied
production systems.
First of all, we will consider in the sequel classical production systems where
each rule has the following structure:
a_1, ..., a_n → assert(b)
where b, a_1, ..., a_n are all positive atoms. In the sequel a rule of this form is
referred to as a rule for b. Given two atoms a, b we say that a is a successor of
b, denoted by a < b, if a occurs in the body of a rule for b. A production system
is acyclic if the relation < is well founded, i.e. each decreasing sequence of the
form a_1 > a_2 > ... is finite.
Given an acyclic production system, we define the rank of an atom a, denoted
by ||a||, as follows: ||a|| = 0 if a has no successors, and
||a|| = 1 + max{||b|| : b < a} otherwise.
Let now P = P_0 ∪ P_1 be a stratified production system where:
(c1) P_1 contains exactly one rule
r: ¬p, not l → assert(p);
P_0 is a classical production system satisfying the following conditions:
(c2) each rule in P_0 has exactly two atoms in its body;
(c3) each atom either appears in the head of exactly two rules or it appears
in the head of no rule;
(c4) a positive atom either occurs in the body of exactly one rule or it
does not occur in the body of any rule;
(c5) for each atom a, either a has no successors or all successors of a have the same rank;
(c6) for each atom a there exists a decreasing sequence from l to a,
where l is the atom occurring in the naf literal of rule r.
Notice that condition (c5) implies that for any atoms a, b, c, if both b < a and
c < a then ||b|| = ||c||. Notice also that each atom has either no successors or
exactly four successors due to conditions (c2) and (c3) above.
We want to show that the number of rules obtained by compiling away the
naf literal from rule r is not polynomial in the number of atoms of P 0 . This
amounts at proving that the cardinality of the set R l;r is not polynomial in
the number of atoms of P 0 . Recall that the set R l;r can be computed from a
base for l by applying standard methods in propositional logic for computing
disjunctive normal forms.
Let a be an atom and σ be a partial state satisfying the following conditions:
(i) a is not achievable from any state satisfying σ;
(ii) σ is minimal, i.e. no proper subset of it satisfies (i).
Let also R_a be the set of partial states satisfying conditions (i) and (ii) above.
Then obviously R_{l,r} = R_l. In the sequel we show that the cardinality of R_l is
not polynomial in the number of atoms of P_0.
First of all, let us generalize the above denition of R l to a conjunction of
atoms. Given a conjunction a is the set of all partial states
satisfying the following conditions:
is not achievable from any state satisfying
(iv) is minimal, i.e. no proper subset of it satises (iii)
Consider now a conjunction of distinct atoms a such that ka 1
let be a partial state in R a1 ;:::;a n . By the syntactic restrictions on
is clear that 2 R a i for some a i . Moreover it is clear that R a i and R a j ,
with i 6= j, are disjoint sets.
For be the cardinality of R a i . By the previous observations,
it is clear that the cardinality of R
Consider now an atom b such that kbk > 0. Then P 0 contains exactly two rules
for b,
Consider a partial state 2 R b . By the syntactic restrictions on P 0 and the
minimality of , it is clear that can be split into two disjoint sets 0 and 00
such that 0 2 R a 1 ;a 2
and 00 2 R a 3 ;a 4
Recall that, if both a < b and c < b, then now P a i ,
4, be the set of rules of P 0 which are rules for a i or rules for
any atom c such that c < ::::: < a i . By the structure of P 0 and the fact that
it is clear that P a i and P a j , i 6= j are
equal up to the renaming of atoms. Hence it is also clear that the sets R a i , R a j ,
are equal up to the renaming of atoms. Let then m be the cardinality of
R a i , 4. By the previous arguments, it is obvious that the cardinality
of both R a 1 ;a 2
and R a 3 ;a 4
are 2 m. Hence, the cardinality of R
Let now m = f(||b|| − 1). We have just seen that the cardinality of R_b can be given
inductively as
f(k) = 4 (f(k−1))^2, with f(0) = 1,
since any atom a such that ||a|| = 0 does not occur in the head of any rule
and hence {¬a} is the only set in R_a.
It is easy to show by induction that the relation f(k) we have just defined has
the unique solution f(k) = 4^(2^k − 1).
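Assuming the recurrence as reconstructed above, its closed form can be checked numerically:

```python
def f(k: int) -> int:
    """Reconstructed recurrence f(k) = 4 * f(k-1)**2 with f(0) = 1."""
    return 1 if k == 0 else 4 * f(k - 1) ** 2

# closed form 4^(2^k - 1), verified on the first few ranks
assert all(f(k) == 4 ** (2 ** k - 1) for k in range(10))
```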
Let us now go back to the rule of P_1
r: ¬p, not l → assert(p).
Let ||l|| = k. It is not difficult to see that, due to the syntactic restrictions on
P_0, the number n of atoms occurring in P_0 is exactly
n = 1 + 4 + 4^2 + ... + 4^k = (4^(k+1) − 1) / 3,
from which we can easily calculate
4^k = (3n + 1) / 4, i.e. 2^k = sqrt((3n + 1) / 4).
Hence we can define the cardinality of R_l as a function g(n) as follows:
g(n) = f(k) = 4^(2^k − 1) = 4^(sqrt((3n + 1) / 4) − 1).
Let now x = 2^k = sqrt((3n + 1) / 4). Then n = (4x^2 − 1) / 3 and
g(n) = 4^(x − 1) = 4^x / 4. It is easy to see that, for any constants
c, d and sufficiently big x, 4^x > d x^(2c). Hence, for any constant c and sufficiently
big n, g(n) > n^c.
We can conclude that g(n) is not polynomial in n, the number of atoms of P_0.
Hence, the worst case complexity of translating away naf is not polynomial in
the number of atoms.
6 Discussion and Conclusions
Production systems with negation as failure to find a course of actions are
a natural extension of classical production systems, which increases their expressiveness
in the sense that they allow a natural and simple representation
(specification) of many real life problems. This extension can be given a simple
semantics based on an argumentation theoretic framework.
Negation as failure to find a course of actions is tightly related to negation as
failure to prove in logic programming. Indeed, any normal logic program P can
be viewed as a GPS G_P by transforming each rule
a <- a_1, ..., a_m, not a_{m+1}, ..., not a_k
into a production rule
if ¬a, a_1, ..., a_m, not a_{m+1}, ..., not a_k then assert(a).
It is not difficult to show that the semantics of P and G_P coincide, in the sense
that each (grounded, preferred, stable) set of computations of G_P corresponds
to a (grounded, preferred, stable) model of P. On the other side, in [8] it has
been shown that argumentation can be represented in logic programming with
negation as failure. Therefore we can say that the mechanisms of negation as
failure to find a course of actions in production systems and negation as failure
to prove in logic programming are different sides of the same coin.
There are still several issues which deserve a deeper study and understanding.
First, we have seen that our semantics reflects the inherent nondeterminism of
production systems. In fact, in our semantics different complete computations
starting from the same initial state can yield different final states, even for
stratified GPSs. This contrasts with many efforts in the literature aiming at
finding a method to select one of the complete computations as the expected
semantics [13, 24]. Even though we believe that in many cases these efforts
contrast with the inherent nondeterministic nature of the problems represented
by the production rules, there are situations in which selecting only one out of
(possibly) many complete computations may not harm at all. In these cases,
it is worth studying computational strategies which basically provide us with
a deterministic operational semantics for production systems. Still, the declarative
semantics serves as a basis for reasoning about the correctness of these
methods.
Secondly, we intend to study computational mechanisms and proof procedures
for general production systems with negation as failure. We have seen that for
stratified production systems there is a way to compute negation as failure in
a bottom-up fashion. This can be seen as a dynamic or run-time method to
compute negation as failure. It is worth addressing the point of whether the
class of stratified PS can be extended to more general classes for which similar
bottom-up methods exist. On the other hand, in some cases negation as failure
can be compiled down to classical negation, as we have seen in Section 5.2.
This can be seen as a compile-time or static method to compute negation as
failure. Finding general techniques to achieve this is another interesting issue.
As we have already mentioned, in this case negation as failure still serves as a
specification for classical (i.e. without naf) production systems.
Finally, we are investigating the application of our approach in the active
databases area. Active databases [6] are an important research topic in the
database community, due to the fact that they find many applications in real
world problems. Many commercial database systems have been extended to
allow the user to express active rules. Still, active databases are faced with
many open issues, such as the lack of a well understood declarative semantics
(see, e.g. [11, 29]). Such a semantics would provide a common basis for understanding
the operational semantics defined by the implementation of active
rules in database systems, as well as for comparing different implementations.
The typical active rule in active databases is an event-condition-action rule of
the form on Event: if Condition then Action. We are currently extending
our argumentation based approach to these active rules, and we are addressing
also in this case the use of negation as failure in the Condition part of the rules.
--R
Static analysis techniques for predicting the behavior of active database rules.
Towards a theory of declarative knowledge.
An abstract
Preferred answer sets for extended logic programs.
Active Database Systems: Triggers and Rules For Advanced Database Processing.
Negation as hypotheses: An abductive foundation for logic programming.
The acceptability of arguments and its fundamental role in logic programming
Production systems need negation as failure.
reasoning with speci
Active database systems.
OPS5 user's manual.
logic for action rule-based systems
The stable model semantics for logic programming
An overview of production rules in database systems.
Rule based systems.
Computing argumentation in logic programming.
Semantical considerations on nonmonotonic logics.
Defeasible reasoning.
A system for defeasible argumentation
Logical systems for defeasible argumentation
A semantics for a class of strati
A logic for default reasoning.
A mathematical treatment of defeasible reasoning and its implementation.
Unfounded sets and well-founded semantics for general logic programs
The feasibility of defeat in defeasible reasoning.
Active database rules with transaction-conscious stable-model semantics
--TR | rule-based processing;knowledge-based systems;rule-based systems;expert systems;knowledge representation |
628215 | A Comparative Study of Various Nested Normal Forms. | As object-relational databases (ORDBs) become popular in the industry, it is important for database designers to produce database schemes with good properties in these new kinds of databases. One distinguishing feature of an ORDB is that its tables may not be in first normal form. Hence, ORDBs may contain nested relations along with other collection types. To help the design process of an ORDB, several normal forms for nested relations have recently been defined, and some of them are called nested normal forms. In this paper, we investigate four nested normal forms, which are NNF 20, NNF 21, NNF 23, and NNF 25, with respect to generalizing 4NF and BCNF, reducing redundant data values, and design flexibility. Another major contribution of this paper is that we provide an improved algorithm that generates nested relation schemes in NNF 2 from an $ database scheme, which is the most general type of acyclic database schemes. After presenting the algorithm for NNF 20, the algorithms of all of the four nested normal forms and the nested database schemes that they generate are compared. We discovered that when the given set of MVDs is not conflict-free, NNF 20 is inferior to the other three nested normal forms in reducing redundant data values. However, in all of the other cases considered in this paper, NNF 20 is at least as good as all of the other three nested normal forms. | Introduction
Object-Relational Databases (ORDBs) could be an alternative for the next-generation databases [SBM98].
This hybrid approach is sound because it is based on mature relational technology. By adding object-oriented
features to a relational database, an ORDB is obtained. Since this approach seems like a natural extension of
a relational database, numerous relational commercial products are already supporting many object-oriented
features [Kim97], [Urm97].
One distinguished feature of an ORDB is that a relation can be nested in another relation; thus, a nested
relation. Since ORDBs support nested relations, it is imperative for database designers to be able to design
nested databases with good properties. In the past, numerous normal forms have been defined for flat
relations so that if a flat relation scheme satisfies a certain normal form, then the relations on that scheme
will enjoy the properties of the normal form. For a long time, database designers have been using these
normal forms as guides for flat relational database design. In the same spirit, numerous normal forms have
been defined recently for nested relations as well [MNE96], [OY87a], [OY89], [RK87], [RKS88]. Among
all of these cited normal forms, Partition Normal Form (PNF), which is defined in [RKS88], is the most
fundamental. In essence, PNF basically states that in a nested relation, there can never be distinct tuples
that agree on the atomic attributes of either the nested relation itself or of any nested relation embedded
within it [RKS88]. Since this is a basic property of nested relations, the normal forms defined in [MNE96],
[OY87a], [OY89], [RK87] all imply PNF.
As guides for database design, normal forms should be used with cautions. Database designers should
understand the strengths and the weaknesses of a normal form in order to use it intelligently. As an example,
it is well known that 4NF is able to remove redundancy caused by FDs; however, it is not dependency
preserving. On the other hand, 3NF is dependency preserving but it is not able to remove redundancy
caused by FDs in all cases. Knowing information like this is in fact vital for a successful database design.
Hence, the main purpose of this paper is to compare the normal forms defined in [MNE96], [OY87a], [OY89],
[RK87], and to find out their strengths and weaknesses. In particular, we investigate them with respect to
generalizing 4NF and BCNF, reducing redundant data values, and design flexibility. In addition, we also
examine their algorithms and the nested database schemes that they generate.
Since the algorithms of the normal forms will be investigated, we take this opportunity to provide a more
general algorithm for the normal form defined in [MNE96] than the ones we gave in [ME96], [ME98]. It
turns out to be another contribution of this paper.
Here, we would like to recognize some other normal forms defined for nested relations, such as the ones in
[LY94], [TSS97]. However, they are not included in the investigation because the normal form in [TSS97] is
mainly dealing with semantic issues as opposed to removing redundancy; and the one in [LY94] is based on
extended MVDs, which makes it very hard to be compared with the others.
In the following, to avoid being too wordy, we abbreviate the normal form defined in [MNE96], which is
called nested normal form, as NNF [MNE96] . Similarly, the normal form defined in [OY87a] is abbreviated as
NNF [OY87a] , the one defined in [OY89] as NNF [OY89] , and the one defined in [RK87] as NNF [RK87] . Notice
that the comparisons among NNF [OY87a] , NNF [OY89] , and NNF [RK87] are already done, as presented in
[OY89], [RK87], and the results are not reproduced here.
The paper is organized as follows. In Section II, we present some basic definitions and concepts. The
normal forms are formally compared in Section III and we conclude in Section IV.
II. Basic Concepts & Terminology
We first present some basic definitions. After which, the definition of each of the normal forms is presented.
A. Nested Relation Schemes & Nested Relations
The following definitions of nested relation schemes, nested relations, and scheme trees are adapted from
[MNE96]. However, any equivalent definitions of these concepts, such as those in [OY87a], [OY89], [RK87],
can be used as well.
nested relation allows each tuple component to be either atomic or another nested relation, which may
itself be nested several levels deep.
1: Let U be a set of attributes. A nested relation scheme is recursively defined as follows:
1. If X is a nonempty subset of U , then X is a nested relation scheme over the set of attributes X .
2. If X, X_1, ..., X_n are pairwise disjoint, nonempty subsets of U, and R_1, ..., R_n are nested relation
schemes over X_1, ..., X_n, respectively, then X (R_1)* ... (R_n)* is a nested relation scheme over X X_1 ... X_n. 2
Fig. 1. Nested relation over the scheme Dept Chair (Prof (Hobby)* (Matriculation (Student (Interest)*)*)*)*; it contains two nested tuples, one for the CS department (chaired by Turing) and one for the Math department (chaired by Polya).
Definition 2: Let R be a nested relation scheme over a nonempty set of attributes Z. Let the domain of
an attribute A ∈ Z be denoted by dom(A). A nested relation over R is recursively defined as follows:
1. If R has the form X, where X is a set of attributes {A_1, ..., A_n}, n ≥ 1, then r is a nested relation over
R if r is a (possibly empty) set of functions {t_1, ..., t_m} where each function t_i, 1 ≤ i ≤ m, maps A_j to an
element in dom(A_j), 1 ≤ j ≤ n.
2. If R has the form X (R_1)* ... (R_m)*, where X is a set of attributes {A_1, ..., A_n}, n ≥ 1, then
r is a nested relation over R if
(a) r is a (possibly empty) set of functions {t_1, ..., t_p} where each function t_i, 1 ≤ i ≤ p, maps A_j to an
element in dom(A_j), 1 ≤ j ≤ n, and R_k to a nested relation over R_k, 1 ≤ k ≤ m.
Each function of a nested relation r over nested relation scheme R is a nested tuple of r. 2
Several observations can be made about Definitions 1 and 2. First, flat relation schemes are also nested
relation schemes. Second, any two distinct embedded nested relation schemes do not have any attribute in
common. For example, a nested relation scheme such as A (C)* (B (C)*)* is not allowed. Third, in this
paper, every nested relation is in PNF [RKS88].
Example 1: Figure 1 shows a nested relation. Its scheme is Dept Chair (Prof (Hobby)* (Matriculation
(Student (Interest)*)*)*)* and it contains two nested tuples. Each embedded nested relation also contains
nested tuples of its own. For example, <Young, {Chess, Soccer}> and <Barker, {Skiing}> are nested tuples
under the embedded nested relation scheme Student (Interest)*. Notice that, as required, PNF is satisfied.
Thus, the values for the atomic attributes, Dept Chair, differ, and in each embedded nested relation, the
atomic values differ. 2
Definition 3: Let R be a nested relation scheme. Let r be a nested relation on R. The total unnesting of
r is recursively defined as follows:
1. If R has the form X, where X is a set of attributes, then r is the total unnesting of r.
2. If R has the form X (R_1)* ... (R_n)*, where X_i is the set of attributes in R_i, 1 ≤ i ≤ n, then the total
unnesting of r is {t | there exists a nested tuple u ∈ r such that t(X) = u(X) and t(X_i) is a tuple in the
total unnesting of u(R_i), 1 ≤ i ≤ n}. 2
Example 2: Figure 2 shows the total unnesting of the nested relation in Figure 1. 2
Dept Chair Prof Hobby Matriculation Student Interest
CS Turing Jane Skiing Ph.D. Young Chess
CS Turing Jane Skiing Ph.D. Young Soccer
CS Turing Jane Skiing Ph.D. Barker Skiing
CS Turing Jane Skiing M.S. Adams Skiing
CS Turing Pat Hiking Ph.D. Lee Travel
Math Polya Steve Dance M.S. Carter Travel
Math Polya Steve Dance M.S. Carter Skiing
Math Polya Steve Hiking M.S. Carter Travel
Math Polya Steve Hiking M.S. Carter Skiing
Fig. 2. Total unnesting of nested relation in Fig. 1.
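A total unnesting in the sense of Definition 3 can be sketched for dictionary-encoded nested tuples as follows; this is a small illustration with self-chosen names, not code from the paper.

```python
from itertools import product
from typing import Dict, List, Union

NestedTuple = Dict[str, Union[str, List["NestedTuple"]]]   # atomic value or embedded relation

def unnest(relation: List[NestedTuple]) -> List[Dict[str, str]]:
    """Total unnesting: cross-product of the unnestings of all embedded relations."""
    flat: List[Dict[str, str]] = []
    for t in relation:
        atomic = {a: v for a, v in t.items() if isinstance(v, str)}
        nested = [unnest(v) for v in t.values() if isinstance(v, list)]
        for combo in product(*nested):             # one flat sub-tuple per embedded relation
            row = dict(atomic)
            for sub in combo:
                row.update(sub)
            flat.append(row)
    return flat

# E.g. unnest([{"Student": "Young", "Interests": [{"Interest": "Chess"}, {"Interest": "Soccer"}]}])
# yields the two flat tuples shown for Young in Fig. 2.
```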
We can graphically represent a nested relation scheme by a tree, called a scheme tree. A scheme tree
captures the logical structure of a nested relation scheme and explicitly represents a set of MVDs.
Definition 4: A scheme tree T corresponding to a nested relation scheme R is recursively defined as follows:
1. If R has the form X , then T is a single node scheme tree whose root node is the set of attributes X .
2. If R has the form X (R 1 ) . (Rn ) , then the root node of T is the set of attributes X , and a child of
the root of T is the root of the scheme tree T i , where T i is the corresponding scheme tree for the nested
relation scheme R i
The one-to-one correspondence between a scheme tree and a nested relation scheme along with the definition
of a nested relation scheme impose several properties on a scheme tree. Let T be a scheme tree. We
denote the set of attributes in T by Aset(T ). Observe that the atomic attributes of a nested relation scheme,
at any level of nesting, constitute a node in a scheme tree. Observe further that since Definition 1 requires
nonempty sets of attributes, every node in T consists of a nonempty set of attributes. Furthermore, since
the sets of attributes corresponding to nodes in T are pairwise disjoint and include all the attributes of T ,
the nodes in T are pairwise disjoint, and their union is Aset(T ).
Let N be a node in T. Notationally, Ancestor(N) denotes the union of attributes in all ancestors of N,
including N. Similarly, Descendent(N) denotes the union of attributes in all descendants of N, including
N.
In a scheme tree T, each edge (V, W), where V is the parent of W, denotes an MVD Ancestor(V) →→
Descendent(W). Notationally, we use MVD(T) to denote the set of all the MVDs represented by the edges in
T. By construction, each MVD in MVD(T) is satisfied in the total unnesting of any nested relation for T.
Since FDs are also of interest, we use FD(T) to denote any set of FDs equivalent to all FDs X → Y implied
by a given set of FDs and MVDs over a set of attributes U such that Aset(T) ⊆ U and XY ⊆ Aset(T).
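The MVDs represented by a scheme tree can be enumerated directly from its parent/child structure. Here is a brief Python sketch; the node class and function names are our own.

```python
from dataclasses import dataclass, field
from typing import FrozenSet, List, Set, Tuple

@dataclass
class SchemeNode:
    attrs: Set[str]                       # the (nonempty, pairwise disjoint) node attributes
    children: List["SchemeNode"] = field(default_factory=list)

def mvds(root: SchemeNode) -> Set[Tuple[FrozenSet[str], FrozenSet[str]]]:
    """MVD(T): one MVD Ancestor(V) ->> Descendent(W) per edge (V, W)."""
    result: Set[Tuple[FrozenSet[str], FrozenSet[str]]] = set()

    def descendent(n: SchemeNode) -> Set[str]:
        return set(n.attrs).union(*[descendent(c) for c in n.children])

    def walk(v: SchemeNode, ancestors: Set[str]) -> None:
        anc_v = ancestors | v.attrs
        for w in v.children:
            result.add((frozenset(anc_v), frozenset(descendent(w))))
            walk(w, anc_v)

    walk(root, set())
    return result
```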
Example 3: Figure 3 shows the scheme tree T for the scheme of the nested relation in Figure 1. Figure 3
also gives the set of attributes in Aset(T ) and the set of MVDs in MVD(T ). Observe that each of the MVDs
in MVD(T ) is satisfied in the unnested relation in Figure 2. 2
Fig. 3. Scheme tree T for the nested relation scheme in Fig. 1: the root node is Dept Chair with child Prof; Prof has children Hobby and Matriculation; Matriculation has child Student; Student has child Interest.
Aset(T) = {Dept, Chair, Prof, Hobby, Matriculation, Student, Interest}
MVD(T) = {Dept Chair →→ Prof Hobby Matriculation Student Interest,
Dept Chair Prof →→ Hobby,
Dept Chair Prof →→ Matriculation Student Interest,
Dept Chair Prof Matriculation →→ Student Interest,
Dept Chair Prof Matriculation Student →→ Interest}
Fig. 4. Some given constraints over a set of attributes: a set of attributes U (including, among others, Student, Interest, and Hobby-Equipment), a set F of FDs over U, and the set M of MVDs over U with M = {Student →→ Interest, Prof →→ Hobby Hobby-Equipment, Hobby →→ Hobby-Equipment}.
Given a set D of MVDs and FDs over a set of attributes U, and a scheme tree T such that Aset(T) ⊆ U,
Aset(T) may be a proper subset of U. However, D may imply MVDs and FDs that hold for T. By Theorem 5
in [Fag77], an MVD X →→ Y holds for T with respect to D if X ⊆ Aset(T) and there exists a set of attributes
Z ⊆ U such that D implies X →→ Z on U and Y = Z ∩ Aset(T). An FD X → Y holds for T with respect
to D if XY ⊆ Aset(T) and D implies X → Y on U.
Example 4: Figure 4 shows a given set of attributes U, a given set of FDs F over U, and a given set of
MVDs M over U. All the FDs in F hold for the scheme tree T in Figure 3. Not all the MVDs in M hold for
T. In particular, neither Hobby →→ Hobby-Equipment nor Prof →→ Hobby Hobby-Equipment holds
for T. Since Hobby Hobby-Equipment ∩ Aset(T) = {Hobby}, Prof →→ Hobby does hold for T. Although Prof
→→ Hobby does hold for T, observe that it is not implied by M ∪ F on U. 2
B. Conflict-Free Sets of MVDs & Acyclic Database Schemes
Some researchers have claimed that most real-world sets of MVDs are conflict-free and that acyclic database
schemes are sufficiently general to encompass most real-world situations [BFMY83], [Sci81]. In fact, conflict-free
sets of MVDs and acyclic database schemes have numerous desirable properties [BFMY83]. Therefore,
we would like to examine the normal forms with respect to conflict-free sets of MVDs and acyclic database
schemes. Their definitions are now presented.
An MVD X →→ Y (with X and Y disjoint) splits two attributes A and B if one of them is in Y and the
other is in U − XY, where U is the set of all the attributes. A set M of MVDs splits A and B if some
MVD in M splits them. An MVD (or a set of MVDs) splits a set X, where X ⊆ U, if it splits two distinct
attributes in X. Let D be a set of MVDs and FDs over U. LHS(D) denotes the set of left-hand sides of the
members of D. As usual, DEP(X) denotes the dependency basis of X, which is a partition of U − X.
Definition 5: A set M of MVDs is conflict-free if
1. M does not split any element in LHS(M), and
2. For every X ∈ LHS(M) and for every Y ∈ LHS(M), DEP(X) ∩ DEP(Y) ⊆ DEP(X ∩ Y). □
A conflict-free set of MVDs allows a unique 4NF decomposition [BFMY83]. We shall use this fact in the
proof of Lemma 6.
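As a small aid to reading Definition 5, the sketch below (our own illustration, with MVDs encoded as pairs of attribute sets; it is not an implementation from the cited papers) checks the splitting conditions and Condition 1 of the definition. Condition 2 is omitted here because it needs a dependency-basis routine, that is, a full MVD inference procedure.

```python
from typing import FrozenSet, Set, Tuple

MVD = Tuple[FrozenSet[str], FrozenSet[str]]   # (X, Y) stands for X ->> Y

def splits(mvd: MVD, a: str, b: str, universe: FrozenSet[str]) -> bool:
    """X ->> Y splits attributes a and b if one lies in Y and the other in U - XY."""
    x, y = mvd
    other = universe - x - y
    return (a in y and b in other) or (b in y and a in other)

def splits_set(mvds: Set[MVD], attrs: FrozenSet[str], universe: FrozenSet[str]) -> bool:
    """A set of MVDs splits a set of attributes if some MVD splits two of them."""
    items = list(attrs)
    for i in range(len(items)):
        for j in range(i + 1, len(items)):
            if any(splits(m, items[i], items[j], universe) for m in mvds):
                return True
    return False

def violates_condition_1(mvds: Set[MVD], universe: FrozenSet[str]) -> bool:
    """Condition 1 of Definition 5: M must not split any element of LHS(M)."""
    return any(splits_set(mvds, x, universe) for (x, _) in mvds)
```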
A database scheme R = {R1, ..., Rn} over a set of attributes U is a set of relation schemes where each
relation scheme Ri is a subset of U and R1 ∪ ... ∪ Rn = U. Notice that every database scheme R corresponds
to a unique join dependency, namely ⋈R [Mai83]. A database scheme R is acyclic if and only if the join
dependency ⋈R is equivalent to a conflict-free set of MVDs [BFMY83]. Also, R is acyclic if and only if R
has a join tree [BFMY83].
Definition 6: Let R = {R1, ..., Rn} be a database scheme. A join tree for R is a tree where each
Ri is a node, and
1. Each edge (Ri, Rj) is labeled by the set of attributes Ri ∩ Rj, and
2. For every pair Ri and Rj (Ri ≠ Rj) and for every A in Ri ∩ Rj, each edge along the unique path between
Ri and Rj includes label A (possibly among others). □
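Condition 2 of Definition 6 can be verified mechanically. The following sketch is our own illustration (relation schemes are given as attribute sets and edges as index pairs); it checks the condition for a candidate tree and assumes the edge set really does form a tree.

```python
from itertools import combinations
from typing import Dict, FrozenSet, List, Tuple

def is_join_tree(nodes: List[FrozenSet[str]],
                 edges: List[Tuple[int, int]]) -> bool:
    """Check Condition 2 of Definition 6: for every pair R_i, R_j and every
    attribute A in R_i ∩ R_j, every edge on the unique path between R_i and
    R_j carries A in its label R_k ∩ R_l."""
    adj: Dict[int, List[int]] = {i: [] for i in range(len(nodes))}
    for i, j in edges:
        adj[i].append(j)
        adj[j].append(i)

    def path(src: int, dst: int) -> List[int]:
        # depth-first search for the (unique) path in a tree
        stack = [(src, [src])]
        while stack:
            node, p = stack.pop()
            if node == dst:
                return p
            for nxt in adj[node]:
                if nxt not in p:
                    stack.append((nxt, p + [nxt]))
        return []

    for i, j in combinations(range(len(nodes)), 2):
        shared = nodes[i] & nodes[j]
        if not shared:
            continue
        p = path(i, j)
        for a, b in zip(p, p[1:]):            # consecutive edges on the path
            if not shared <= (nodes[a] & nodes[b]):
                return False
    return True
```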
Let M be a set of MVDs over a set of attributes U. Notationally, we use M⁺ to denote the closure of M.
M has the intersection property if whenever the MVDs X →→ Z and Y →→ Z are implied by M (with Z
disjoint from both X and Y), then X ∩ Y →→ Z is also implied by M. Furthermore, M has the intersection
property if and only if M⁺ is implied by a join dependency ⋈R [BFMY83]. We shall use this property in
the proof of Theorem 6.
C. NNF [MNE96]
We now present NNF [MNE96] .
Definition 7: Let U be a set of attributes. Let M be a set of MVDs over U and F be a set of FDs over U.
Let T be a scheme tree such that Aset(T) ⊆ U. T is in NNF[MNE96] with respect to M ∪ F if the following
conditions are satisfied.
1. If D is the set of MVDs and FDs that hold for T with respect to M ∪ F, then D is equivalent to MVD(T) ∪ FD(T) on Aset(T).
2. For each nontrivial FD X → A that holds for T with respect to M ∪ F, the FD X → Ancestor(NA) also holds for T
with respect to M ∪ F, where NA is the node in T that contains A. □
D. NNF [OY 87a]
The definition of reduced MVDs is fundamental to NNF [OY87a] , NNF [OY89] , and NNF [RK87] and is now
adapted from [OY87a], [OY87b].
Definition 8: Let U be a set of attributes. Let M be a set of MVDs over U. An MVD X →→ W in M⁺ is
1. trivial if W ⊆ X or XW = U;
2. left-reducible if there is an X′ ⊂ X such that X′ →→ W is in M⁺;
3. right-reducible if there is a W′ ⊂ W such that X →→ W′ is a nontrivial MVD in M⁺;
4. transferable if there is an X′ ⊂ X such that X′ →→ W(X − X′) is in M⁺.
An MVD X →→ W is reduced if it is nontrivial, left-reduced (non-left-reducible), right-reduced (non-right-
reducible), and non-transferable. □
Let M1 and M2 be sets of MVDs over a set of attributes U. M1 is a cover of M2 if and only if M1⁺ = M2⁺.
Definition 9: Let U be a set of attributes. Let M be a set of MVDs over U. Let M⁻ = {X →→ W |
X →→ W is a reduced MVD in M⁺}. Elements in LHS(M⁻) are called keys of M. A minimal cover Mmin
of M is a subset of M⁻ that is a cover of M and no proper subset of Mmin is a cover of M. □
NNF[OY87a] disallows several configurations of scheme trees. To achieve this goal, transitive dependencies
and fundamental keys in a scheme tree are defined. Let M be a set of MVDs over a set of attributes U. Let
T be a scheme tree such that Aset(T) ⊆ U. We say that M implies MVD(T) on Aset(T) if for each MVD
X →→ Y in MVD(T), M implies an MVD X →→ Z on U such that Y = Z ∩ Aset(T). Assume now that M
implies MVD(T ) on Aset(T ). Let (V; W ) be an edge in T . Suppose that there is a key X of M such that
there exists a Z 2 DEP(X ) and Descendent(W there exist some sibling nodes W 1 , . ,
Wn of W in T such that
not hold for T with respect to M , then W is transitive redundant with respect to X in T . In this case X
→→ Descendent(W) on Aset(T) is a transitive dependency in Aset(T). Let V be a subset of U. The set of
fundamental keys on V, denoted by FK(V), is defined as FK(V) = {X | X is a key of M and X ⊆ V}.
When FDs are given, NNF[OY87a] uses the MVD counterparts of FDs. That is, each given FD X → Y is
replaced by the set of MVDs {X →→ A | A ∈ Y}. We are now ready to present NNF[OY87a].
Definition 10: Let U be a set of attributes. Let D be a set of MVDs and FDs over U. Let M be the
set of MVDs {X →→ Y | X →→ Y ∈ D} ∪ {X →→ A | X → Y ∈ D and A ∈ Y}. Let T be a scheme tree such that
Aset(T) ⊆ U. T is in NNF[OY87a] with respect to D if
1. M implies MVD(T) on Aset(T).
2. For each edge (V, W) of T, Ancestor(V) →→ Descendent(W) on Aset(T) is left- and right-reduced with
respect to M.
3. For each node N in T, there is no key X of M such that N is transitive redundant with respect to X.
4. The root of T is a key of M, and for each other node N in T, if FK(Descendent(N)) ≠ ∅, then N ∈ FK(Descendent(N)). □
E. NNF [OY89]
When FDs are given, this normal form uses the envelope sets defined in [YO92] to handle MVDs and
FDs together. Hence, it is not necessary to consider the MVDs counterparts of FDs. From a given set D
of MVDs and FDs, the authors first derive the envelope set E (D) of D and the normal form is defined in
terms of E (D) and D.
The envelope set E(D) of D is defined as {X →→ W | X ∈ LHS(D) and W ∈ DEP(X) and the FD X → W does not hold}.
Notice that the authors also redefine transitive dependencies and fundamental keys for this normal form,
which affect Conditions 3 and 4. The new definition of fundamental keys on a set of attributes V is denoted by
defined as fV " and there is no Y 2 LHS (E (D) min )
such that ;g. The new definition of transitive dependencies is lengthy and involved,
however. Furthermore, since we will not use this new definition of transitive dependencies in this paper, we
do not reproduce it here.
Definition 11: Let U be a set of attributes. Let D be a set of MVDs and FDs over U. Let E(D) be the
envelope set of D. Let T be a scheme tree such that Aset(T) ⊆ U. T is in NNF[OY89] with respect to D if
1. E(D) implies MVD(T) on Aset(T).
2. For each edge (V, W) of T, Ancestor(V) →→ Descendent(W) on Aset(T) is left- and right-reduced with
respect to E(D).
3. For each node N in T, there is no key X of E(D) such that N is transitive redundant with respect to X.
4. The root of T is a key of E (D), and for each edge (V; W ) in T , if D does not imply Ancestor(V
is a leaf node
of T . 2
F. NNF [RK87]
When FDs are given, this normal form uses the integrated approach described in [BK86] to handle MVDs
and FDs together. Hence, it is not necessary to consider the MVDs counterparts of FDs. From a given
set D of MVDs and FDs, the authors first derive another set M 0 of MVDs and the normal form is defined
mainly in terms of M 0 . Notice in the following that X + denotes the closure of a set of attributes X and that
NNF [RK87] uses the original definition of fundamental keys in Section II-D.
Definition 12: Let U be a set of attributes. Let D be a set of MVDs and FDs over U. Let M′ be the
set of MVDs derived from D by the integrated approach of [BK86]. Let T be a scheme tree such that Aset(T) ⊆ U. T is in
NNF[RK87] with respect to D if
1. M′ implies MVD(T) on Aset(T).
2. For each edge (V, W) of T, Ancestor(V) →→ Descendent(W) on Aset(T) is left- and right-reduced with
respect to M′.
3. For each node N in T, there is no key X of M′ such that N is transitive redundant with respect to X.
4. The root of T is a key of M′ and is in LHS(M′), and for each other node N in T, if FK(Descendent(N)) ≠ ∅, then N ∈ FK(Descendent(N)).
Further normalization is specified to remove redundancy caused by FDs. If T contains a node N such
that X → N where X ⊂ N, then replace N by N′, where N′ ⊂ N, N′ → N, and for no N″ ⊂ N′ does
N″ → N hold. In essence, each node N in T is replaced by one of its candidate keys defined in the usual
sense [Ram98], [SKS99]. □
III. Comparison of the normal forms
In this section, we compare the normal forms with respect to generalizing 4NF and BCNF, reducing
redundant data values in a nested relation, and providing flexibility in nested relation schemes design.
Furthermore, we also examine their algorithms to see if they can generate nested relation schemes that
preserve the set of given MVDs and FDs. Another contribution here is providing a more general algorithm
for NNF [MNE96] than the ones in [ME96], [ME98].
A. Generalizing 4NF & BCNF
As mentioned in Section II-A, flat relation schemes are also nested relation schemes. Here, we show
that NNF [MNE96] , NNF [OY87a] , NNF [OY89] , and NNF [RK87] all imply 4NF with respect to the given set of
MVDs and FDs if the nested relation scheme is actually flat. Furthermore, each of the normal forms also
implies BCNF when there are only FDs. However, the converses of these results are not true for NNF [OY87a] ,
NNF [OY89] , and NNF [RK87] .
Theorem 1: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let T be a single
node scheme tree such that Aset(T ) ' U . T is in NNF [MNE96] with respect to D if and only if T is in 4NF
with respect to D.
Proof. Theorem 6.1 in [MNE96]. 2
Lemma 1: Let U be a set of attributes. Let M be a set of MVDs over U. Let Z be a key of M and X ⊂ Z.
There exists a V ∈ DEP(X) such that Z ⊂ XV.
Proof. Lemma 3.5 in [OY87b]. □
Lemma 2: Let U be a set of attributes. Let M be a set of MVDs over U. Let Z be a key of M. Z is in
4NF with respect to M.
Proof. Let X ⊂ Z. By Lemma 1, there exists a V ∈ DEP(X) such that Z ⊂ XV. Hence, X →→ Vi does not
split Z for every Vi ∈ DEP(X). Therefore, Z is in 4NF with respect to M. □
Theorem 2: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let T be a single
node scheme tree such that Aset(T ) ' U . If T is in NNF [OY87a] with respect to D, then T is also in 4NF
with respect to D.
Proof. Since T is a single node scheme tree, it only consists of the root. By Condition 4 of NNF [OY87a] , the
root is a key of the set of MVDs M , which is defined in Definition 10. By Lemma 2, the root is in 4NF with
respect to M . By the definition of M , it is clear that the root is also in 4NF with respect to D. 2
Lemma 3: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let E (D) be the
envelope set of D. Let R ' U . If R is in 4NF with respect to E (D), then R is also in 4NF with respect to
D.
Proof. Proposition 4.2 in [YO92]. 2
Theorem 3: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let T be a single
node scheme tree such that Aset(T ) ' U . If T is in NNF [OY89] with respect to D, then T is also in 4NF
with respect to D.
Proof. Since T is a single node scheme tree, it only consists of the root. By Condition 4 of NNF [OY89] , the
root is a key of E (D). By Lemma 2, the root is in 4NF with respect to E (D). By Lemma 3, the root is also
in 4NF with respect to D. 2
Theorem 4: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let T be a single
node scheme tree such that Aset(T ) ' U . If T is in NNF [RK87] with respect to D, then T is also in 4NF
with respect to D.
Proof. By Condition 4 of NNF [RK87] , the root K is a key of M 0 and is in LHS (M 0 ), where the set of MVDs
M 0 is defined in Definition 12. Following this, K is replaced by one of its candidate keys, namely T . Since
K is a key of M 0 , by Lemma 2, K is in 4NF with respect to M 0 . We now show that T is in 4NF with respect
to D. Assume not, then there is a nontrivial MVD X !! Y that holds for T with respect to D such that
implies an MVD X !! Z on U such that . We now claim that
is a nontrivial MVD that holds for K with respect to M 0 such that X
Therefore, K is not in 4NF with respect to M 0 , which is a contradiction. We first show that X + !! W
holds for K with respect to M 0 . Since D implies X !! Z, D implies X Z. By the definition of M 0 ,
holds for K with respect
to M 0 . We next show that X + !! W is nontrivial. Since X !! Y is nontrivial and holds for T , neither Y
nor is not a candidate key. Therefore, there is an
attribute A 2 Y such that X 6! A. Since A 2 Y and
Therefore, A 2 W . Since X 6! A, A 62 X . Similarly, there is an attribute B such that
is nontrivial on K. 2
However, the converses of Theorems 2, 3 and 4 are all false, as the following example shows.
Example 5: Let R = AB be a relation scheme and let D = {A →→ B} be the given set of MVDs and FDs.
Trivially, R is in 4NF with respect to D. However, since D only contains a trivial MVD, each of the sets of
MVDs M, E(D), and M′ as defined in Definitions 10, 11 and 12, respectively, only contains trivial MVDs.
Thus, by Definition 9, none of these sets of MVDs has any key. Hence, R violates Condition 4 of NNF[OY87a],
NNF[OY89], and NNF[RK87]. □
Notice that the fact that NNF[MNE96], NNF[OY87a], NNF[OY89], and NNF[RK87] all imply BCNF when only FDs
are given follows immediately from Theorems 1, 2, 3 and 4.
Example 5 sounds trivial; however, its implications cannot be underestimated. First, D is clearly equivalent
to the empty set of MVDs, which is vacuously conflict-free. Hence, NNF [OY87a] , NNF [OY89] , and NNF [RK87] ,
at least in their current forms, have difficulties in generating nested relation schemes with respect to conflict-free
sets of MVDs. Second, many-to-many relationships between two attributes, such as the one between A
and B in Example 5, arise naturally in practice. It is important for a normal form to be well-defined for
these simple situations. We will elaborate more on this subject in Section III-C.
We conclude this subsection by stating that NNF [MNE96] is superior to NNF [OY87a] , NNF [OY89] , and
NNF [RK87] in generalizing 4NF and BCNF.
Prof (Article-Title)* (Publication-Location)*
Steve Programming in C++ USA
Programming in Ada Hong Kong
Pat Programming in Ada USA
Hong Kong
Prof Article-Title Publication-Location
Steve Programming in C++ USA
Steve Programming in Ada USA
Steve Programming in C++ Hong Kong
Steve Programming in Ada Hong Kong
Pat Programming in Ada USA
Pat Programming in Ada Hong Kong
Fig. 5. Nested relation with redundancy caused by an MVD.
B. Reducing Redundant Data Values
In this subsection, each of the normal forms is investigated with respect to reducing redundant data values
in a nested relation. Two cases are considered: when the given set of MVDs is not conflict-free and when
the given set of MVDs is conflict-free.
B.1 Non Conflict-Free Sets of MVDs
Some terminology is needed here. Given a set D of MVDs and FDs over a set of attributes U, and a
scheme tree T such that Aset(T) ⊆ U, T is consistent with D if for each MVD X →→ Y in MVD(T), D
implies an MVD X →→ Z on U such that Y = Z ∩ Aset(T). A scheme tree should be consistent with the
given MVDs and FDs; otherwise its scheme implies an MVD that does not follow from the given MVDs and
FDs. Hence, only consistent scheme trees are considered in this paper.
Theorem 5: Let U be a set of attributes. Let D be a set of MVDs and FDs over U . Let T be a scheme
tree such that Aset(T ) ' U . If T is consistent with D, then T is in NNF [MNE96] with respect to D if and
only if for every nested relation R on T , R does not have redundancy caused by any MVD or FD implied
by D that holds for T .
Proof. Follow immediately from Theorems 5.1 and 5.2 in [MNE96]. 2
Neither NNF [OY87a] , nor NNF [OY89] , nor NNF [RK87] has this property, as the following example shows,
which is Example 2.3.5 in [MNE96].
Example 6: Let U = {Prof, Article-Title, Publication-Location} and let M = {Prof →→ Article-Title,
Article-Title →→ Prof} be the given set of MVDs. Consider the nested relation and its total unnesting in
Figure 5. There are redundant data values in the nested relation. For example, the last Hong Kong value
under (Publication-Location)* is redundant because, if it is covered up, based on the MVD Article-Title →→
Publication-Location and the other data values in the nested relation, we can deduce that it must be Hong
Kong. Notice that M is not conflict-free because it violates Condition 2 of Definition 5. □
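Since the redundancy arguments compare a nested relation with its total unnesting, a small sketch of total unnesting may help. The code below is our own illustration: the one-level scheme of Figure 5 is hard-coded for brevity, and the attribute names are just those of the running example.

```python
from itertools import product
from typing import Dict, List, Set, Tuple

# Nested tuples of Fig. 5: an atomic Prof value plus two repeating groups.
nested = [
    {"Prof": "Steve",
     "Article-Title": ["Programming in C++", "Programming in Ada"],
     "Publication-Location": ["USA", "Hong Kong"]},
    {"Prof": "Pat",
     "Article-Title": ["Programming in Ada"],
     "Publication-Location": ["USA", "Hong Kong"]},
]

def total_unnest(tuples: List[Dict]) -> Set[Tuple[str, str, str]]:
    """Total unnesting of this one-level nested relation: within each nested
    tuple, take the cross product of the repeating groups and pair the result
    with the atomic value."""
    flat = set()
    for t in tuples:
        for title, loc in product(t["Article-Title"], t["Publication-Location"]):
            flat.add((t["Prof"], title, loc))
    return flat

assert len(total_unnest(nested)) == 6   # the six flat tuples shown in Fig. 5
```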
If a scheme tree T violates Condition 1 or Condition 2 of NNF [MNE96] , then by Lemmas 5.2, 5.3 and 5.4 in
[MNE96], there exists a nested relation on T that has redundancy caused by an MVD that holds for T . The
Prof (Article-Title)* Prof (Publication-Location)*
Steve Programming in C++ Steve USA
Programming in Ada Hong Kong
Pat Programming in Ada Pat USA
Hong Kong
Fig. 6. Decomposition of nested relation in Fig. 5.
scheme of the nested relation in Figure 5 violates Condition 1 of NNF [MNE96] and Example 6 is designed to
show the redundancy. By Theorem 5, the nested relation scheme in Figure 5 violates NNF [MNE96] . However,
as stated in [MNE96], this nested relation scheme satisfies NNF [OY87a] , NNF [OY89] , and NNF [RK87] . From this
example, we can see that NNF [OY87a] , NNF [OY89] , and NNF [RK87] all allow nested relations with redundancy.
To satisfy NNF [MNE96] , the nested relation in Figure 5 needs to be decomposed into two smaller nested
relations, which are shown in Figure 6. The number of data values, however, increases from 9 to 11.
Therefore, decomposing the nested relation in Figure 5 cannot remove the redundant data values and thus
cannot reduce the number of data values. On the other hand, NNF [OY87a] , NNF [OY89] , and NNF [RK87] all
accept the nested relation in Figure 5 and without requiring it to be decomposed. In general, the paths of
a scheme tree have to be separated to satisfy NNF [MNE96] . When the given set of MVDs is not conflict-free,
it is beneficial not to separate the paths for some situations, as Example 6 shows. On the other hand, as
stated in [BK86], there is rarely a satisfactory solution to normalization with respect to non conflict-free sets
of MVDs.
Notice that there is no FD in Example 6. However, even if some FDs are added to Example 6, our
arguments still hold. Suppose we add an attribute Dept and an FD Prof ! Dept to Example 6. We also
modify the nested relation scheme in Figure 5 by adding Dept as a child to the root Prof. As in Figure 1, Steve
is a professor in the Mathematics Department and Pat is a professor in the Computer Science Department.
This new scheme tree still satisfies NNF [OY87a] and NNF [OY89] and it still violates NNF [MNE96] . Furthermore,
the modified nested relation and the nested relation in Figure 5 both have the same redundant data values.
Interestingly, if the given set of MVDs has the intersection property, then Conditions 1 and 2 of NNF [OY87a]
imply Condition 3 of NNF [OY87a] .
Theorem 6: Let U be a set of attributes. Let M be a set of MVDs over U that has the intersection
property. Let T be a scheme tree such that Aset(T) ⊆ U. If T satisfies Conditions 1 and 2 of NNF[OY87a]
with respect to M, then T also satisfies Condition 3 of NNF[OY87a] with respect to M.
Proof. Suppose T does not satisfy Condition 3 of NNF [OY87a] . We shall derive a contradiction. Assume there
is an edge (V; W ) in T and there is a key X of M such that there exists a Z 2 DEP(X ) and Descendent(W )
Assume also that there are some sibling nodes W 1 , . , Wn of W in T such that Y
does not hold for T with respect
to M . Ancestor(V
Ancestor(V ); otherwise Ancestor(V ) !! Descendent(W ) on Aset(T ) is not left-reduced with respect to M
and thus T violates Condition 2 of NNF [OY87a] . Hence, X 6' Ancestor(V ). Since T satisfies Condition 1 of
implies an MVD Ancestor(V ) !! Z 0 on U such that Descendent(W
the nodes in T are pairwise disjoint,
is disjoint from both Z). By the intersection property
of M ,
proper subset of Ancestor(V ). Thus, Ancestor(V ) !! Descendent(W ) on Aset(T )
is not left-reduced with respect to M , which is a contradiction. 2
We conclude this subsection by stating that NNF [OY87a] , NNF [OY89] , and NNF [RK87] are all superior to
NNF [MNE96] in reducing redundant data values with respect to non conflict-free sets of MVDs. In addition,
if the given set of MVDs has the intersection property, then Condition 3 of NNF [OY87a] is redundant. This
implies that if the given set of MVDs is conflict-free, then we do not need to check for Condition 3 of
NNF [OY87a] since conflict-free sets of MVDs have the intersection property [BFMY83].
B.2 Conflict-Free Sets of MVDs
Here, we shall show that if a scheme tree T is in NNF [OY87a] with respect to a conflict-free set of MVDs,
then T is also in NNF [MNE96] . The converse of this result, however, is not true. Furthermore, if a scheme
tree T is consistent with a conflict-free set of MVDs, and each path (defined below) of T is in 4NF with
respect to M , then the nesting structure of T is able to squeeze out redundant data values. Since when
there is no given FD, NNF [OY87a] , NNF [OY89] , and NNF [RK87] are all equivalent, therefore, these results also
hold for NNF [OY89] and NNF [RK87] . Notice that we do not consider FDs here; instead, FDs are considered
in Section III-D.
Given a scheme tree T and a leaf node V of T , Ancestor(V ) is called a path of T . The set of paths of T
is denoted by Path(T ). Notice that this definition of a path is different from the one in [MNE96], in which
a path is defined as the list of nodes from V to the root of T . However, defining a path as Ancestor(V ) is
convenient for this paper.
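For concreteness, Path(T) can be computed as follows. This is our own sketch, using a lightweight tuple representation of scheme trees rather than any structure from the cited papers.

```python
from typing import FrozenSet, List, Set, Tuple

# A scheme tree is represented here as (attribute set, list of child subtrees).
Tree = Tuple[FrozenSet[str], List["Tree"]]

def paths(tree: Tree, inherited: FrozenSet[str] = frozenset()) -> Set[FrozenSet[str]]:
    """Path(T): for each leaf node V of T, the set of attributes Ancestor(V)."""
    attrs, children = tree
    ancestor = inherited | attrs
    if not children:
        return {ancestor}
    return set().union(*(paths(child, ancestor) for child in children))

# The scheme tree of Fig. 3, written bottom-up.
fig3: Tree = (frozenset({"Dept", "Chair"}),
              [(frozenset({"Prof"}),
                [(frozenset({"Hobby"}), []),
                 (frozenset({"Matriculation"}),
                  [(frozenset({"Student"}),
                    [(frozenset({"Interest"}), [])])])])])
assert len(paths(fig3)) == 2   # one path per leaf: Hobby and Interest
```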
Lemma 4: Let U be a set of attributes. Let M be a conflict-free set of MVDs over U . Let T be a scheme
tree such that Aset(T ) ' U . If T is in NNF [OY87a] with respect to M , then Path(T ) is in 4NF with respect
to M .
Proof. Stated in the conclusions of [OY87a]. 2
Lemma 5: Let U be a set of attributes. Let T be a scheme tree such that Aset(T ) ' U . If MVD(T ) does
not imply an MVD X !! Y on Aset(T ), then X !! Y splits a path of T .
Proof. Lemma 5.1 in [MNE96]. 2
Lemma 6: Let U be a set of attributes. Let M be a conflict-free set of MVDs over U. Let T be a scheme
tree such that Aset(T) ⊆ U and T is consistent with M. Let D be the set of MVDs implied by M that hold
for T. If Path(T) is in 4NF with respect to M, then MVD(T) implies D on Aset(T).
Proof. As mentioned in Section II-B, a conflict-free set of MVDs allows a unique 4NF decomposition
[BFMY83]. Since Path(T ) is in 4NF with respect to M , which is conflict-free, and T is consistent with M ,
Path(T ) is the unique 4NF decomposition of Aset(T ) with respect to M . Now, suppose MVD(T ) does not
imply D. Then, there is an MVD X →→ Y in D such that MVD(T) does not imply X →→ Y. By Lemma 5,
X →→ Y splits a path of T and thus Path(T) is not the unique 4NF decomposition of Aset(T) with respect to
M, which is a contradiction. □
Notice that if the given set of MVDs is not conflict-free, then Lemma 6 does not hold, as demonstrated
by Example 6.
Theorem 7: Let U be a set of attributes. Let M be a conflict-free set of MVDs over U . Let T be a scheme
tree such that Aset(T ) ' U . If T is in NNF [OY87a] with respect to M , then T is also in NNF [MNE96] with
respect to M .
Proof. Since there is no given FD, we only need to show that T satisfies Condition 1 of NNF[MNE96]. Since
T satisfies Condition 1 of NNF[OY87a], T is consistent with M. Thus, the set D defined in Condition 1 of
NNF[MNE96] implies MVD(T) on Aset(T). By Lemma 4, Path(T) is in 4NF with respect to M, and then
by Lemma 6, MVD(T) implies the set D defined in Condition 1 of NNF[MNE96] on Aset(T). □
Theorem 7 says that if a given set of MVDs is conflict-free, a nested relation scheme that is acceptable
to NNF [OY87a] is also acceptable to NNF [MNE96] . The converse of Theorem 7, however, is not true, as
demonstrated by Example 5 and Theorem 1. Notice that the given set of MVDs in Example 5 is equivalent
to the empty set of MVDs, which is clearly conflict-free.
Lemma 7: Let U be a set of attributes. Let M be a conflict-free set of MVDs over U . Let T be a scheme
tree such that Aset(T ) ' U . If T is in NNF [MNE96] with respect to M , then Path(T ) is in 4NF with respect
to M .
Proof. Assume there is a path P in Path(T) such that P is not in 4NF with respect to M. Then there is a
nontrivial MVD X →→ Y that holds for P. Since there is no given FD, MVD(T) does not imply X →→ Y on
Aset(T), and thus T violates Condition 1 of NNF[MNE96], which is a contradiction. □
When there is no given FD, the main cause of the problem of Example 6 is that the given set of MVDs is not
conflict-free. If a set of MVDs is not conflict-free, there may be more than one possible 4NF decomposition.
Hence, even if each path of a scheme tree T is in 4NF, there may still be MVDs that hold for T which do
not follow from MVD(T ). These MVDs are the ones that cause the redundancy. Therefore, even after the
paths of a scheme tree are separated to satisfy NNF [MNE96] , the redundant data values remain.
This problem will not happen with conflict-free sets of MVDs. By Lemmas 4 and 6, if a scheme tree T is
in NNF [OY87a] , then all the MVDs that hold for T follow from MVD(T ). Thus, the nesting structure of T
is able to squeeze out the redundant data values. By Lemmas 6 and 7, NNF [MNE96] also has this property.
Since when there is no given FD, NNF [OY87a] , NNF [OY89] , and NNF [RK87] are all equivalent, NNF [OY89] and
NNF [RK87] both have this property.
Example 7: Let U = {Prof, Student, Hobby, Hobby-Equipment} and let M = {Prof →→ Student, Hobby →→
Hobby-Equipment} be the given set of MVDs. Notice that M is conflict-free. Consider the nested relation in
Figure 7. Its scheme violates NNF[MNE96], NNF[OY87a], NNF[OY89], and NNF[RK87] because one of its paths,
namely Prof, Hobby, Hobby-Equipment, is not in 4NF with respect to M . Due to this violation, there are
redundant data values in the nested relation. As we can see, the equipments of hiking are stored twice in the
nested relation. After we decompose this nested relation into two smaller nested relations in Figure 8, the
redundant data values are removed. Notice that every path of the two nested relation schemes in Figure 8
is in 4NF and both nested relation schemes satisfy NNF [MNE96] , NNF [OY87a] , NNF [OY89] , and NNF [RK87] . 2
We conclude this subsection by stating that when no FD is given, NNF [MNE96] , NNF [OY87a] , NNF [OY89] ,
and NNF [RK87] are all able to reduce redundant data values with respect to conflict-free sets of MVDs.
Furthermore, if there is no given FD, NNF [OY87a] implies NNF [MNE96] with respect to conflict-free sets of
MVDs. The same results hold for NNF [OY89] and NNF [RK87] .
Prof (Student)* (Hobby (Hobby-Equipment)*)*
Pat Lee Hiking Water Bottle
Hat
Steve Carter Hiking Water Bottle
Hat
Dance Dancing Shoe
Costume
Chi-Ming null null
Fig. 7. Nested relation with redundancy caused by an MVD.
Prof (Student)* (Hobby)* Hobby (Hobby-Equipment)*
Pat Lee Hiking Hiking Water Bottle
Steve Carter Hiking Hat
Dance Dance Dancing Shoe
Chi-Ming null Costume
Fig. 8. Decomposition of nested relation in Fig. 7.
C. Design Flexibility
Here, we show several examples to illustrate how flexible each of the normal forms is in nested relation
schemes design. It turns out that NNF [MNE96] allows greater flexibility in nested relation schemes design
than NNF [OY87a] , NNF [OY89] and NNF [RK87] .
Example 8: Let us replace A by Child and B by Toy in Example 5. The following three nested relation
schemes Child Toy, Child (Toy)*, and Toy (Child)* are all in NNF [MNE96] vacuously. Choosing which one
to use, of course, depends on the application in hand. However, since there is no "key," as mentioned in
Example 5, all of these nested relation schemes violate NNF [OY87a] , NNF [OY89] , and NNF [RK87] . Notice that
many-to-many relationships, such as the one between Child and Toy, occur commonly. 2
Example 9: Let U = {Prof, Hobby, Hobby-Equipment} and let M = {Hobby →→ Prof, Hobby →→ Hobby-Equipment} be the
given set of MVDs. Notice that M is conflict-free. The only nested relation scheme allowed by NNF[OY87a],
NNF [OY89] , and NNF [RK87] is Hobby (Prof )* (Hobby-Equipment)* since Hobby is the only key. In this nested
relation scheme, data are forced to be stored in the point of view of Hobby, which is fine if this is what the
application in hand dictates. Notice that this nested relation scheme also satisfies NNF [MNE96] . Now suppose
we need to store the data in the point of view of Prof and the nested relation schemes that are needed are
Prof (Hobby)* and Hobby (Hobby-Equipment)*. Both of these nested relation schemes satisfy NNF [MNE96] .
However, since Prof is not a key, Prof (Hobby)* violates Conditions 4 of NNF [OY87a] , NNF [OY89] , and
NNF [RK87] . 2
We now turn to an example that involves FDs.
Example 10: As discussed in [MNE96], the scheme tree in Figure 3 is in NNF [MNE96] , but it violates
NNF [OY87a] , NNF [OY89] and NNF [RK87] . There are several violations. Since the rationale behind the violations
for NNF [OY89] and NNF [RK87] is quite similar to that of NNF [OY87a] , we focus on NNF [OY87a] .
We now argue that Matriculation cannot be an inner node in the scheme tree. Consider the set of attributes
Descendent(Matriculation), which is equal to fMatriculation, Student, Interestg. Student is in
FK (Descendent(Matriculation)) because Student is a key and is contained in Descendent(Matriculation).
However, since Matriculation is not a key, therefore it is not in FK (Descendent(Matriculation)) and thus
the scheme tree violates a subcondition of Condition 4 of NNF [OY87a] . Moreover, Dept Chair cannot be the
root since Dept Chair is not a key. Therefore, the scheme tree violates another subcondition of Condition 4
of NNF [OY87a] . With some work, the reader can check that these two violations also apply for NNF [OY89]
and NNF [RK87] . 2
Observe that Examples 8, 9 and 10 are all quite natural and reasonable. Unlike Example 6, these examples
are easily created and are all very practical. By these examples, we can see that Conditions 4 of both
NNF [OY87a] and NNF [OY89] are the most problematic and NNF [RK87] inherits the same problem. Due to these
limitations, NNF [OY87a] , NNF [OY89] and NNF [RK87] all restrict attribute clustering and design flexibility. We
conclude this subsection by stating that NNF [MNE96] provides greater design flexibility than the other three
normal forms.
D. Algorithms & Nested Database Schemes
In this subsection, we first present an algorithm that generates scheme trees in NNF [MNE96] from an
acyclic database scheme and a set of FDs where each given FD is embedded in a relation scheme in the given
database scheme (defined below). This algorithm generalizes the algorithms in [ME96], [ME98], in which
only γ-acyclic database schemes are considered [Fag83]. The acyclic database schemes that we consider in
this paper, however, are α-acyclic database schemes, which are the most general type of acyclic database
schemes [Fag83]. In Section II-B, some of the numerous equivalent definitions of α-acyclic database schemes
are presented. To avoid being too wordy, however, "an acyclic database scheme" simply means "an α-acyclic
database scheme" in this paper. After presenting the algorithm of NNF[MNE96], we shall compare the
algorithms of all of the normal forms and the nested database schemes that they generate.
In addition to the assumptions that the given database scheme is acyclic and each given FD is embedded
in a relation scheme of the given database scheme, we also assume that each relation scheme is in BCNF
with respect to the given FDs. These assumptions are justified as follows. First, in Section II-B, we have
already mentioned the importance of acyclic database schemes. Second, as stated in [FMU82], most FDs
that are relevant for data structuring are embedded in some relation schemes of the given database scheme.
Furthermore, most FDs that are derived from the semantic data models that we use in [ME96], [ME98]
are indeed embedded in some relationship sets which roughly correspond to relation schemes in this paper.
Third, as stated in [ME98], in most cases relation schemes are of small arity and a majority of them are
binary, thus we believe that most relation schemes in practice are in BCNF.
D.1 Algorithm for NNF [MNE96]
We now present Algorithm 1, which is the algorithm that generates nested relation schemes in NNF [MNE96] .
Example 11: Consider the set U of attributes, the set M of MVDs, and the set F of FDs in Figure 4.
To save space, we abbreviate Dept as D, Chair as C, Prof as P , Hobby as H , Hobby-Equipment as E,
Matriculation as M, Student as S, and Interest as I. Notice that M and F together are equivalent to ⋈R and F,
where R = {DC, DP, PH, HE, SP, SI, SM}.
At Step 1, a join tree J of R is derived and is shown in Figure 9. Initially all the nodes in J are unmarked.
At Step 2.1, suppose DC is selected as the seed relation scheme (R seed ) of a new scheme tree and the single
path scheme tree TDC is simply created as DC. The node DC in J is then marked and is entered into L.
Algorithm 1
Input: An acyclic database scheme R = {R1, ..., Rn} and a set of nontrivial FDs F where each FD X → Y
in F is embedded in an Ri (XY ⊆ Ri). Furthermore, each Ri is in BCNF with respect to F.
Without loss of generality, no Ri is a subset of another Rj if i ≠ j.
Output: Nested relation schemes that are in NNF[MNE96] with respect to R and F.
Internal Data Structure: A join tree J of R and a first-in first-out queue L of relation schemes that is initially
empty.
1 Use the Graham Reduction to derive J from R [Mai83]. We begin with the graph with nodes R1,
. . ., Rn and with no edge. Let R′i be Ri after applying zero or more node removals. Each time R′i
is removed because R′i ⊆ R′j, add an edge between Ri and Rj in the graph. Eventually n − 1
edges will have been added to the graph and the resulting graph is J. Notice that since there may
be more than one possible reduction sequence, an acyclic database scheme may have more than
one join tree.
2 While there is an unmarked node in J , do:
2.1 Select an unmarked node R seed in J . Create a single path scheme tree TR seed
from R seed .
Mark R seed and enter R seed into L.
2.2 While L is not empty, do:
2.2.1 Let RM be the first marked relation scheme in L. Remove RM from L.
2.2.2 For each unmarked neighbor RU of RM in J , do:
2.2.2.1 If there is a node N in TRseed such that RU → Ancestor(N) with respect to F and
RU ∩ Aset(TRseed) ⊆ Ancestor(N) (if there are several nodes that satisfy these
conditions, choose the lowest one), modify TRseed as follows:
2.2.2.1.1 If Ancestor(N) → RU with respect to F, add the attributes of RU − RM
to the node N. Otherwise, create a single path scheme tree TRU from (RU − RM) and
attach the root of TRU as a child of N.
2.2.2.1.2 Mark RU and enter RU into L.
3 A scheme tree T produced in Step 2 can be modified by moving an attribute A in Aset(T) up or
down the nodes in the path in which A appears, as long as T satisfies Condition 2 of NNF[MNE96] and
Path(T) remains the same.
At Step 2.2.1, RM becomes DC and it has one unmarked neighbor DP in J . The node N in Step 2.2.2.1 is
DC and at Step 2.2.2.1.1, P is attached as a child of DC. DP is then marked and is entered into L. Back
at Step 2.2.1, RM becomes DP and it has two unmarked neighbors PH and SP in J . Successively H and S
become children of P in TDC and both PH and SP are marked and are entered into L. Back at Step 2.2.1,
RM becomes PH and it has one unmarked neighbor HE. Since there is no node N in TDC such that HE →
Ancestor(N), we cannot extend the path that contains H. Back at Step 2.2.1, RM becomes SP and it has
two unmarked neighbors SI and SM in J. For SI, the node N in Step 2.2.2.1 is S and I becomes a child of
S in TDC. For SM, the node N in Step 2.2.2.1 is also S. But since Ancestor(N) → SM, we put the attribute
M into the node N and thus the node N becomes SM. Notice that in this example, the order of choosing
unmarked neighbors at Step 2.2.2 does not make a difference in TDC . The only unmarked node left in J is
HE and thus in the second iteration of Step 2, a single path scheme tree created from HE is generated and
[Figure 9: the join tree J of Example 11, with edges DC-DP, DP-PH, PH-HE, DP-SP, SP-SI, and SP-SM.]
Fig. 9. Join tree of join dependency in Example 11.
[Figure 10: the join tree of Example 12, with edges ABC-ACE, ACE-CDE, and ACE-AEF.]
Fig. 10. Join tree of join dependency in Example 12.
then HE is marked. In Step 3, we may choose to move M as a parent of S in TDC and Algorithm 1 has
generated the scheme tree in Figure 3. 2
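Step 1 of Algorithm 1 can be sketched as follows. This is our own reading of the Graham reduction described above (and of [Mai83]), not code from [MNE96], [ME96], or [ME98]; because the reduction order is not unique, the join tree it returns for Example 11 may legitimately differ from the one in Figure 9.

```python
from typing import Dict, FrozenSet, List, Optional, Set, Tuple

def graham_join_tree(schemes: List[FrozenSet[str]]
                     ) -> Optional[List[Tuple[int, int]]]:
    """Run the Graham reduction on a database scheme and return the edges of a
    join tree J (as index pairs into `schemes`), or None if the scheme is not
    acyclic."""
    current: Dict[int, Set[str]] = {i: set(r) for i, r in enumerate(schemes)}
    edges: List[Tuple[int, int]] = []
    changed = True
    while changed and len(current) > 1:
        changed = False
        # (a) remove attributes that occur in exactly one remaining scheme
        for i in current:
            unique = {a for a in current[i]
                      if all(a not in current[j] for j in current if j != i)}
            if unique:
                current[i] -= unique
                changed = True
        # (b) remove a scheme contained in another one; record a join-tree edge
        for i in list(current):
            for j in current:
                if i != j and current[i] <= current[j]:
                    edges.append((i, j))
                    del current[i]
                    changed = True
                    break
            if changed:
                break
    return edges if len(current) == 1 else None

# Example 11's database scheme, with one-letter attribute abbreviations.
R = [frozenset(s) for s in ("DC", "DP", "PH", "HE", "SP", "SI", "SM")]
assert graham_join_tree(R) is not None   # R is acyclic, so a join tree exists
```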
The database scheme R in Example 11 is γ-acyclic. The following example shows that Algorithm 1 is able
to generate scheme trees in NNF[MNE96] from an α-acyclic, but not γ-acyclic, database scheme.
Example 12: Let R = {ABC, CDE, AEF, ACE}. The join tree generated at Step 1
is shown in Figure 10. Suppose we select ABC as the seed relation scheme and the single path scheme tree
TABC created at Step 2.1 is AC - B. ABC has only one unmarked neighbor ACE. Later E becomes a child
of AC in TABC . As the reader may verify, CDE and AEF cannot be attached to TABC . In the second and
the third iterations of Step 2, two single path scheme trees are created from CDE and AEF respectively.
Notice that CDE and AEF cannot appear in the same scheme tree in this example because they are not
neighbors in the join tree in Figure 10. 2
The concept of a closed set of relation schemes is crucial to the proof of correctness of Algorithm 1. Let
R = {R1, ..., Rn} be a database scheme over a set of attributes U. Let S ⊆ R and let S denote the set of
attributes ∪Ri∈S Ri. S is closed if for every Ri ∈ R, there is an Rk ∈ S such that Ri ∩ S ⊆ Rk. Notice that if R is acyclic, then any
closed set of relation schemes of R is also acyclic [BFMY83].
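The closedness test transcribes directly. The following sketch is our own illustration, with Example 12's schemes used only as test data.

```python
from typing import FrozenSet, List

def is_closed(S: List[FrozenSet[str]], R: List[FrozenSet[str]]) -> bool:
    """S (a subset of the database scheme R) is closed if for every R_i in R
    there is an R_k in S with R_i ∩ (∪S) ⊆ R_k."""
    union_s = frozenset().union(*S) if S else frozenset()
    return all(any((r_i & union_s) <= r_k for r_k in S) for r_i in R)

# Example 12: {CDE, AEF} is not closed with respect to R = {ABC, CDE, AEF, ACE}.
R12 = [frozenset(x) for x in ("ABC", "CDE", "AEF", "ACE")]
assert not is_closed([frozenset("CDE"), frozenset("AEF")], R12)
```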
Also, the proof of correctness of Algorithm 1 depends on Lemmas 8 and 10 as well, which characterize the
set of MVDs and the set of FDs that hold for a scheme tree generated by Algorithm 1.
Let ⋈R be a join dependency, where R = {R1, ..., Rn} is a database scheme over a set of attributes U.
⋈R implies numerous MVDs. However, every MVD implied by ⋈R is also implied by an MVD in MVD(⋈R).
MVD(⋈R) is the set of MVDs of the form X →→ Y1, where Y1 is the union of the Ri with i ∈ S1, Y2 is the union of
the Ri with i ∈ S2, S1 ∪ S2 = {1, ..., n}, and X = Y1 ∩ Y2; such an MVD corresponds to the binary join dependency ⋈{Y1, Y2}.
As shown in Chapter 13 in [Mai83], R is acyclic if and only if MVD(⋈R) and ⋈R are equivalent.
Lemma 8: Let U be a set of attributes. Let R = {R1, ..., Rn} be a database scheme over U. Let S ⊆ R
and let S denote the set of attributes ∪Ri∈S Ri. If S is closed and D is the set of MVDs that hold for S with respect to ⋈R, then D
is equivalent to MVD(⋈S).
Proof. Let X !! Y be an MVD implied by 1R where X ' S. Thus, X !! (Y " S) is in D. X !! Y is
equivalent to the join dependency 1fXY, X (U \Gamma XY )g. 1R implies 1fXY, X (U \Gamma XY )g if and only if R i
' XY or R i Hence, for each R i in S, R i ' XY or R i ' X (U \Gamma XY )g. Thus,
1 For every MVD X !! Y on U , X !! Y is equivalent to the join dependency 1fXY, X (U \Gamma XY )g. 1R implies 1fXY,
only if R i
' XY or R i
implied by an MVD
in MVD(1R).
implied by MVD(1S ). Now consider an MVD V !! W in MVD(1S ), which is equivalent
to the join dependency 1fVW, V (S \Gamma VW )g. Let us construct an MVD implied by 1R which is denoted
by 1fL, Rg. Initially, As we add the relation schemes in R \Gamma S to either
the L set or the R set, since S is closed, L " R is always equal to V . Thus, 1R implies an MVD V !! Z on
U such that
In the following, if G is a set of FDs and W is a set of attributes, then G[W] = {X → Y ∈ G | XY ⊆ W}.
Lemma 9: Let R = {R1, ..., Rn} be a database scheme. Let F be a set of FDs such that each FD in F
is embedded in an Ri. ⋈R and F imply an FD X → Y if and only if F implies X → Y.
Proof. Lemma 1 in [GY84]. □
Lemma 10: Let U be a set of attributes. Let R = {R1, ..., Rn} be an acyclic database scheme over U
and let J be a join tree of R. Let F be a set of FDs over U such that each FD in F is embedded in an Ri.
Let J′ be a connected subtree of J. Let S be the set of nodes (relation schemes) in J′ and let S denote ∪Ri∈S Ri.
If D is the set of FDs that hold for S with respect to ⋈R and F, then D is equivalent to ∪Ri∈S F[Ri].
Proof. By Lemma 9, 1R has nothing to do with closures of sets of attributes. The if-part is obvious because
for each R i 2 We proceed to the only-if part. Without loss of generality, assume the
right-hand side of every FD in F is a single attribute. We first show that if W
embedded in a relation scheme in S. Assume not, we shall derive a contradiction. Let RB
be a relation scheme in R \Gamma S such that WB ' RB . Since J 0 is connected, there is a unique node (relation
R u in J 0 such that among all the nodes in J 0 , R u is the closest node to RB . By Condition 2 of
Definition 6, for each R j 2 S, R j " RB ' R u . Now, since WB ' S, therefore WB ' R u , which is a
contradiction.
A be an FD in D. Since X ! A 2 D, XA ' S and there is a derivation sequence Z of X ! A
by using the FDs in F . Let Z be the following derivation sequence:
A.
If each A i that appears in Z is in S, then all V i 's are subsets of S. This implies that every V i A i is a subset
of S. Thus, by what we have just proved, each embedded in a relation scheme in S and the proof
is done. Now assume some A i 's are not in S. Suppose A p is the first such attribute in Z. Since
and A 2 S, there is a A q that appears in Z such that A 1 2 S, . , A
and A q 2 S. Let RA i
be a relation scheme in R that embeds connected and RAp is not
a node in J 0 , Z can be arranged in such a way that V p+1 , . , V q all contain some of these attributes A p ,
. , A q\Gamma1 . This implies that we can arrange Z such that RAp , . , all belong to the same remaining
connected subtree J 00 after we remove all the nodes in J 0 from J and all the edges in J that are incident on
the nodes in J 0 . Since J 0 is a connected subtree, there is a unique node R v in J 0 such that among all the
nodes in J 0 , R v is the closest node to any node in J 00 . We now show by induction that X
Basis: Since A 1 2 S, . , A and RAp is in J 00 , RAp 62 S; therefore by
Condition 2 of Definition 6,
Induction: Assume X l , for every l where p l ! q. Now consider l 1. By the construction
of Z, V l+1 ' A p . A l X
, which is not a node in J 0 , V l+1 '
RA l+1
. This implies that V l+1 ' A p . A l (X
). But since RA l+1
is not a node in J 0 and X
' S, by Condition 2 of Definition 6, X
R v ). By the induction hypothesis, X and we conclude that
Now since V q ! A q is embedded in R q , R q
By using the same reasoning, it is possible to show that for each
implied by [R i
Lemma 11: Let U be a set of attributes. Let T be a scheme tree such that Aset(T) ⊆ U. MVD(T) is
equivalent to the join dependency ⋈Path(T) on Aset(T).
Proof. Proposition 4.1 in [OY87a]. □
Lemma 12: All the nodes in TR seed
that satisfy the conditions in Step 2.2.2.1 are on the same path.
Proof. Assume there are two nodes N and N 0 that satisfy the conditions. That is, RU
This implies that
and RM are neighbors in J , RU "
Aset(TR seed
) and since every given FD is embedded in a relation scheme of the given database scheme, RU
only if
will force N and N 0 to be on the same path at Step 2.2.2.1.1. 2
In the following proof, we assume the reader is familiar with the chase, which is described in Chapter 8 in
[Mai83].
Theorem 8: Algorithm 1 is correct.
Proof. It is obvious that Step 1 generates a join tree for the given acyclic database scheme. Next we show
that every tree generated by Algorithm 1 satisfies Definitions 1 and 4. In particular, we show that the nodes
in a generated tree are all nonempty and are pairwise disjoint. By assumption, no relation scheme is a
subset of another relation scheme in the given database scheme. Also, since RU and RM are neighbors in
the derived join tree, RU " seed
which is not empty, does not
intersect with Aset(TR seed
). Furthermore, the single path scheme trees generated at Steps 2.1 and 2.2.2.1.1
thus they do not have empty node. Step 3 simply will not violate these two
definitions. Thus, Algorithm 1 generates trees that satisfy Definitions 1 and 4.
Obverse that Algorithm 1 generates a scheme tree from the nodes in a connected subtree of the derived
join tree. Since the nodes in a connected subtree constitutes a closed set of relation schemes, Algorithm 1
generates a scheme tree from a closed set of relation schemes. In fact, this is exactly the purpose of the join
tree created at Step 1. As an example, fCDE, AEFg in Example 12 is not a closed set of relation schemes
and thus CDE and AEF cannot appear in the same scheme tree. The purpose of the first-in first-out queue
in Algorithm 1, however, is to ensure that a scheme tree is built level by level.
Next we need to characterize the set D of MVDs and FDs that hold for a scheme tree T generated by
Algorithm 1. Let S be the closed set of relation schemes from which T is constructed and let S be [R i 2S R i .
Thus, Aset(T ) is equal to S. It turns out that D is equivalent to [R i 1S. The proof is as
follows. D implies [R i the given database scheme R is acyclic and
S is closed, S is acyclic. Thus, MVD(1S ) and 1S are equivalent. However, since S is closed, by Lemma 8,
Therefore, D implies 1S. We now consider the reverse implication. By Lemma 10, the
set of FDs in D is equivalent to [R i Y be an MVD in D. By the definition of D,
R and F imply an MVD X !! Z on U (= [R i 2RR i ) such that X ' S, and us run the
chase on the tableau TXZ for X !! Z. TXZ has two rows r 1 and r 2 . r 1 has a's under XZ-columns and b's
elsewhere and r 2 has a's under X(U \Gamma XZ)-columns and b's elsewhere. Since R and F imply X !! Z on
U , by Lemma 9, we can assume that we apply the FDs in F until no more b can be changed into a, and
then for every R ag or R i ' ag. This statement also holds for every
using the FDs in [R i instead of the FDs in F . By Lemma 9
again, the FDs in [R i are strong enough to ensure that for every R i 2 S, R i ag or
ag. Thus, [R i imply X !! Y on S.
We are now ready to prove by induction that a scheme tree T generated by Algorithm 1 is in NNF [MNE96] .
The induction is on the size of S, which is denoted by jSj.
Basis: When is created at Step 2.1. MVD(T ) is a set of trivial MVDs and thus is equivalent
to 1S, which is a trivial join dependency. By definition, FD(T ) is equivalent to [R i
Condition 1 of NNF [MNE96] . By assumption, every given relation scheme is in BCNF. Hence, T
also satisfies Condition 2 of NNF [MNE96] . At Step 3, Path(T ) remains the same and thus T still satisfies
NNF [MNE96] .
Induction: Assume the statement is true for every closed set of relation schemes S of R where 1 jSj k.
Now consider when 1. In the connected subtree J 0 from which S is defined, let N k+1 be a node in
J 0 that has exactly one neighbor in J 0 . We denote the scheme tree before N k+1 is added by TS \GammaN k+1
and
the scheme tree after N k+1 is added by TS . The goal is to show that MVD(T S
to [R i 1S. Observe that FD(T S ) is equivalent to [R i 2S F
is equivalent to the join dependency 1Path(T ) on Aset(T ) for any scheme tree T . Hence, 1S implies
for each R i in S, there is a path P in Path(T S ) such that R i ' P . Therefore, we are
left to show that 1Path(T S us run the chase on the tableau T1S for 1S. Notice
that T1S has all the rows in T1S \GammaN k+1
, which is the tableau for 1(S \Gamma N k+1 ). Observe that FD(T S \GammaN k+1
' FD(T S ). By the induction hypothesis, 1Path(T S \GammaN k+1
consider adding N k+1 into TS \GammaN k+1
. By Lemma 12, there is only one node N in TS \GammaN k+1
that satisfies the
conditions at Step 2.2.2.1. One more path will be added to TS \GammaN k+1
or the paths
that contain N will be enlarged if Ancestor(N These changes can be done in T1S by using the
FDs in [R i which is equivalent to FD(T S ). Thus, for each path P in Path(T S ), there is a row r
in T1S such that P ' fC j ag. Therefore, 1Path(T imply 1S and thus MVD(T
imply 1S. Hence, Condition 1 of NNF [MNE96] is satisfied. T satisfies Condition 2 of NNF [MNE96] is
implied by the fact that every given relation scheme is in BCNF and by the conditions in Step 2.2.2.1. At
remains the same and thus T still satisfies NNF [MNE96] . 2
[Figure 11: an instance I on the scheme A(B(C)*)*, in which the B-value 1 appears under two A-values, 5 and 6, each time with the same associated C-values 3 and 4.]
Fig. 11. The instance I on scheme A(B(C)*)*.
D.2 Advantages of FDs
We first discuss how FDs can be used in constructing large scheme trees, and thus big clusters of data. If
U = ABC with the MVD B →→ C and T is a scheme tree with A as the root, B as the child of A, and C
as the child of B, then B →→ C holds for T. The redundancy caused by this MVD can be easily seen in the
instance I on A(B(C)*)* in Figure 11.
For I, we can cover up the data {3, 4} in the first nested tuple under (C)*, and we can tell that it must be
{3, 4} by using the MVD B →→ C and the other data values in I. A(B(C)*)* is not in NNF[MNE96] with
respect to U and B →→ C, neither is it in NNF[OY87a], nor in NNF[OY89], nor in NNF[RK87]. However, with
an additional FD B → A, this redundancy cannot happen. In this example, I violates B → A since the
B-value 1 is associated with two A-values, 5 and 6. In fact, with this FD, no data instance that satisfies the
MVD and the FD in this example can have redundancy. A(B(C)*)* is in NNF[MNE96] with respect to U
and the dependencies B →→ C and B → A. However, A(B(C)*)* does not satisfy NNF[OY87a], NNF[OY89],
and NNF[RK87] even in the presence of the FD B → A.
Since NNF [OY87a] and NNF [OY89] do not take advantages of FDs, as opposed to NNF [MNE96] , their definitions
lead to small scheme trees and thus small clusters of data. In particular, removing partial dependencies,
as defined in Conditions 2 of NNF [OY87a] and NNF [OY89] , regardless of the given FDs, is the main cause of
this problem. For example, the scheme tree in Figure 3 has partial dependencies with respect to NNF [OY87a]
and NNF [OY89] . Consider the edge (Student, Interest). Ancestor(Student) !! Descendent(Interest) is not
left-reduced and thus the scheme tree violates Condition 2 of NNF [OY87a] , and with some work, we can also
show that the scheme tree violates Condition 2 of NNF [OY89] because of the same edge (Student, Interest).
Using the same reasoning, Ancestor(Prof ) !! Descendent(Hobby) is also not left-reduced with respect to
NNF [OY87a] and NNF [OY89] . To remove these partial dependencies, which do not cause any data redundancy
in the presence of the given FDs in Figure 4, the algorithms of NNF [OY87a] and NNF [OY89] decompose more
than necessary and generate small scheme trees. To be more specific, a scheme tree with P as the root and H
and S as the children of P will be generated by their algorithms. Furthermore, the algorithm of NNF [RK87]
will also generate more than one scheme tree for this example because of the way they handle FDs.
In short, NNF [OY87a] , NNF [OY89] , and NNF [RK87] do not take full advantages of FDs in constructing large
scheme trees while NNF [MNE96] does.
D.3 Dependency Preservation
In addition to characterizing data redundancy, another property of interest of nested database schemes
is dependency preservation. In this subsection, we mainly focus on NNF [MNE96] and NNF [OY89] and their
algorithms.
Some definitions are needed here. Let D be a set of MVDs and FDs over a set of attributes U. A
database scheme R = {R1, ..., Rn} over U is dependency preserving if there is a
set F of FDs such that each FD in F is embedded in an Ri, and ⋈R ∪ F is equivalent to D on U [YO92].
Furthermore, conditions are defined for D to be extended conflict-free [YO92]. One interesting property,
among many others, is that if D is an extended conflict-free set of MVDs and FDs over U , then there
is an acyclic and dependency preserving 4NF decomposition of U with respect to D (Proposition 5.1 in
[YO92]). Now, using the decomposition algorithm in [OY89], a set of scheme trees {T1, ..., Tm}, m ≥ 1,
is generated from the given set D of MVDs and FDs and the set of all the attributes U. According to
Proposition 6.3 in [OY89], if D is extended conflict-free, then Path(T1) ∪ ... ∪ Path(Tm) is an acyclic and dependency
preserving decomposition of U with respect to D.
In the following theorem, we show that Algorithm 1 also has this property.
Theorem 9: Algorithm 1 generates a nested database scheme that is dependency preserving with respect
to an extended conflict-free set of MVDs and FDs D.
Proof. Since D is extended conflict-free, according to Proposition 5.1 in [YO92], D is equivalent to 1R [
F where R is an acyclic 4NF database scheme and F is a set of FDs such that each FD in F is embedded
in a relation scheme in R. R is in 4NF implies R is in BCNF. Hence, D satisfies the input requirements
of Algorithm 1. Assume Algorithm 1 generates m scheme trees T 1 , . , Tm from m closed sets of relation
schemes S 1 , . , Sm of R. By the construction of Algorithm
Consider running the chase
on the tableau T1R for 1R. By Theorem 8, MVD(T equivalent to [R i 2S j
m. Therefore, MVD(T S 1
equivalent to 1[ m
and F , which is equivalent to 1R [ F , and in turn is equivalent to D. 2
Thus, if a given set of MVDs and FDs is extended conflict-free, the decomposition algorithm in [OY89]
and Algorithm 1 both produce a nested database scheme that is dependency preserving.
IV. Conclusions
Here, we summarize the results of this paper. First, NNF [MNE96] is superior to NNF [OY87a] , NNF [OY89] ,
and NNF [RK87] in generalizing 4NF and BCNF. Second, with respect to non conflict-free sets of MVDs,
NNF [OY87a] , NNF [OY89] , and NNF [RK87] are all superior to NNF [MNE96] in reducing redundant data values.
In addition, Condition 3 of NNF [OY87a] is redundant with respect to sets of MVDs that have the intersection
property. Third, when no FD is given and the given set of MVDs is conflict-free, NNF [MNE96] , NNF [OY87a] ,
NNF [OY89] , and NNF [RK87] are all able to reduce redundant data values and NNF [OY87a] NNF [OY89] , and
NNF [RK87] all imply NNF [MNE96] . However, NNF [MNE96] does not imply NNF [OY87a] , or NNF [OY89] , or
NNF [RK87] . Fourth, NNF [MNE96] provides greater design flexibility than the other three normal forms. Fifth,
NNF[OY87a], NNF[OY89], and NNF[RK87] do not take full advantage of FDs in creating large scheme trees
while NNF [MNE96] does; and finally sixth, the algorithms of NNF [MNE96] and NNF [OY89] both are dependency
preserving with respect to extended conflict-free sets of MVDs.
--R
On the desirability of acyclic database schemes.
An integrated approach to logical design of relational database schemes.
Multivalued dependencies and a new normal form for relational databases.
Degrees of acyclicity for hypergraphs and relational database schemes.
A simplified universal relation assumption and its properties.
Independent database schemas.
Bringing object/relational down to earth.
NF-NR: A practical normal form for nested relations.
The Theory of Relational Databases.
Transforming conceptual models to object-oriented database designs: Practicalities
Using nnf to transform conceptual data models to object-oriented database designs
A normal form for precisely characterizing redundancy in nested relations.
A new normal form for nested relations.
Reduced mvds and minimal covers.
On the normalization in nested relational databases.
Database Management Systems.
The design of :1nf relational databases into nested normal form.
Extended algebra and calculus for nested relational databases.
Database System Concepts.
Object normal forms and dependency constraints for object-oriented schemata
Oracle8 PL/SQL Programming.
Unifying functional and multivalued dependencies for relational database design.
--TR
--CTR
Emanuel S. Grant , Rajani Chennamaneni , Hassan Reza, Towards analyzing UML class diagram models to object-relational database systems transformations, Proceedings of the 24th IASTED international conference on Database and applications, p.129-134, February 13-15, 2006, Innsbruck, Austria
Hai Zhuge, Fuzzy resource space model and platform, Journal of Systems and Software, v.73 n.3, p.389-396, November-December 2004
Hai Zhuge, Resource space model, its design method and applications, Journal of Systems and Software, v.72 n.1, p.71-81, June 2004
Wai Yin Mok, Designing nesting structures of user-defined types in object-relational databases, Information and Software Technology, v.49 n.9-10, p.1017-1029, September, 2007
Sven Hartmann , Sebastian Link , Klaus-Dieter Schewe, Functional and multivalued dependencies in nested databases generated by record and list constructor, Annals of Mathematics and Artificial Intelligence, v.46 n.1-2, p.114-164, February 2006 | data redundancy;nested normal forms;nested databases;acyclic database schemes;design flexibility;algorithms;object-relational database management systems;object-relational databases;nested database design;nested database schemes;nested relations;conflict-free sets of MVDs;nested relation schemes;SQL:1999 |
628217 | A Performance Study of Robust Load Sharing Strategies for Distributed Heterogeneous Web Server Systems. | Replication of information across multiple servers is becoming a common approach to support popular Web sites. A distributed architecture with some mechanisms to assign client requests to Web servers is more scalable than any centralized or mirrored architecture. In this paper, we consider distributed systems in which the Authoritative Domain Name Server (ADNS) of the Web site takes the request dispatcher role by mapping the URL hostname into the IP address of a visible node, that is, a Web server or a Web cluster interface. This architecture can support local and geographical distribution of the Web servers. However, the ADNS controls only a very small fraction of the requests reaching the Web site because the address mapping is not requested for each client access. Indeed, to reduce Internet traffic, address resolution is cached at various name servers for a time-to-live (TTL) period. This opens an entirely new set of problems that traditional centralized schedulers of parallel/distributed systems do not have to face. The heterogeneity assumption on Web node capacity, which is much more likely in practice, increases the order of complexity of the request assignment problem and severely affects the applicability and performance of the existing load sharing algorithms. We propose new assignment strategies, namely adaptive TTL schemes, which tailor the TTL value for each address mapping instead of using a fixed value for all mapping requests. The adaptive TTL schemes are able to address both the nonuniformity of client requests and the heterogeneous capacity of Web server nodes. Extensive simulations show that the proposed algorithms are very effective in avoiding node overload, even for high levels of heterogeneity and limited ADNS control. | Introduction
With ever increasing traffic demand on popular Web sites, a parallel or distributed architecture
can provide transparent access to the users to preserve one logical interface, such
as www.site.org. System scalability and transparency require some internal mechanism
that automatically assigns client requests to the Web node that can offer the best service
[25, 17, 13, 6]. Besides performance improvement, dynamic request assignment allows the
Web server system to continue to provide information even after temporary or permanent
failures of some of the servers.
In this paper, we consider the Web server system as a collection of heterogeneous nodes,
each of them with a visible IP address. One node may consist of a single Web server or
multiple server machines behind the same network interface as in Web cluster systems. The
client request assignment decision is typically taken at the network Web switch level when the
client request reaches the Web site or at the Domain Name Server system (DNS) level during
the address lookup phase that is, when the URL hostname of the Web site is translated into
the IP address of one of the nodes of the Web server system [8]. Address mapping request
is handled by the Authoritative Domain Name Server (ADNS) of the Web site that can
therefore serve as request dispatcher. Original installations of locally distributed Web server
systems with DNS-based request assignment include NCSA HTTP-server [25], SWEB server
[4], lbnamed [34], SunSCALR [36]. On a geographical scale, the Web site is typically built
upon a set of distributed Web clusters, where each cluster provides one IP visible address
to client applications. DNS-based dispatching mechanisms for geographically distributed
systems are implemented by several commercial products such as Cisco DistributedDirector
[11], Alteon WebSystems GSLB [3], Resonate's Global Dispatcher [33], F5 Networks 3DNS
[19], HydraWeb Techs [23], Radware WSD [31], IBM Network Dispatcher [26], Foundry
Networks [21].
For the case of multiple Web servers at the same location (Web clusters), various Web
switch solutions are described in [12, 17, 22, 35, 29]. The Web switch has full control on
the incoming requests. However, this approach is best suited to a locally distributed Web
server system. Moreover, the Web switch can become the system bottleneck if the Web site
is subject to high request rates and there is one dispatching mechanism.
In this paper, we focus on DNS-based architectures that can scale from locally to geographically
distributed Web server systems. The dispatching algorithms implemented at the
ADNS level have to address new challenging issues. The main problems for load sharing
come from the highly uneven distribution of the load among the client domains [5, 16] and
from Internet mechanisms for address caching that let the ADNS control only a very small
fraction, often on the order of a few percentage, of the requests reaching the Web site. The
ADNS species the period of validity of cached addresses. This value is referred to as the
time-to-live (TTL) interval and typically xed to the same time for all address mapping
requests reaching the ADNS. Unlike the Web switch that has to manage all client requests
reaching the site, the limited control of ADNS prevents risks of bottleneck in DNS-based
distributed Web server systems. On the other hand, this feature creates a challenge to
ADNS-based global scheduling 1 algorithms and makes this subject quite dierent from existing
literature on centralized schedulers of traditional parallel/distributed systems that have
almost full control on the job requests [18, 10, 32, 24]. The general view of this problem is
to nd a mechanisms and algorithms that are able to stabilize the load in a system when
the control on arrivals is limited to a few percentage of the total load reaching the system.
Under realistic scenarios, in [13] it is shown that the application of classical dispatching
algorithms, such as round-robin and least-loaded-server, to the ADNS often results in
overloaded Web nodes well before the saturation of the overall system capacity. Other dispatching
policies that integrate some client information with feedback alarms from highly
loaded Web nodes achieve much better performance in a homogeneous Web server system.
The complexity of the ADNS assignment problem increases in the presence of Web nodes
with different capacities. Web systems with so-called heterogeneous nodes are much more
likely to be found in practice. As a result of non-uniform distribution of the client request
rates, limited control of the dispatcher and node heterogeneity, the ADNS has to take global
scheduling decisions under great uncertainties. This paper finds that a simple extension of
the algorithms for homogeneous Web server systems proposed in [13] does not perform well.
These policies show poor performance even at low levels of node heterogeneity. Other static
and dynamic global scheduling policies for heterogeneous parallel systems which are proposed
in [2, 27] cannot be used because of the peculiarities of the ADNS scheduling problem.
These qualitative observations and preliminary performance results convinced us that an
entirely new approach was necessary. In this paper, we propose and evaluate new ADNS
dispatching policies, called adaptive TTL algorithms. Unlike conventional ADNS algorithms
where a fixed TTL value is used for all address mapping requests, tailoring the TTL value
adaptively for each address request opens up a new dimension to perform load sharing.
Extensive simulation results show that these strategies are able to avoid overloading nodes
very effectively even for high levels of node heterogeneity. Adaptive TTL dispatching is a
simple mechanism that can be immediately used in an actual environment because it requires
no changes to existing Web protocols and applications, or other parts that are not under the
direct control of the Web site technical management. Because the installed base of hosts,
1 In this paper we use the definition of global scheduling given in [10], and dispatching as its synonym.
name servers, and user software is huge, we think that a realistic dispatching mechanism must
work without requiring modifications to existing protocols, address mechanisms, and widely
used network applications. Moreover, adaptive TTL algorithms have low computational
complexity, require a small amount of system information, and show robust performance
even in the presence of non-cooperative name servers and when information about the system
state is partial or inaccurate.
The outline of the paper is as follows. In Section 2, we provide a general description of
the environment. In Section 3, we focus on the new issues that a distributed Web server
system introduces on the ADNS global scheduling problem. We further examine the relevant
state information required to facilitate ADNS scheduling. In Section 4, we consider various
ADNS scheduling algorithms with constant TTL, while we propose the new class of adaptive
algorithms in Section 5. In Section 6, we describe the model and parameters of the
heterogeneous distributed Web server systems for the performance study. Moreover, we
discuss the appropriate metrics to compare the performance of the algorithms. In Section
7, we present the performance results of the various algorithms for a wide set of scenarios.
In Section 8, we analyze the implications of the performance study and summarize the
characteristics of all proposed algorithms. Section 9 contains our concluding remarks.
2 Environment
In this paper, we consider the Web server system as a collection of N heterogeneous
nodes {S_1, ..., S_N} that are numbered in non-increasing order of processing capacity. Each node
may consist of a single server or a Web cluster, it may be based on single- or multi-processor
machines, have different disk speeds and various internal architectures. However, from our
point of view, we only address heterogeneity through the notion that each node may have
a different capacity to satisfy client requests. In particular, each node S_i is characterized
by an absolute capacity C_i that is expressed as the hits per second it can satisfy, and a relative
capacity α_i which is the ratio between its capacity and the capacity of the most powerful
node in the Web server system, that is, α_i = C_i / C_1. Moreover, we measure the heterogeneity
level of the distributed architecture by the maximum difference between the relative node
capacities, that is, 1 − α_N. For example, if the absolute capacities of the nodes are
C_1 ≥ C_2 ≥ ... ≥ C_N hits/sec, the vector of relative
capacities is (α_1, ..., α_N) = (1, C_2/C_1, ..., C_N/C_1).
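As an illustration of these definitions, the following minimal Python sketch computes the relative capacities and the heterogeneity level; the capacity vector is a hypothetical example, not taken from the paper:
    # Minimal sketch: relative capacities and heterogeneity level.
    # Assumes nodes are listed in non-increasing order of absolute capacity (hits/sec).
    def relative_capacities(absolute_capacities):
        c_max = absolute_capacities[0]        # capacity of the most powerful node
        return [c / c_max for c in absolute_capacities]

    def heterogeneity_level(absolute_capacities):
        alphas = relative_capacities(absolute_capacities)
        return alphas[0] - alphas[-1]         # maximum difference of relative capacities

    # Hypothetical 7-node configuration summing to 1500 hits/sec.
    caps = [300, 250, 200, 200, 200, 175, 175]
    print(relative_capacities(caps))          # [1.0, 0.83, 0.67, 0.67, 0.67, 0.58, 0.58]
    print(heterogeneity_level(caps))          # about 0.42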
The Web site built upon this distributed Web server system is visible to users through
one logical hostname, such as www.site.org. However, IP addresses of the Web nodes (that
is, servers or clusters) are visible to client applications.
In operation, the WWW works as a client/server system where clients submit requests
for objects identified through the Uniform Resource Locator (URL) specified by the user.
Each URL consists of a hostname part and a document specication. The hostname is a
logical address that may refer to one or multiple IP addresses. In this paper, we consider
this latter instance, where each IP address is that of a node of the Web server system. The
hostname has to be resolved through an address mapping request that is managed by the
Domain Name Server System. The so called lookup phase may involve several name servers
and, in some instances, also the ADNS. Once the client has received the IP address of a
Web node, it can direct the document request to the selected node. The problem is that
only a small percentage of client requests actually needs the ADNS to handle the address
request. Indeed, on the path from clients to the ADNS, there are typically several name
servers, which can have a valid copy of the address mapping returned by the ADNS. When
this mapping is found in one of the name servers on this path and the TTL is not expired, the
address request is resolved bypassing the name resolution provided by the ADNS. Further,
Web browsers at the user side also cache some of the address mappings for a period of usually
15 minutes that is outside the ADNS control.
The clients have a (set of) local name server(s) and are connected to the network through
local gateways such as firewalls or SOCKS servers. We will refer to the sub-network behind
these local gateways as a domain.
3 DNS-based Web server system
In this section we first consider the various issues on DNS-based dispatching. We then
examine the state and configuration information that can be useful to the ADNS and discuss
ways to obtain this information.
3.1 DNS global scheduling issues
The distributed Web server system uses one URL hostname to provide a single interface
for users. For each address request reaching the system, the ADNS returns a tuple (IP
address, TTL), where the rst tuple entry is the IP address of one of the nodes in the Web
server system, and the second entry is the TTL period during which the name servers along
the path from the ADNS to the client cache the mapping. In addition to the role of IP
address resolver, the ADNS of a distributed Web server system can perform as a global
scheduler that distributes the requests based on some optimization criterion, such as load
balancing, minimization of the system response time, minimization of overloaded nodes,
client proximity.
We will rst consider the scheduling issue on address mapping and delay the discussion
on TTL value selection until Section 5 which is the new approach introduced in this paper.
Existing ADNSes typically use DNS rotation or Round-Robin (RR) [1], least-loaded-
server [34] or proximity [11] algorithms to map requests to the nodes. We observed that
these policies show fine performance under (unrealistic) hypotheses, that is, the ADNS has
almost full control on client requests and the clients are uniformly distributed among the
domains. Unfortunately, in the Web environment, the distribution of clients among the
domains is highly non-uniform [5] and the issue of the limited ADNS control cannot be
easily removed. Indeed, IP address caching at name servers for the TTL period limits the
control of the ADNS to a small fraction of the requests reaching the Web server system.
Although TTL values close to 0 would give more control to the ADNS, various reasons
prevent this solution. Besides the risks of causing bottleneck at the ADNS, very small TTL
values are typically ignored by the name servers in order to avoid overloading the network
with name resolution traffic. The ADNS assignment is a very coarse-grain distribution of
the load among the Web nodes, because proximity does not take into account heavy load
fluctuations of Web workload that are amplified by the geographical context. Besides burst
arrivals and the non-uniform distribution among Internet domains of clients connected to the
Web site, world time zones are another cause of heterogeneous source arrivals. As we are
considering highly popular Web sites, if the ADNS selects the Web node only on the basis
of the best network proximity, it is highly probable that address resolution is found in the
caches of the intermediate name servers of the Internet region, so that even fewer address
requests will reach the ADNS.
From the Web server system point of view, the combination of non-uniform load and
limited control can result in bursts of requests arriving from a domain to the same node
during the TTL period, thereby causing high load imbalance. The main challenge is to find
a realistic ADNS algorithm, with low computational complexity and fully compatible with
Web standards, that is able to address these issues and the heterogeneity of the Web nodes.
3.2 Relevant state and configuration information for DNS scheduling
One important consideration in dealing with the ADNS scheduling problem is the kind of
state and configuration information that can be used in mapping URL host names to IP
addresses. An in-depth analysis carried out in [13] on the effectiveness of the different types
of state information on ADNS scheduling for homogeneous Web server systems is summarized
below:
Scheduling policies, such as round-robin and random used in [25,
4, 28], that do not require any state information show very bad performance under
realistic scenarios [13]. (See further discussions in Section 7.1.)
Detailed server state information. Algorithms using detailed information about the state
of each Web node (for example, queue lengths, present and past utilization) perform
better than the previous ones, but are still unable to avoid overloading some Web node
while under-utilizing other nodes. The present load information does not capture the
effect, that is, the future arrivals due to past address resolutions. This makes the
server load information quickly obsolete and poorly correlated with future load condi-
tions. This excludes policies of the least-loaded-server class from further consideration
in a heterogeneous Web server system.
Information on client domain load. An effective scheduling policy has to take into account
some client domain information, because any ADNS decision on an IP address
resolution affects the selected node for the entire TTL interval during which the hostname
to IP address mapping is cached in the name servers. Therefore, the ADNS needs
to make an adequate prediction about the impact on the future load of the nodes following
each address mapping. The key goal is to obtain an estimation of the domain
hit rate, λ_i, which is the number of hits per second reaching the Web server system
from the i-th domain. Multiplying λ_i by TTL, we obtain the hidden load weight w_i = λ_i · TTL,
which is the average number of hits that each domain sends to a Web node during a
TTL interval after a new address resolution request has reached the ADNS.
Information on overloaded nodes. Information on overloaded nodes is useful so that the ADNS
can avoid assigning address requests to already over-utilized nodes. For the purpose
of excluding them from any assignment until their load returns in normal conditions,
scheduling algorithms can combine the domain hit rate information with some feedback
information from the overloaded nodes to make address mapping decision.
In addition,
Node processing capacities. As we shall see later, in a heterogeneous Web server system,
the node processing capacity needs to be taken into account by the ADNS in either making
the node assignment or fixing the TTL value.
3.3 Information gathering mechanisms
Based on the observation from the previous section, all ADNS scheduling algorithms considered
in this paper will apply the feedback alarm mechanism and evaluate the hit rate of
each client domain connected to the Web site. The main question is whether these kinds
of information are actually accessible to the ADNS of a distributed Web server system. We
recall that the first requirement for an ADNS algorithm is that it must be fully compatible
with existing Web standards and protocols. In particular, all state information needed by a
policy has to be received by the ADNS from the nodes and the ADNS itself, because they
are the only entities that the Web site management can use to collect and exchange load
information. Algorithms and mechanisms that need some active cooperation from any other
Web components, such as browsers, name servers, users, will not be pursued because they
require modifications of some out-of-control Web components. We next examine how the
ADNS can have access to the feedback alarm and the domain hit rate information.
The implementation of the feedback alarm information requires two simple mechanisms: a
monitor of the load of each Web node, and an asynchronous communication protocol between
the nodes and the ADNS. Each node periodically calculates its utilization and checks whether
it has exceeded a given θ threshold. In that case, the node sends an alarm signal to the ADNS
that excludes it from any further assignment until its load falls below the threshold. This
last event is communicated to the ADNS through a normal signal. We assume that all of the
global scheduling algorithms to be discussed next consider a node as a candidate for receiving
requests only if that node is not overloaded. Although fault-tolerance is not the focus of this
paper, it is worth noting that a very simple modication of this feedback mechanism could
also avoid routing requests to failed or unreachable nodes. Either node-initiated (through
synchronous messages) or scheduler-initiated (through a polling mechanism) strategies could
be combined with the same scheduling algorithms above to also provide fault-tolerance.
The estimation of the domain hit rate cannot be done by the ADNS alone because the
information coming from the clients to the ADNS is very limited. For each new session
requiring an address resolution, the ADNS sees only the IP address of a client's domain.
Due to the address caching mechanisms, the ADNS will see another address request coming
from the same domain only after TTL seconds, independent of the domain hit rate. Hence,
the only viable approach to estimate this information requires cooperation of the Web nodes.
They can track and collect the workload to the distributed Web server system through the
logfile maintained by each node to trace the client accesses in terms of hits. According to
the Common Logfile Format [14], the information for each hit includes the remote (domain)
hostname (or IP address), the requested URL, the date and time of the request and the
request type. Furthermore, there are also extended logs to provide referred information for
linking each request to a previous Web page request from the same client (additional details
can be found in [9]). Each node periodically sends its estimate of the domain hit rates to the
ADNS, where a collector process gets all estimates and computes the actual hit rate from
each domain by adding up its hit rate on each node.
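To make the collector step concrete, the following sketch (hypothetical function and field names) aggregates the per-node hit counters into domain hit rates λ_j, assuming each node reports the hits observed per domain over a measurement window, and then derives the hidden load weights w_j = λ_j · TTL:
    # Minimal sketch of the ADNS domain load collector (hypothetical interfaces).
    from collections import defaultdict

    def collect_domain_rates(per_node_counts, window_seconds):
        """per_node_counts: list of dicts {domain: hits in the window}, one dict per node."""
        totals = defaultdict(int)
        for counts in per_node_counts:
            for domain, hits in counts.items():
                totals[domain] += hits
        return {domain: hits / window_seconds for domain, hits in totals.items()}  # lambda_j

    def hidden_load_weights(domain_rates, ttl_seconds):
        return {domain: rate * ttl_seconds for domain, rate in domain_rates.items()}  # w_j

    # Usage: reports from two nodes over a 16-second window, TTL = 240 s.
    rates = collect_domain_rates([{"dom.a": 160, "dom.b": 16}, {"dom.a": 80}], 16)
    weights = hidden_load_weights(rates, 240)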
Finally, the node processing capacity information ({α_i}) is static configuration
information. It is an estimate of the number of hits or HTTP requests per second each node
can support.
Figure 1 summarizes the various components needed for the ADNS assuming that a node
consists of a single Web server machine. In addition to the DNS base function, these include
an ADNS scheduler, alarm monitor, domain load collector and TTL selector. The ADNS
scheduler assigns each address request to one of the nodes based on some scheduling algorithm.
The alarm monitor tracks the feedback alarm from servers to avoid assigning requests to an
overloaded node until the load level is returned to normal, while the domain load collector
collects the domain hit information from each node and estimates the hit rate and hidden
load weight of each domain. The TTL selector fixes the appropriate TTL value for the
address mapping. Also shown are the corresponding components in the Web node. Besides
the HTTP daemon server, these include the load monitor and request counter. The load
monitor tracks the node load and issues alarm and normal signal accordingly as explained
above. The request counter estimates the number of hits received from each domain in a
given period and provides the information to the domain load collector in ADNS. When the
node is a Web cluster with multiple server machines, the request counter and load monitor
processes run on the Web switch. This component would have the twofold role of intra-cluster
information collector and interface with the ADNS.
4 DNS algorithms with constant TTL
Strategies that do not work well in the homogeneous case cannot be expected to achieve
acceptable results in a heterogeneous node system. Hence, we consider only the better
performing homogeneous node algorithms that seem to be extensible to a heterogeneous
environment. Among the several alternatives proposed in [13], the following policies gave
the most promising results. We present them in the forms extended to a heterogeneous node
system to take into account both the non-uniform hit rates and different node capacities.
Figure 1: Software component diagram
Two-tier Round-Robin (RR2). This algorithm is a generalization of the Round-Robin
(RR) algorithm. It is based on two considerations. First of all, since the clients are
unevenly distributed, the domain hit rates are very different. Secondly, the risk of
overloading some of the nodes is typically due to the requests coming from a small
set of very popular domains. Therefore, RR2 uses the domain hit rate information
to partition the domains connected to the Web site into two classes: normal and hot
domains. In particular, RR2 sets a class threshold and evaluates the relative domain
hit rate, which is with respect to the total number of hits in an interval from all connected
domains. The domains characterized by a relative hit rate larger than the class
threshold belong to the hot class. By default, we fix the class threshold to 1/|D|, where
|D| is the average number of domains connected to the nodes. That is to say, each
domain with a relative hit rate larger than the class threshold belongs to the hot class.
The RR2 strategy applies a round-robin policy to each class of domains separately.
The objective is to reduce the probability that the hot domains are assigned too frequently
to the same nodes. Partitions of domains in more than two classes have been
investigated with little performance improvement [13].
RR2 (and also RR) is easily extendible to a heterogeneous Web server system through
the addition of some probabilistic routing features. The basic idea is to make the round-robin
assignment probabilistic based upon the node capacity. To this purpose, we generate
a random number ρ (0 ≤ ρ ≤ 1) and, assuming that S_{i-1} was the last chosen
node, we assign the new requests to S_i only if ρ ≤ α_i. Otherwise, S_{i+1} becomes the next
candidate and we repeat the process, that is, we generate another random number and
compare it with the relative capacity of S_{i+1}. This straightforward modification allows
RR2 and RR to schedule the requests by taking into account the various node
capacities. These probabilistic versions of the RR and RR2 algorithms are denoted
by Probabilistic-RR (PRR) and Probabilistic-RR2 (PRR2), respectively. Hereafter, we
will refer to the conventional RR as Deterministic-RR (DRR) to distinguish it from
PRR and analogously, DRR2 from PRR2.
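A minimal Python sketch of this probabilistic selection step (illustrative names; it assumes the relative capacities α_i are available and that the most powerful node has α = 1, which guarantees termination of the scan):
    import random

    def prr_select(alphas, last_index):
        """Probabilistic round-robin: scan nodes circularly starting after the last
        chosen one; accept node i with probability alpha_i."""
        n = len(alphas)
        i = (last_index + 1) % n
        while True:
            if random.random() <= alphas[i]:   # accept S_i with probability alpha_i
                return i
            i = (i + 1) % n                    # otherwise S_{i+1} becomes the candidate

    # Usage: the most powerful node (alpha = 1.0) is always accepted when reached.
    alphas = [1.0, 0.8, 0.8, 0.6]
    chosen = prr_select(alphas, last_index=0)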
Dynamically Accumulated Load (DAL). This algorithm uses the domain hit rate to
estimate the hidden load weight of each domain. Each time the ADNS makes a node
selection following an IP address resolution request, it accumulates the hidden load
weight of the requesting domain in a bin for each node to predict how many requests
will arrive to the chosen node due to this mapping. At each new IP address request,
the ADNS selects the node that has the lowest accumulated bin level.
DAL makes the node selection only based on the hidden load weight from the clients.
A generalization of this algorithm to a heterogeneous Web server system should take
into account the node capacity. The solution is to normalize the hidden load weight
accumulated at each bin by the capacity of the corresponding node. On IP address
assignment, DAL now selects the node that would result in the lowest bin level after
the assignment.
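The bin bookkeeping of this generalized DAL can be sketched as follows (a simplified illustration, assuming the hidden load weights w_j and relative capacities α_i computed earlier; nodes that raised a feedback alarm are assumed to be passed in as excluded):
    def dal_assign(bins, alphas, w_j, excluded=()):
        """bins[i] holds the hidden load accumulated on node i, normalized by alpha_i.
        Returns the node whose bin would be lowest after adding w_j / alpha_i."""
        best, best_level = None, None
        for i, level in enumerate(bins):
            if i in excluded:                  # skip overloaded nodes
                continue
            candidate = level + w_j / alphas[i]
            if best is None or candidate < best_level:
                best, best_level = i, candidate
        bins[best] = best_level
        return best

    bins = [0.0, 0.0, 0.0]
    node = dal_assign(bins, alphas=[1.0, 0.8, 0.6], w_j=120.0)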
Minimum Residual Load (MRL). This algorithm is a modification of the basic DAL.
Analogous to the previous algorithm, MRL tracks the hidden load weight of each
domain. In addition, the ADNS maintains an assignment table containing all domain
to node assignments and their times of occurrences. Let l_j be the average session
length of a client of the j-th domain. After a period of TTL + l_j, the effect of the
assignment is expected to expire, that is, no more requests will be sent from the j-th
domain due to this assignment. Hence, the entry for that assignment can be deleted
from the assignment table. At the arrival of an address resolution request at the time
t_now, the ADNS evaluates the expected number of residual requests that each node
should have, on the basis of the previous assignments, and chooses the node i with the
minimum number of residual requests, that is,
min_i Σ_{domain j} Σ_k (w_j / α_i) · [(t_j(i,k) + TTL + l_j − t_now)^+ / (TTL + l_j)]
where w_j is the hidden load weight of the j-th domain, α_i is the relative capacity of
the i-th node, and t_j(i,k) is the time of the assignment of the k-th address resolution
request coming from the j-th domain to the i-th node in the mapping table. The
notation (·)^+ denotes that only the positive terms are considered in the internal sum because
no more residual load is expected to remain from an assignment when the corresponding
term is detected to be negative. The term (t_j(i,k) + TTL + l_j) represents the time
instant that the address mapping expires, and the term (t_j(i,k) + TTL + l_j − t_now)
represents the remaining time that the mapping is still valid. By normalizing w_j by
α_i, the effect of the node heterogeneity is captured. The average session lengths l_j are
not readily available at the ADNS but they can be estimated at the Web nodes. A
session can be identified via a cookie generation mechanism or inferred through some
heuristics using the site's topology or referred information [30]. Since l_j is expected to
be rather stable over time, the frequency of exchanges between nodes and ADNS for
this information should be relatively low.
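Under the reconstruction of the formula given above, the MRL bookkeeping can be sketched as follows (illustrative code; the assignment-table layout and the linear decay of each residual term are assumptions consistent with the description, not the authors' implementation):
    def mrl_residual(assignments, alphas, w, l, ttl, t_now, node):
        """assignments: list of (node, domain, t_assign) tuples kept by the ADNS.
        Returns the expected residual requests still directed to 'node'."""
        total = 0.0
        for i, j, t_assign in assignments:
            if i != node:
                continue
            remaining = t_assign + ttl + l[j] - t_now   # time before the effect expires
            if remaining > 0:                           # only positive terms count
                total += (w[j] / alphas[i]) * remaining / (ttl + l[j])
        return total

    def mrl_select(assignments, alphas, w, l, ttl, t_now, candidates):
        return min(candidates,
                   key=lambda i: mrl_residual(assignments, alphas, w, l, ttl, t_now, i))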
In addition to the extensions of the ADNS algorithms taken from homogeneous envi-
ronments, we now consider some scheduling disciplines that are specifically tailored to a
heterogeneous environment. The basic idea is to reduce the probability of assigning requests
from hot domains to less powerful nodes. Two representative examples of this approach are:
Two-alarm algorithms. This strategy modifies the asynchronous feedback mechanism by
introducing two levels of threshold (θ_1 < θ_2), with a different relative capacity corresponding
to each level. The goal is to reduce the probability that a node with high load gets
selected to serve requests. Since the strategies PRR and PRR2 base their decision on
the relative node capacities, we force a reduction of the perceived relative capacity as
node utilization becomes high. In fact, when the utilization of a node S_i exceeds the
first threshold θ_1, the ADNS reduces its relative capacity, for example by dividing α_i
by two. When the utilization exceeds the second threshold θ_2, the ADNS fixes α_i to 0.
If all capacities were 0, we would use a random choice weighted on the node capacities
(a sketch of this capacity-scaling rule is given after this list).
Restricted-RR2. This algorithm is a simple modication of RR2. The requests coming
from the normal domains are divided among all the nodes in the same probabilistic
manner as previously described, while the requests coming from the hot domains are
assigned only to the top (that is, more powerful) nodes of the distributed Web server
system. To avoid having too restricted a subset to serve the heavy requests, we consider
as a top node any node with a relative capacity (α_i) of 0.8 or more.
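As referenced above, the two-alarm capacity-scaling rule can be illustrated with a minimal sketch; the thresholds and the halving factor follow the example values given in the text, and the function name is hypothetical:
    def perceived_capacity(alpha, utilization, theta1=0.6, theta2=0.75):
        """Reduce the relative capacity seen by PRR/PRR2 as node utilization grows."""
        if utilization >= theta2:
            return 0.0            # node excluded from probabilistic selection
        if utilization >= theta1:
            return alpha / 2.0    # first alarm level: halve the perceived capacity
        return alpha

    # Usage: a node with alpha = 0.8 and utilization 0.7 is seen as 0.4 by the scheduler.
    print(perceived_capacity(0.8, 0.7))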
5 DNS algorithms with dynamic choice of TTL
As we shall see in Section 7.1, the algorithms derived as generalizations of those scheduling
policies used in a homogeneous Web server system are inadequate to address node hetero-
geneity, unevenly distributed domain hit rates, and limited ADNS control. Their poor performance
motivated the search for new strategies that intervene on the TTL value, which is the other
parameter controlled by the ADNS. In this section, we propose two classes of algorithms that
use some policy of Section 4 for the selection of the node and dynamically adjust the TTL
value based on different criteria. The first class of algorithms mainly addresses the problem
of the limited control of the ADNS. To this purpose, it increases the ADNS control when
there are many overloaded nodes through a dynamic reduction of the TTL. The second class
of algorithms is more oriented to address node heterogeneity and unevenly distributed clients
through the use of TTL values that reduce the load skew. A summary of main characteristics
of all discussed dispatching algorithms is in Table 2, Section 8.
As we shall see, to set the TTL value for each address request, the variable TTL algorithm
requires information on the number of overloaded nodes, while adaptive TTL algorithms
need information on the processing capacity of each node and the hit rate of each connected
domain. Since these are the same type of information used by the ADNS scheduler discussed
in the previous section, no new information is required to dynamically fix the TTL value.
5.1 Variable TTL algorithms
Firstly, we consider the variable TTL (varTTL) algorithms. They tune TTL values based
on the load conditions of the overall Web server system. When the number of overloaded
nodes increases, these algorithms reduce the TTL value. Otherwise, they use the default
or higher TTL values. The rationale for these strategies comes from the principle that the
ADNS should have more control on the incoming requests when many of the nodes in the
distributed system are overloaded.
In this paper, we implement the following simple formula for determining the TTL value
at time t,
TTL(t) = TTL_base − Δ · |overload(t)|
where TTL_base is the TTL value when no node is overloaded, Δ is the TTL reduction step,
and |overload(t)| is the minimum between some upper bound value (ω) and the number of
overloaded nodes at time t. For example, if TTL_base is 300 seconds and Δ is 60 seconds, the
TTL value drops to 240 seconds after one node gets overloaded. By fixing ω to 4, the TTL
value can only drop to 60 seconds, even if there are more than 4 nodes overloaded.
For the node selection, the varTTL policies can be combined with any algorithm of
Section 4. In this paper, we consider the probabilistic versions of RR and RR2, that is,
PRR-varTTL and PRR2-varTTL.
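Using the reconstructed formula above, the variable TTL computation can be sketched as follows (an illustration, with the parameter values of the example taken as defaults):
    def variable_ttl(num_overloaded, ttl_base=300, delta=60, omega=4):
        """TTL(t) = TTL_base - delta * min(omega, number of overloaded nodes at time t)."""
        return ttl_base - delta * min(omega, num_overloaded)

    # Usage: 300 s when no node is overloaded, 240 s with one overloaded node,
    # and never below 60 s even if more than 4 nodes are overloaded.
    print(variable_ttl(0), variable_ttl(1), variable_ttl(6))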
5.2 Adaptive TTL algorithms
Instead of just reducing the TTL value to give more control to the ADNS, an alternative is
to address the unevenly distributed hit rates or heterogeneous node capacities by assigning
a different TTL value to each address request. The rationale for this approach comes from
the observation that the hidden load weight increases with the TTL value, independently
of the domain. Therefore, by properly selecting the TTL value for each address resolution
request, we can control the subsequent request load to reduce the load skews that are the
main cause of overloading, especially in a heterogeneous system. More specifically, we can
make the subsequent requests from each domain consume similar percentages of node
capacity. This can address both node heterogeneity and non-uniform hit rates.
First consider node heterogeneity. We assign a higher TTL value when the ADNS chooses
a more powerful node, and a lower TTL value when the requests are routed to a less capable
node. This is due to the fact that for the same fraction of node capacity, the more powerful
node can handle a larger number of requests, or take requests for a longer TTL interval.
An analogous approach can be adopted to handle the uneven hit rate distribution. The
address requests coming from hot domains will receive a lower TTL value than the requests
originated by normal domains. As the hot domains have higher hit rates, a shorter TTL
interval will even out the total number of subsequent requests generated.
The new class of scheduling disciplines that use this approach is called adaptive TTL. It
consists of a two-step decision process. In the first step, the ADNS selects the Web node. In
the second step, it chooses the appropriate value for the TTL interval. These strategies can
be combined with any scheduling algorithm described in Section 4. Due to space limitations,
we only consider the basic RR algorithm and its RR2 variant. Furthermore, we combine the
adaptive TTL policies with the deterministic and probabilistic versions of these algorithms.
Both of them handle non-uniform requests by using TTL values inversely proportional to
the domain hit rate, while addressing system heterogeneity either during the node selection
(probabilistic policies) or through the use of TTL values proportional to the node capacities
(deterministic policies).
5.2.1 Probabilistic algorithms
The probabilistic policies use PRR or PRR2 algorithms to select the node. After that, the
TTL value is assigned based on the hit rate of the domain that has originated the address
request. In its most generic form, we denote by TTL/i the policy that partitions the domains
into i classes based on the relative domain hit rate and assigns a different TTL value to
address requests originating from a different domain class. TTL/i is a meta-algorithm that
includes various strategies. For i = 1 we obtain a degenerate policy (TTL/1) that uses the
same TTL for any domain, hence not a truly adaptive TTL algorithm. For i = 2 we obtain
the policy (TTL/2) that partitions the domains into normal and hot domains, and chooses a
high TTL value for requests coming from normal domains, and a low TTL value for requests
coming from hot domains. Analogously, for i = 3 we have a strategy that uses a three-tier
partition of the domains, and so on, until i = K, which denotes the algorithm (TTL/K) that
uses a different TTL value for each connected domain. (In actual implementation, to reduce
the amount of bookkeeping, domains with lower hit rates may be lumped into one class.)
For TTL/K policies, let TTL_j(t) denote the TTL value chosen for the requests coming from
the j-th domain at time t,
TTL_j(t) = p · λ_max(t) / λ_j(t)
where p is the parameter which scales the average TTL (and the minimum TTL) value
and hence the overall rate of the address mapping requests, and λ_j(t) and λ_max(t) are the hit rates
(available as an estimation at time t) of the j-th domain and the most popular domain,
respectively.
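A minimal sketch of the TTL/K selection under the reconstructed expression above (the value of p is illustrative):
    def adaptive_ttl_k(domain, rates, p=60.0):
        """TTL_j = p * lambda_max / lambda_j: hot domains get short TTLs,
        and the most popular domain gets the minimum TTL, equal to p."""
        lam_max = max(rates.values())
        return p * lam_max / rates[domain]

    rates = {"dom.hot": 50.0, "dom.mid": 10.0, "dom.cold": 2.5}
    print(adaptive_ttl_k("dom.hot", rates), adaptive_ttl_k("dom.cold", rates))  # 60.0 1200.0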
5.2.2 Deterministic algorithms
Under the deterministic algorithms, the node selection is done by the ADNS through the
deterministic RR or RR2 policy. The approach for handling non-uniform hit rates is via
adjusting the TTL value similar to that described for the probabilistic disciplines. However, the
TTL value is now chosen by considering the node capacity as well. For the generic TTL/S i
policy, we partition the client domains into i classes based on the domain hit rates. The TTL
for each class and node is set inversely proportional to the class hit rate while proportional to
the node capacity. The deterministic TTL/S 1 algorithm is a degenerate case that considers
node heterogeneity only and ignores the skew on domain hit rates.
The TTL/S 2 policy uses two TTL values for each node depending on the domain class
of the requests, that is, normal or hot domains.
The TTL/S K algorithm selects a TTL value for each node and domain combination.
Specifically, let TTL_{i,j}(t) be the TTL chosen for the requests from the j-th domain to the
i-th node at time t,
TTL_{i,j}(t) = d · α_i · λ_max(t) / λ_j(t)
where d is the parameter which scales the average TTL (and the minimum TTL) value.
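Analogously, the deterministic TTL/S K choice can be sketched as follows (illustrative code combining the node's relative capacity with the domain hit rates as in the reconstructed expression above):
    def adaptive_ttl_sk(node_alpha, domain, rates, d=60.0):
        """TTL_{i,j} = d * alpha_i * lambda_max / lambda_j: proportional to node capacity,
        inversely proportional to the domain hit rate."""
        lam_max = max(rates.values())
        return d * node_alpha * lam_max / rates[domain]

    # Usage: the same hot domain gets a longer TTL on a full-capacity node (alpha = 1.0)
    # than on a half-capacity node (alpha = 0.5).
    rates = {"dom.hot": 50.0, "dom.cold": 2.5}
    print(adaptive_ttl_sk(1.0, "dom.hot", rates), adaptive_ttl_sk(0.5, "dom.hot", rates))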
6 Performance model
In this section we provide details on the simulation model and describe the various parameters
of the model. We then discuss the performance metrics to compare the different ADNS
schemes.
6.1 Model assumptions and parameters
We first consider the workload. We assume that clients are partitioned among the domains
based on a Zipf's distribution, that is, a distribution where the probability of selecting the
i-th domain is proportional to 1/i^(1−x) [38]. This choice is motivated by several studies
demonstrating that if one ranks the popularity of client domains by the frequency of their
accesses to the Web site, the distribution of the number of clients in each domain is a function
with a short head (corresponding to big providers, organizations and companies, possibly
behind rewalls), and a very long tail. For example, a workload analysis on academic and
commercial Web sites shows that in average 75% of the client requests come from only 10%
of the domains [5]. In the experiments, the clients are partitioned among the domains based
on a pure Zipf's distribution that is, using in the default case. This represents the
most uneven client distribution. Additional sensitivity analysis on the skew parameter (x)
and other distributions is included in Section 7.4.
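The client-to-domain partitioning can be reproduced with a short sketch, assuming the probability of the i-th domain is proportional to 1/i^(1−x), with x = 0 for the pure Zipf default:
    import random

    def assign_clients_to_domains(num_clients, num_domains, x=0.0):
        """Sample a domain index for each client with probability proportional to 1/i^(1-x)."""
        weights = [1.0 / (i ** (1.0 - x)) for i in range(1, num_domains + 1)]
        domains = list(range(1, num_domains + 1))
        return random.choices(domains, weights=weights, k=num_clients)

    # Usage: 2000 clients over 20 domains; x = 0 is the most skewed (pure Zipf) case,
    # while x = 1 corresponds to a uniform distribution.
    assignment = assign_clients_to_domains(2000, 20, x=0.0)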
We did not model the details of Internet traffic [15] because the focus of this paper is on
Web server system. However, we consider major components that impact the performance
of the system. This includes an accurate representation of the number and distribution of
the intermediate name servers as in [7], because they affect operations and performance of
the ADNS scheduling algorithms through their address caching mechanisms.
Moreover, we consider all the details concerning a client session, that is, the entire period
of access to the Web site from a single user. In the first step, the client obtains (through
the ADNS or the cache of a name server or gateway) an address mapping to one of the
Web nodes through the address resolution process. As the Web server system consists of
heterogeneous nodes with identical content, the requests from the clients can be assigned to
any one of the nodes. Once the node selection has been completed, the client is modeled to
submit multiple Web page requests that are separated with a given mean think time. The
number of page requests per session and the time between two page requests from the same
client are assumed to be exponentially distributed as in [5].
Each page request consists of a burst of small requests sent to the node. These bursts
represent the objects that are contained within a Web page. These are referred to as hits.
Under the HTTP/1.0 protocol, each hit request establishes a new connection between the
client and the Web node. However, address caching at the browser level guarantees that
a client session is served by the same Web node independently of its duration. Moreover,
the new version HTTP/1.1 provides persistent connections during the same session [20]. As
a consequence, the difference between HTTP protocols does not affect the results of this
paper.
The number of hits per page request is obtained from a uniform distribution in the
discrete interval [5-15]. Previous measures reported roughly seven different hits per page;
however, more recent analyses indicate that the mean number of embedded hits is increasing
[16]. The hit service time and the inter-arrival time of hit requests to the node are assumed
to be exponentially distributed. (We will also consider the case of hits with very long mean
service time like the CGI type in Section 7.4.) Other parameters used in the experiments
are reported in Table 1 with their default values between brackets. When not otherwise
specied, all performance results refer to the default values. A thorough sensitivity analysis,
not shown because of space limits, reveals that main conclusions of the experiments are not
aected by the choice of workload parameters such as the number of hits per page request,
the mean service time and inter-arrival time of hits.
Category     Parameter                        Setting (default values)
Web system   Number of nodes                  7
             System capacity                  1500 hits/sec
             Average system load              1000 hits/sec
             Average utilization              0.6667
             Node capacity configuration      homogeneous; heterogeneity levels from 20% to 65%
Domain       Connected domains                10-100 (20)
Client       Number of clients                1000-3000
             Distribution among domains       Zipf (default), Geometric
Request      Web page requests per session    exponential (mean 20)
             Hits per Web page request        uniform in [5-15]
             Inter-arrival of page requests   exponential (mean 15)
             Inter-arrival of hits            exponential (mean 0.25)
             Hit service time                 exponential (mean 1/C_i)
Table 1: Parameters of the system (all time values are in seconds).
In our experiments, we considered five levels of node heterogeneity. Table 1 reports
details about the relative capacities and the heterogeneity level of each Web server system.
By carefully choosing the workload and system parameters, the average utilization of the
system is kept to 2/3 of the whole capacity. This value is obtained as a ratio between the
offered load, that is, the total number of hits per second arriving at the Web site, and the
system capacity, which is the sum of the capacities of the nodes expressed in hits per second.
Although we considered different levels of node heterogeneity, we keep the system capacity
constant to allow for a fair comparison among the performance of the proposed algorithms.
Furthermore, to implement the feedback alarm, each node periodically calculates its
utilization (the period is 16 seconds) and checks whether it has exceeded a given θ = 0.75
threshold as in [13]. (Additional simulation results, not shown, indicate that our results are
not sensitive to these parameters.) For two-alarm algorithms, the two thresholds are fixed at
0.6 and 0.75, respectively.
For the constant TTL schemes, a default TTL value of 240 seconds is used as in [17]. For
the variable TTL algorithm, TTL_base is fixed to 300 seconds with Δ = 60 seconds. We note that the
base TTL value (TTL_base) is higher than the TTL value in the constant TTL case. In fact,
if one of the nodes becomes overloaded, the TTL value drops to 240 seconds, which becomes
the same as the constant TTL case. For the adaptive TTL algorithms, the average TTL
value is fixed to 400 seconds so as to keep the minimum TTL value (denoted as TTL_min)
sufficiently high, while the maximum TTL value can go up to 1200 seconds. This average TTL
value is considerably higher than the 240 seconds in the constant TTL case. Nonetheless, as
we shall see later, the adaptive TTL algorithms still perform far better than the constant
TTL algorithms. Sensitivity analysis to TTL values is provided in Section 7.4.
The simulators were implemented using the CSIM package [37]. Each simulation run is
made up of five hours of the Web site activities. Confidence intervals were estimated, and
the 95% confidence interval was observed to be within 4% of the mean.
6.2 Performance metrics
We next examine the metrics of interest for evaluating the performance of an ADNS algorithm
in a heterogeneous Web server system. The main goal is to avoid any of the Web nodes
becoming overloaded. That is to say, our objective is to minimize the highest load among
all nodes at any instant. Commonly adopted metrics such as the standard deviation of node
utilization are not useful for this purpose because minimizing the load differences among the
Web nodes is only a secondary goal.
These considerations lead us to evaluate the performance of the various policies focusing
on the system maximum utilization at a given instant, that is, the highest node utilization
observed at that instant among all nodes in the system. For example, assume three nodes
in a Web server system. If their utilizations are 0.6, 0.75, and 0.63, respectively, at time t 1 ,
and 0.93, 0.66 and 0.42, respectively, at time t 2 , the system maximum utilization at t 1 is
0.75 and that at t 2 is 0.93. With a system maximum utilization of 0.93, the Web site has
serious load problems at t 2 .
Specifically, the major performance criterion is the cumulative frequency of the system
maximum utilization, that is, the probability (or fraction of time) that the system maximum
utilization is below a certain value. By focusing on the highest utilization among all Web
nodes, we can deduce whether the Web server system is overloaded or not. Moreover, its
cumulative frequency can provide an indication on the relative frequency of overloading. For
example, if the probability of all nodes less than 0.80 utilized is 0.75, it implies that the
probability of at least one node exceeding 0.80 utilized is 0.25.
In practice, the performance of the various scheduling policies is evaluated by tracking
at periodic intervals the system maximum utilizations observed during the simulation runs.
The node with the maximum utilization changes over time. However, if the system maximum
utilization at an instant is low, it means that no node is overloaded at that time. By tracking
the period of time the system maximum utilization is above or below a certain threshold,
we can get an indication of how well the Web server system is working. We recall that in all
experiments the Web server system is subject to an offered load equal to 2/3 of the overall
system capacity. We note that typically all the nodes, even if with different proportions,
contribute to this maximum during an entire simulation run. Since the average utilization
is fixed at 0.6667, the distribution of the system maximum utilization of a perfect policy
(always maintaining a utilization of 0.6667 at each node) should be a step function which
goes from 0 to 1 at a utilization of 0.6667.
When we evaluate the sensitivity of the algorithms as a function of system parameters,
such as node heterogeneity, we find it useful to adopt a different metric that is related to the
cumulative frequency of the system maximum utilization. For this set of results, we consider
the 96th percentile of the system maximum utilization, that is, Prob(SystemMaxUtilization <
0.96). In other words, the probability that no node of the Web server system is overloaded,
namely Prob(Not Overloaded System), becomes the performance metric of interest.
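Both metrics can be computed directly from the utilization samples collected during a run, as in the following sketch (hypothetical sampling interface; samples[t][i] is the utilization of node i at the t-th observation):
    def system_max_utilization(samples):
        """One value per observation instant: the highest node utilization at that instant."""
        return [max(node_utils) for node_utils in samples]

    def prob_not_overloaded(samples, threshold=0.96):
        """Fraction of observations in which no node exceeds the overload threshold."""
        maxima = system_max_utilization(samples)
        return sum(1 for m in maxima if m < threshold) / len(maxima)

    # Usage with the three-node example from the text.
    samples = [[0.60, 0.75, 0.63], [0.93, 0.66, 0.42]]
    print(system_max_utilization(samples))   # [0.75, 0.93]
    print(prob_not_overloaded(samples))      # 0.5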
7 Performance results
For the performance evaluation of the proposed dispatching algorithms, we carried out a
large number of experiments. Only a subset is presented here due to space limitation. The
first set of experiments (in Section 7.1) shows the problem with constant TTL algorithms. It
illustrates the point that just considering the scheduling component is not sufficient to achieve
good performance. We then evaluate algorithms that also explore the TTL component. The
remaining sections focus on measuring how effectively the adaptive TTL algorithms applied
to an ADNS scheduler, which controls only a small percentage of the requests, can avoid
overloading nodes in a heterogeneous distributed Web server system.
7.1 Constant TTL schemes
In this section we evaluate the performance of the constant TTL algorithms based on the
parameters shown in Table 1. We obtained simulation results for four heterogeneity levels
of the Web server system from 20% to 65%. In Figure 2, we present the performance for
the lowest heterogeneity level in Table 1. In this figure, we also report as "ideal" policy
the PRR algorithm under uniform distribution of the client request rates, and the random
algorithm that has the worst performance. The y-axis is the cumulative probability (or
relative frequency) of the system maximum utilization reported on the x-axis. The higher
the probability the less likely that some of the nodes will be overloaded, and hence better
load sharing is achieved. For example, under the PRR2 policy, the probability of having
maximum utilization less than 0.95 is about 0.5, while under DRR, the probability is only
0.2. This figure confirms that the various probabilistic versions of the round-robin policy
perform better than the deterministic versions, and this improvement is even more pronounced
if we look at the RR2 algorithm. The results of the MRL policy are close to PRR2 and much
better than the DRR algorithm which is often proposed for DNS-based distributed systems.
Indeed, for this latter policy, the probability that no node is overloaded is below 0.2. That
is to say, for more than 80% of the observed time, there is at least one overloaded node.
However, even considering this slightly heterogeneous system, no strategy achieves acceptable
performance. In the best instance, the Web server system has at least one node
overloaded (Prob(SystemMaxUtilization > 0.96)) for about 30% of the time, and the
shapes of all cumulative frequencies are very far from the "ideal" policy's behavior. This
motivated the search for alternative policies such as the Restricted-RR2 and the Round-Robin
with two alarms (PRR2-Alarm2) algorithms.
Figure 3 analyzes the sensitivity of the proposed algorithms with constant TTL to the
system heterogeneity. Now the y-axis is the probability that no Web node is overloaded,
while the x-axis is the heterogeneity level of the Web server system. This figure shows that
both DAL and MRL-based policies are unable to control node load when the distributed
Web server system is heterogeneous. The performance of the other variants of constant TTL
algorithms (that is, PRR-Alarm2, PRR2-Alarm2 and Restricted-RR2) is similar to that of
the basic PRR2. Although the PRR2-Alarm2 algorithm often performs better than other
policies, no strategy clearly outperforms all the other policies over all system heterogeneity
levels. Moreover, the di-culty in determining the best strategy is also conrmed by other
(not reported) experiments in which we vary the number of domains, clients and average
load. None of these policies can actually be considered adequate because the probability of
having at least one overloaded node is still high, that is, always more than 0.30.
Figure 2: Performance of constant TTL algorithms (Heterogeneity level of 20%)
Figure 3: Sensitivity to system heterogeneity of constant TTL algorithms
7.2 Comparison of constant and dynamic TTL schemes
The next set of results evaluates how system heterogeneity affects the performance of the
adaptive TTL schemes. Figure 4 compares various deterministic TTL/S i algorithms for a
low heterogeneity level of 20%. Each set consists of both the deterministic RR2 and RR
scheduling schemes for the TTL/S i variants considered. Also shown in this figure is the DRR scheme with
a constant TTL. First of all, for each set of TTL strategies, the RR2 scheduling scheme is
always slightly better than the RR scheme. All adaptive TTL schemes that address both
node and client heterogeneity perform significantly better than constant TTL policies, while
policies taking into account only the node heterogeneity (as done by TTL/S 1 schemes) do
not improve performance much. Moreover, the results of the strategies that use a different
TTL for each node and domain, namely DRR-TTL/S K and DRR2-TTL/S K, are very close
to the envelope curve of the "ideal" PRR(uniform) policy.
Similar results are achieved by the probabilistic schemes that combine adaptive TTL to
handle non-uniform domain hit rates and probabilistic routing features to address system
heterogeneity. Figure 5 shows the cumulative probability of the system maximum utilization
for a heterogeneity level of 20%. The relative order among the strategies remains analogous
to the previous order. Specifically, the RR2 scheduling policies are slightly better than RR
strategies and TTL/K strategies outperform TTL/2 strategies. Also we consider here the
variable TTL (varTTL) schemes. The performance of varTTL schemes is close to that of the
TTL/2 strategies. Furthermore, all probabilistic adaptive TTL approaches are consistently
better than the PRR scheme with a constant TTL, even for a heterogeneity level as low as 20%.
Figure 4: Performance of deterministic algorithms (Heterogeneity level of 20%)
Figure 5: Performance of probabilistic algorithms (Heterogeneity level of 20%)
Since the RR2-based algorithms perform better than their RR-based counterparts, in the
remainder of this section we mainly focus on the former class of policies. Figures 6 and 7 refer
to a distributed Web server system with a 35% and 65% heterogeneity level, respectively.
Figure 6 shows that when we adopt a TTL value tailored to each domain hit rate, the deterministic
strategy TTL/S K prevails over the probabilistic TTL/K. On the other hand, when
we consider only two classes of domains, the probabilistic approach TTL/2 is slightly better
than the deterministic TTL/S 2. Analogous results are observed for a system heterogeneity
equal to 50% and for other system parameters.
The probabilistic approaches tend to perform better than deterministic strategies when
the system heterogeneity is very high, that is, more than 60%. Figure 7 shows that DRR2-
TTL/S K performs the best if we look at the 98th percentile, while the shape of the curve
is in favor of PRR2-TTL/K if we consider lower percentiles. Moreover, the DRR2-TTL/S 2
and PRR2-TTL/2 algorithms, which performed more or less the same in the previous cases,
differentiate themselves here in favor of the probabilistic algorithms. PRR2-varTTL performs
close to the PRR2-TTL/2 strategy, while DRR2-TTL/S 2 and DRR2-TTL/S 1 seem rather
inadequate to address high heterogeneity levels.
We next consider the average number of messages to ADNS under the constant TTL,
variable TTL and adaptive TTL schemes. Specifically, in Figure 8, PRR2, PRR2-varTTL
and PRR2-TTL/K are chosen to represent the constant TTL, variable TTL and adaptive
TTL schemes, respectively. (The difference among the schemes in each class, such as PRR2-
TTL/K and DRR2-TTL/S K, is small.) Figure 8 shows both the number of address requests
and alarm requests with a 35% heterogeneity level. The variable TTL schemes have a higher
request load to the ADNS, while the adaptive TTL and constant TTL schemes are comparable.
However, it is important that no policy risks stressing the ADNS.
Figure 6: Performance of RR2-based algorithms (Heterogeneity level of 35%)
Figure 7: Performance of RR2-based algorithms (Heterogeneity level of 65%)
Figure 8: Number of address resolution requests to ADNS
7.3 Sensitivity to system heterogeneity
In this set of experiments we evaluate the sensitivity of the proposed strategies to the degree
of system heterogeneity from 20% to 65%. We first focus on deterministic and probabilistic
policies (Figure 9 and Figure 10, respectively), and then compare the two classes of policies
under RR2 in Figure 11. Now, the y-axis is the probability that no node of the Web
server system is overloaded, while the x-axis denotes the heterogeneity level.
Figures 9-11 show that most adaptive TTL algorithms are relatively stable, that is, their
performance does not vary widely when the heterogeneity level increases to 50%. After this
level, a more noticeable performance degradation can be observed for all policies. However,
for any heterogeneity level, a large gap exists between the schemes that use a different TTL
value for each connected domain and the other policies. Moreover, while achieving the best
performance, the TTL/S K and TTL/K algorithms display the best stability, too. TTL/2
algorithms are still acceptable when combined with RR2-based scheduling schemes, while
they tend to degrade more for higher heterogeneity levels when they are combined with the
RR-based scheduling schemes. The varTTL algorithm, which uses a variable TTL depending
upon the number of overloaded nodes, is very unstable as shown in Figure 11, while the DRR-
TTL/S 1 scheme (as shown in Figure 9) performs much worse than any other adaptive TTL
policy and is more similar to a constant TTL strategy (comparing Figure 9 with Figure 3).
Hereafter, we do not consider these two types of strategies, because of their instability and
poor performance, respectively. Figure 11 shows that when we assign a different TTL to
each connected domain, DRR2-TTL/S K performs the best, while with only two classes
of domains, PRR2-TTL/2 performs better than DRR2-TTL/S 2 for higher heterogeneity
levels. From all the shown results, the following are observed.
- Adaptive TTL schemes, especially DRR2-TTL/S K and PRR2-TTL/K, are very effective
in avoiding overloading the nodes even when the system is highly heterogeneous
and domain hit rates are unevenly distributed, as with the pure Zipf function.
- Constant TTL strategies cannot handle the non-uniformity of the client distribution or
node heterogeneity. Various enhancements, such as those provided by the restricted-RR
and two-alarm-RR schemes, are not really effective.
- Differentiating requests coming from the most popular domains from those coming from
normal domains improves the performance, regardless of whether the TTL is dynamically
chosen or fixed. Indeed, RR2-based strategies are always slightly better than their RR-based counterparts.
- Deterministic strategies typically perform better than probabilistic schemes. However,
the difference is not large and tends to diminish for high heterogeneity levels.
Prob(Not
Overloaded
Level %
Figure
9: Sensitivity to system heterogeneity of
Prob(Not
Overloaded
Level %
Figure
10: Sensitivity to system heterogeneity of
Prob(Not
Overloaded
Level %
Figure
11: Sensitivity to system heterogeneity of RR2-based algorithms
7.4 Sensitivity to TTL values and workload parameters
We now consider sensitivity to the average TTL values. In Figure 12, both the PRR2-TTL/K
and PRR2 schemes are shown with a 20% heterogeneity level for different mean TTL values from 300
to 500 seconds. (The DRR2-TTL/S K schemes, which show behavior similar to that of the PRR2-TTL/K
schemes, are not shown for readability of the figure.) The adaptive TTL schemes outperform
the constant TTL scheme (PRR2) by a wide margin regardless of the TTL values. In
Figure 13, the DRR2-TTL/S K schemes and the corresponding constant TTL schemes are shown with a 50% heterogeneity level for
various mean TTL values. (The PRR2-TTL/K schemes are not shown for readability of the
figure.) The superiority of the adaptive TTL schemes is again observed.
Figure
14 shows the sensitivity of the overload probability of the various TTL policies
to the mean TTL value, when the heterogeneity level is at 20%. The various dynamic
Figure 12: Sensitivity to TTL of probabilistic algorithm
Figure 13: Sensitivity to TTL of deterministic algorithm
Figure 14: Sensitivity to TTL of dynamic TTL schemes
schemes perform far better than the constant TTL schemes, PRR and PRR2, and
show much less sensitivity to the TTL values. Among the dynamic TTL schemes, DRR2-
TTL/S K provides the best performance. Also all the RR2 policies perform better than the
corresponding RR policies.
We next study the sensitivity to the client request distributions. In addition to the pure
Zipf distribution (with skew parameter x = 1), a geometric distribution (with p = 0.3),
a Zipf distribution with x = 0.5, and a uniform distribution (corresponding to x = 0) are
considered. Figure 15 shows the performance of PRR2-TTL/K under these different client
distributions with a heterogeneity level of 35%. The performance improves as the skew in
the client distribution decreases. The geometric distribution has performance close to the
pure Zipf distribution. We note that the mean TTL value is kept the same at 400 seconds
for all cases. Because the client distributions have different skew, the minimum TTL values
Figure 15: Sensitivity to client distribution (same mean TTL = 400 seconds)
Figure 16: Sensitivity to client distribution (same minimum TTL = 60 seconds)
under PRR2-TTL/K will be different for the different client distributions (see Equation 3).
If we hold the minimum TTL value at 60 seconds for all cases, as in Figure 16, the
performance gap becomes much larger as the skew in the client distribution increases. The
spread of the TTL values among the client domains (and hence also the mean TTL value)
increases with the skew, as indicated in Figure 16.
Next, the sensitivity to the hit service time is examined. We consider the case where
some of the hit requests, such as dynamic CGI requests, have particularly long service times. We
introduce a new type of Web page request which consists of one CGI-type hit and five
other ordinary hits as considered before, where a CGI-type hit is assumed to have an average
service time that is about 10 times that of the ordinary hits. Figure 17 shows the performance of
PRR2-TTL/K under different percentages of this new type of Web page request containing
a CGI-type hit. The dynamic TTL scheme handles workloads with long hit service times very
well. As the percentage of these long Web page requests increases, the performance actually
improves. This is due to the fact that, for a given amount of total load on the system, if
the per-request load increases, the number of subsequent requests arriving during the TTL
period decreases, so that the ADNS control improves.
7.5 Robustness of adaptive TTL schemes
The previous results point out a clear preference for adaptive TTL schemes. We now examine
their robustness by considering two specific aspects. One is the impact of name servers and
gateways not following the TTL value recommended by the ADNS. The other is how sensitive
the performance is to the accuracy of the estimated domain hit rate. The latter is less of an
Figure 17: Sensitivity to long requests of PRR2-TTL/K algorithms
issue if the load from each domain remains relatively stable or changes slowly. However, in
a more dynamic environment where hit rates from the domains may change continuously, it
can be difficult to obtain an accurate estimate.
7.5.1 Effects of non-cooperative name servers
Each name server caches the address mapping for a TTL period. In order to avoid network
saturation due to address resolution traffic, very small TTL values are typically ignored by
name servers. Since there is no common TTL lower threshold adopted by all
name servers, in our study we consider the worst-case scenario, where all name servers
and gateways are considered non-cooperative if the proposed TTL is lower than a given
minimum, and perform a sensitivity analysis against this threshold.
Figure 18 shows the sensitivity of the adaptive TTL policies to the minimum TTL value
accepted by the name servers, when the heterogeneity level is at 35%. The performance
of DRR2-TTL/S K and PRR2-TTL/K gradually deteriorates as the minimum TTL value
allowed by the name servers increases. However, PRR2-TTL/2 is almost insensitive to the
minimum accepted TTL. The advantage of DRR2-TTL/S K or PRR2-TTL/K diminishes
as the minimum TTL value accepted by the name servers increases because the TTL/S K
schemes may sometimes need to select quite a low TTL value when a client request coming
from a hot domain is assigned to a node with limited capacity. On the other hand, a
probabilistic TTL/2 strategy is almost unaffected by the problem of non-cooperative name
servers because it uses a rough partitioning (that is, two classes) of the domains and is able
to always assign TTL values higher than 180 seconds in all experiments.
Figure 18: Sensitivity to minimum accepted TTL by name servers
7.5.2 Effects of the estimation error
Next we examine how the maximum error in estimating the hit rate of each domain may
affect the system performance. Figures 19 and 20 compare various adaptive TTL schemes
as a function of the estimation error for heterogeneity levels of 20% and 50%, respectively.
In the experiment, we introduce a perturbation to the hit rate of each domain, while the
ADNS estimates of the domain hit rates remain the same as before. Hence the percentage
error in the load estimate is the same as the amount of perturbation of the actual load. For
an error of a given percentage, the hit rate of the busiest domain is increased by that percentage and the hit
rates of the other domains are proportionally decreased to maintain the same total load on
the system. This effectively increases the skew of the hit rate distribution and hence represents
a worst case.
Figure 19: Sensitivity to estimation error (heterogeneity level of 20%)
Figure 20: Sensitivity to estimation error (heterogeneity level of 50%)
When the estimation error or load perturbation increases, the system performance decreases
in all eight algorithms. However, all the TTL/S K and TTL/K schemes clustered on
the top show much less sensitivity than the TTL/S 2 and TTL/2 schemes on the bottom.
In particular, when the node heterogeneity is high (50%) and the error is large (30% or more),
the performance of TTL/S 2 strategies can degrade substantially. This is in contrast to the
TTL/S K and TTL/K schemes, which are only slightly affected by the error in estimating the
domain hit rate. When the heterogeneity level is less than 50%, their performance degrades
at most a few percentage points as compared to the case with no estimation error. This
shows the robustness of the TTL/S K and TTL/K algorithms.
Although the TTL/S 2 and TTL/2 algorithms are very sensitive to the estimation error
(even when this is limited to 10%), it is important to note that the shown results refer to a
positive perturbation on the client domain with the heaviest hit rate. This is actually a worst
(unrealistic) case because we indirectly increase the skews of the client request distributions
to more than a pure Zipf's distribution. Other results not reported, which consider negative
perturbation on the heaviest domain hit rate, show analogous performance for the TTL/K
policies and much better performance for the TTL/S 2 and TTL/2 strategies. This was
expected because a reduction of the hit rate of the busiest domain makes client requests
more evenly distributed than a pure Zipf's distribution.
8 Summary of the performance study
Table 2 outlines the specific methods that each ADNS algorithm uses to address heterogeneous
nodes and uneven distribution of client requests.
In summary,
- To balance the load across multiple Web nodes, the ADNS has two control knobs: the
scheduling policy (that is, the node selection) and the TTL value for the period of
validity of the selection.
- Exploring the scheduling component alone (constant TTL algorithms) is inadequate
to address both node heterogeneity and uneven distributions of clients among
domains.
- Variable TTL policies, which dynamically reduce the TTL when the number of overloaded
nodes increases so that more control can be given to the ADNS, perform better than
constant TTL algorithms. However, this approach is not sufficient to cope with high
heterogeneity levels.
Category                     | ADNS Algorithm | Non-uniform Client Distribution | Heterogeneous Nodes
Constant TTL                 | RR2            | two-tier domain partition       | --
                             | PRR            | --                              | probabilistic routing
                             | PRR2           | two-tier domain partition       | probabilistic routing
                             | DAL            | accumulated hidden load weight  | normalized bin
                             | MRL            | residual hidden load weight     | normalized (residual) bin
                             | PRR-Alarm2     | --                              | two alarms, probabilistic routing
                             | PRR2-Alarm2    | two-tier domain partition       | two alarms, probabilistic routing
                             |                | two-tier domain partition       | limited probabilistic routing
Variable TTL                 |                | --                              | probabilistic routing
                             | PRR2-varTTL    | two-tier domain partition       | probabilistic routing
Adaptive TTL (Deterministic) | DRR-TTL/S i    | TTL/(i-classes hit rate)        | TTL/(node capacities)
                             | DRR2-TTL/S 1   | two-tier domain partition       | TTL/(node capacities)
                             | DRR2-TTL/S i   | two-tier domain partition       | TTL/(node capacities)
Adaptive TTL (Probabilistic) | PRR-TTL/1      | (same policy as PRR)            |
                             | PRR-TTL/i      | TTL/(i-classes hit rate)        | probabilistic routing
                             | PRR2-TTL/1     | (same policy as PRR2)           |
                             | PRR2-TTL/i     | two-tier domain partition       | probabilistic routing
Table 2: Summary of ADNS scheduling algorithms.
- Adaptive TTL schemes can be easily integrated with even simple scheduling policies
such as RR or RR2. This approach shows good performance for various node heterogeneity
levels and system parameters, even in the presence of non-cooperative Internet
name servers.
- When there is full control over the choice of the TTL values, that is, all (or most) name
servers are cooperative, DRR2-TTL/S K is the strategy of choice.
- When there is limited control over the chosen TTL values, both DRR2-TTL/S K and
PRR2-TTL/K show reasonable resilience to the effect of non-cooperative name servers.
- Both DRR2-TTL/S K and PRR2-TTL/K perform well, even if the domain hit rate
cannot be accurately estimated because of high variability of the load sources.
- The heterogeneity level of the Web server system affects the achievable level of performance.
Specifically, our results indicate that if the degree of heterogeneity is within
50%, the probability of node overloading guaranteed by the best adaptive TTL policies is
always less than 0.05-0.10. Therefore, to achieve satisfactory performance, it would
be desirable not to exceed this heterogeneity level in the design of a distributed Web
server system.
- Moreover, the adaptive TTL algorithms give the best results even for homogeneous
Web server systems. In these instances, they are almost always able to avoid overloading
nodes even if the ADNS control on the client requests remains below 3-4% of the
total load reaching the Web site.
9 Conclusions
Although distributed Web server systems may greatly improve performance and enhance
fault-tolerance of popular Web sites, their success depends on load sharing algorithms that
are able to automatically assign client requests to the most appropriate node. Many interesting
scheduling algorithms for parallel and distributed systems have previously been
proposed. However, none of them can be directly adopted for dynamically sharing the load
in a distributed Web server system when request dispatching is carried out by the Authoritative
DNS of the Web site. The main problems are that the ADNS dispatcher controls only
a small fraction of the client requests which actually reach the Web server system. Besides
that, these requests are unevenly distributed among the Internet domains. The problems
are further complicated when we consider the more likely scenario of a Web server system
consisting of heterogeneous nodes.
We first showed that extending known scheduling strategies or those adopted for the
homogeneous node case [13] does not lead to satisfactory results. Therefore, we propose a
different class of strategies, namely adaptive TTL schemes. They assign a different expiration
time (TTL value) to each address mapping, taking into account the capacity of the chosen
node and/or the relative load weight of the domain which has originated the client request.
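To make the idea concrete, the following is a minimal sketch (our own simplification, with invented function and parameter names, not the exact TTL/S K or TTL/K formulas): the TTL assigned to a mapping grows with the relative capacity of the selected node, shrinks with the relative hit rate of the requesting domain, and is clipped to a minimum TTL that name servers are likely to accept.

```python
# Hypothetical illustration of an adaptive TTL computation (not the paper's exact formula).
def adaptive_ttl(base_ttl, node_capacity, avg_capacity,
                 domain_hit_rate, avg_hit_rate, min_ttl=60):
    # Larger-capacity nodes can keep a mapping valid longer; hotter domains get
    # shorter TTLs so their requests return to the ADNS more often.
    ttl = base_ttl * (node_capacity / avg_capacity) * (avg_hit_rate / domain_hit_rate)
    return max(min_ttl, ttl)

print(adaptive_ttl(300, node_capacity=2.0, avg_capacity=1.0,
                   domain_hit_rate=5.0, avg_hit_rate=1.0))   # -> 120
```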
A key result of this paper is that, in most situations, the simple combination of an alarm
signal from overloaded nodes and adaptive TTL dramatically reduces load imbalance even
when the Web server system is highly heterogeneous and the ADNS scheduler controls a very
limited portion of the incoming requests. Moreover, the proposed strategies demonstrate high
robustness. Their performance is almost unaffected even when the error in estimating the
domain load is sizable (say 30%), and it is only slightly affected in the presence of some
non-cooperative name servers, that is, name servers and gateways not accepting the low TTL
values that could sometimes be proposed by the ADNS.
Acknowledgements
This work was entirely carried out while Michele Colajanni was a visiting researcher at
the IBM T.J. Watson Research Center, Yorktown Heights, NY. This paper benefited from
preliminary discussions held with Daniel Dias, IBM T.J. Watson Research Center, and from
contributions in experiments given by Valeria Cardellini, University of Roma Tor Vergata,
Italy. The authors wish to thank the anonymous referees for their helpful comments on
earlier versions of this paper.
References
DNS and BIND
Alteon Web Systems
http://www.
http://www.
Common Log
http://www.
http://www.
http://www.
http://www.
Dispatcher, http://www.
CSIM18: The Simulation Engine
Human Behavior and the Principle of Least Effort
Cited by:
Xueyan Tang , Samuel T. Chanson, Adaptive hash routing for a cluster of client-side web proxies, Journal of Parallel and Distributed Computing, v.64 n.10, p.1168-1184, October 2004
Request Redirection Algorithms for Distributed Web Systems, IEEE Transactions on Parallel and Distributed Systems, v.14 n.4, p.355-368, April
Lap-sun Cheung , Yu-kwok Kwok, On Load Balancing Approaches for Distributed Object Computing Systems, The Journal of Supercomputing, v.27 n.2, p.149-175, February 2004 | performance analysis;load balancing;distributed systems;domain name system;World Wide Web;global scheduling algorithms |
Clustering for Approximate Similarity Search in High-Dimensional Spaces

Abstract: In this paper, we present a clustering and indexing paradigm (called Clindex) for high-dimensional search spaces. The scheme is designed for approximate similarity searches, where one would like to find many of the data points near a target point, but where one can tolerate missing a few near points. For such searches, our scheme can find near points with high recall in very few IOs and perform significantly better than other approaches. Our scheme is based on finding clusters and, then, building a simple but efficient index for them. We analyze the trade-offs involved in clustering and building such an index structure, and present extensive experimental results.

1 Introduction
Similarity search has generated a great deal of interest lately because of
applications such as similar text/image search and document/image copy
detection. These applications characterize objects (e.g., images and text
documents) as feature vectors in very high-dimensional spaces [13, 23]. A
user submits a query object to a search engine, and the search engine returns
objects that are similar to the query object. The degree of similarity between
two objects is measured by some distance function between their feature
vectors. The search is performed by returning the objects that are nearest
to the query object in high-dimensional spaces.
Nearest-neighbor search is inherently expensive, especially when there
are a large number of dimensions. First, the search space can grow exponentially
with the number of dimensions. Second, there is simply no way to
build an index on disk such that all nearest neighbors to any query point
are physically adjacent on disk. (We discuss this "curse of dimensional-
ity" in more detail in Section 2.) Fortunately, in many cases it is sufficient
to perform an approximate search that returns many but not all nearest
neighbors [2, 17, 15, 27, 29, 30]. (A feature vector is often an approximate
characterization of an object, so we are already dealing with approximations
anyway.) For instance, in content-based image retrieval [11, 19, 40]
and document copy detection [9, 13, 20], it is usually acceptable to miss a
small fraction of the target objects. Thus it is not necessary to pay the high
price of an exact search.
In this paper we present a new similarity-search paradigm: a cluster-
ing/indexing combined scheme that achieves approximate similarity search
with high efficiency. We call this approach Clindex (CLustering for INDEX-
ing). Under Clindex, the dataset is first partitioned into "similar" clusters.
To improve IO efficiency, each cluster is then stored in a sequential file, and a
mapping table is built for indexing the clusters. To answer a query, clusters
that are near the query point are retrieved into main memory. Clindex then
ranks the objects in the retrieved clusters by their distances to the query
object, and returns the top, say k, objects as the result.
Both clustering and indexing have been intensively researched (we survey
related work in Section 2), but these two subjects have been studied
separately with different optimization objectives: clustering optimizes classification
accuracy, while indexing maximizes IO efficiency for information
retrieval. Because of these different goals, indexing schemes often do not
preserve the clusters of datasets, and randomly project objects that are
close (hence similar) in high-dimensional spaces onto a 2D plane (the disk
geometry). This is analogous to breaking a vase (cluster) apart to fit it
into the minimum number of small packing boxes (disk blocks). Although
the space required to store the vase may be reduced, finding the boxes in
a high-dimensional warehouse to restore the vase requires a great deal of
effort.
In this study we show that by (1) taking advantage of the clustering
structures of a dataset, and (2) taking advantage of sequential disk IOs by
storing each cluster in a sequential file, we can achieve efficient approximate
similarity search in high-dimensional spaces with high accuracy. We examine
a variety of clustering algorithms on two different data sets to show that
Clindex works well when (1) a dataset can be grouped into clusters and (2)
an algorithm can successfully find these clusters. As a part of our study,
we also explore a very natural algorithm called Cluster Forming (CF) that
achieves a pre-processing cost that is linear in the dimensionality and polynomial
in the dataset size, and can achieve a query cost that is independent
of the dimensionality.
The contributions of this paper are as follows:
- We study how clustering and indexing can be effectively combined. We
show how a rather natural clustering scheme (using grids as in [1]) can
lead to a simple index that performs very well for approximate similarity
searches.
- We experimentally evaluate the Clindex approach on a well-clustered 30,000-image
database. Our results show that Clindex achieves very high recall
when the data can be well divided into clusters. It can typically return
more than 90% of what we call the "golden" results (i.e., the best results
produced by a linear scan over the entire dataset) with a few IOs.
- If the dataset does not have natural clusters, clustering may not be as effective.
Fortunately, real datasets rarely occupy a large-dimensional space
uniformly [4, 15, 38, 46, 48]. Our experiment with a set of 450,000 randomly
crawled Web images that do not have well-defined categories shows
that Clindex's effectiveness does depend on the quality of the clusters. Nevertheless,
if one is willing to trade some accuracy for efficiency, Clindex is
still an attractive approach.
- We also evaluate Clindex with different clustering algorithms (e.g., TSVQ),
and compare Clindex with other traditional approaches (e.g., tree structures)
to understand the gains achievable by pre-processing data to find
clusters.
- Finally, we compare Clindex with PAC-NN [15], a recent approach for
conducting approximate nearest-neighbor search.
The rest of the paper is organized as follows. Section 2 discusses related
work. Section 3 presents Clindex and shows how a similarity query is conducted
using our scheme. Section 4 presents the results of our experiments
and compares the efficiency and accuracy of our approach with those of some
traditional index structures. Finally, we offer our conclusions and discuss
the limitations of Clindex in Section 5.
2 Related Work
In this section we discuss related work in three categories:
1: Tree-like index structures for similarity search,
2: Approximate similarity search, and
3: Indexing by clustering.
2.1 Tree-like Index Structures
Many tree structures have been proposed to index high-dimensional data
(e.g., R*-tree [3, 24], SS-tree [45], SR-tree [28], TV-tree [31], X-tree [6], M-tree
[16], and K-D-B-tree [36]). A tree structure divides the high-dimensional
space into a number of subregions, each containing a subset of objects that
can be stored in a small number of disk blocks. Given a vector that represents
an object, a similarity query takes the following three steps in most
systems [21]:
1. It performs a where-am-I search to find in which subregion the given
vector resides.
2. It then performs a nearest-neighbor search to locate the neighboring
regions where similar vectors may reside. This search is often implemented
using a range search, which locates all the regions that overlap with the
search sphere, i.e., the sphere centered at the given vector with a diameter
d.
3. Finally, it computes the distances (e.g., Euclidean, street-block, or L 1
distances) between the vectors in the nearby regions (obtained from the
previous step) and the given vector. The search result includes all the
vectors that are within distance d from the given vector.
The performance bottleneck of similarity queries lies in the first two
steps. In the first step, if the index structure does not fit in main memory
and the search algorithm is inefficient, a large portion of the index structure
must be fetched from the disk. In the second step, the number of neighboring
subregions can grow exponentially with respect to the dimension of
the feature vectors. If D is the number of dimensions, the number of neighboring
subregions can be on the order of O(3 D ) [21]. Roussopoulos et. al
[37] propose the branch-and-bound algorithm and Hjaltason and Samet [25]
propose the priority queue scheme to reduce the search space. But, when D
is very large, the reduced number of neighboring pages to access can still be
quite large. Berchtold et. al [5] propose the pyramid technique that partitions
a high dimensional space into 2D pyramids and then cuts each pyramid
into slices that are parallel to the basis of the pyramid. This scheme does
not perform satisfactorily when the data distribution is skewed or when the
search hypercube touches the boundary of the data space.
In addition to being copious, the IOs can be random and hence exceedingly
expensive. 1 An example can illustrate what we call the random-
placement syndrome faced by traditional index structures. Figure 1(a) shows
a 2-dimensional Cartesian space divided into 16 equal stripes in both the
vertical and the horizontal dimensions, forming a 16 × 16 grid structure.
The integer in a cell indicates how many points (objects) are in the cell.
Most index structures divide the space into subregions of equal points in a
top-down manner. Suppose each disk block holds 40 objects. One way to divide
the space is to first divide it into three vertical compartments (i.e., left,
middle, and right), and then to divide the left compartment horizontally.
We are left with four subregions A, B, C and D containing about the same
number of points. Given a query object residing near the border of A, B
and D, the similarity query has to retrieve blocks A, B and D. The number
of subregions to check for the neighboring points grows exponentially with
To transfer 100 KBytes of data on a modern disk with 8-KByte block size, doing a
sequential IO is more than ten times faster than doing a random IO. The performance
gap widens as the size of data transferred increases.
respect to the data dimension.
Figure 1: The shortcomings of tree structures: (a) clustering by indexing; (b) random placement.
Furthermore, since in high-dimensional spaces the neighboring subregions
cannot be arranged in a manner sequential to all possible query ob-
jects, the IOs must be random. Figure 1(b) shows a 2-dimensional example
of this random phenomenon. Each grid in the figure, such as A and B, represents
a subregion corresponding to a disk block. The figure shows three
possible query objects x, y and z. Suppose that the neighboring blocks of
each query object are its four surrounding blocks. For instance, blocks A,
B, D and E are the four neighboring blocks of object x. If the neighboring
blocks of objects x and y are contiguous on disk, then the order must be
CFEBAD, or FCBEDA, or their reverse orders. Then it is impossible
to store the neighboring blocks of query object z contiguously on disk, and
this query must suffer from random IOs. This example suggests that in
high-dimensional spaces, the neighboring blocks of a given object must be
dispersed randomly on disk by tree structures.
Many theoretical papers (e.g., [2, 27, 29]) have studied the cost of an
exact search, independent of the data structure used. In particular, these
papers show that if N is the size of a dataset, D is the dimension, and
D ≫ log N, then no nearest-neighbor algorithm can be significantly faster
than a linear search.
2.2 Approximate Similarity Search
Many studies propose conducting approximate similarity search for applications
where trading a small percentage of recall for faster search speed is
acceptable. For example, instead of searching in all the neighboring blocks
of the query object, study [44] proposes performing only the where-am-I step
of a similarity query, and returning only the objects in the disk block where
the query object resides. However, this approach may miss some objects
similar to the query object. Take Figure 1(a) as an example. Suppose the
query object is in the circled cell in the figure, which is near the border of
regions A, B and D. If we return only the objects in region D where the
query object resides, we miss many nearest neighbors in A.
Arya and Mount [2] suggest doing only ε-approximate nearest-neighbor
searches. Let d(·, ·) denote the function computing the distance
between two points. We say that p is an ε-approximate nearest neighbor
of q if d(p, q) ≤ (1 + ε) · d(p*, q) for every point p* in the dataset. Many studies
have attempted to devise better algorithms to reduce search time and storage
requirements. For example, Indyk and Motwani [27] and Kushilevitz et al.
[30] give algorithms with polynomial storage and query time polynomial in
log n and d. Indyk and Motwani [27] also give another algorithm with smaller
storage requirements and sublinear query time. Most of this work, however,
is theoretical. The only practical scheme that has been implemented is the
locality-sensitive hashing scheme proposed by Indyk and Motwani [27]. The
key idea is to use hash functions such that the probability of collision is
much higher for objects that are close to each other than for those that are
far apart.
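As a generic illustration of this idea (a random-hyperplane sketch of our own, not the exact scheme of [27]), the following hash assigns nearby vectors the same bit signature with much higher probability than distant ones:

```python
import numpy as np

# Hypothetical random-hyperplane hashing: one signature bit per hyperplane.
def lsh_signature(x, hyperplanes):
    # Each bit records on which side of a random hyperplane the vector lies.
    return tuple(bool(b) for b in (hyperplanes @ x) > 0)

rng = np.random.default_rng(0)
planes = rng.normal(size=(8, 48))            # 8 hash bits for 48-d vectors
a = rng.normal(size=48)
b = a + 0.01 * rng.normal(size=48)           # a near neighbor of a
print(lsh_signature(a, planes) == lsh_signature(b, planes))   # usually True
```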
Approximate search has also been applied to tree-like structures. Recent
work [15] shows that if one can tolerate ε > 0 relative error with a δ
confidence factor, one can improve the performance of the M-tree by 1-2 orders
of magnitude.
Although an ε-approximate nearest-neighbor search can reduce the search
space significantly, its recall can be low. This is because the candidate space
for sampling the nearest neighbor becomes exponentially larger than the
optimal search space. To remedy this problem, a follow-up study of [27]
builds multiple locality-preserving indexes on the same dataset [26]. This is
analogous to building n tree indexes on the same dataset, and each index
distributes the data into data blocks differently. To answer a query, one
retrieves one block following each of the indexes and combines the results.
This approach achieves better recall than is achieved by having only one
index. But in addition to the n times pre-processing overhead, it has to
replicate the data n times to ensure that sequential IOs are possible via
every index.
2.3 Clustering and Indexing by Clustering
Clustering techniques have been studied in the statistics, machine learning
and database communities. The work in different communities focuses on
different aspects of clustering. For instance, recent work in the database
community includes CLARANS [35], BIRCH [48], DBSCAN [18], CLIQUE
[1], and WaveClusters [39]. These techniques have high degree of success in
identifying clusters in a very large dataset, but they do not deal with the
efficiency of data search and data retrieval.
Of late, clustering techniques were explored for efficient indexing in high-dimensional
spaces. Choubey, Chen and Rundensteiner [14] propose pre-processing
data using clustering schemes before bulk-loading the clusters
into an R-tree. They show that the bulk-loading scheme does not hurt
retrieval efficiency compared to inserting tuples one at a time. Their bulk-loading
scheme speeds up the insertion operation but is not designed to
tackle the search efficiency problem that this study focuses on. For improving
search efficiency, we presented the RIME system that we built [13]
for detecting replicated images using an early version of Clindex [12]. In
this paper, we extend our early work for handling approximate similarity
search and conduct extensive experiments to compare our approach with
others. Bennett, Fayyad and Geiger [4] propose using the EM (Expecta-
tion Maximization) algorithm to cluster data for efficient data retrieval in
high-dimensional spaces. The quality of the EM clusters, however, depends
heavily on the initial setting of the model parameters and the number of clus-
ters. When the data distribution does not follow a Gaussian distribution
and the number of clusters and the Gaussian parameters are not initialized
properly, the quality of the resulting clusters suffers. How these initial parameters
should be selected is still an open research problem [7, 8]. For
example, it is difficult to know how many clusters exist before EM is run,
but their EM algorithm needs this number to cluster effectively. Also, the
EM algorithm may take many iterations to converge, and it may converge
to one of many local optima [7, 32].
The clustering algorithm that Clindex uses in this paper does not make
any assumption on the number of clusters nor on the distribution of the
data. Instead, we use a bottom-up approach that groups objects adjacent
in the space into a cluster. (This is similar to grouping stars into galaxies.)
Therefore, a cluster can be in any shape. We believe that when the data
is not uniformly distributed, this approach can preserve the natural shapes
of the clusters. In addition, Clindex treats outliers differently by placing
them in separate files. An outlier is not close to a cluster, so it is not similar
to the objects in the clusters. However, an outlier can be close to other
outliers. Placing outliers separately helps us search for similar outliers
more efficiently.
3 Clindex Algorithm
Since the traditional approaches suffer from a large number of random IOs,
our design objectives are (1) to reduce the number of IOs, and (2) to make
the IOs sequential as much as possible. To accomplish these objectives,
we propose a clustering/indexing approach. We call this scheme Clindex
(CLustering for INDEXing). The focus of this approach is to
- Cluster similar data on disk to minimize disk latency for retrieving similar
objects, and
- Build an index on the clusters to minimize the cost of looking up a cluster.
To couple indexing with clustering, we propose using a common structure
that divides space into grids. As we will show shortly, the same grids
that we use for clustering are used for indexing. In the other direction,
an insert/delete operation on the indexing structure can cause a regional
reclustering on the grids. Clindex consists of the following steps:
1. It divides the i-th dimension into 2^λ stripes. In other words, in each
dimension it chooses 2^λ − 1 dividing points. (We describe
how to adaptively choose these dividing points in Section 3.3.2.) The
stripes in turn define 2^(λ×D) cells, where D is the dimension of the search
space. This way, using a small number of bits (λ bits in each dimension)
we can encode to which cell a feature vector belongs (a small encoding
sketch is given right after this list).
2. It groups the cells into clusters. A cell is the smallest building block
for constructing clusters of different shapes. This is similar to the idea in
calculus of using small rectangles to approximate polynomial functions of
any degree. The finer the stripes, the smaller the cells and the finer the
building blocks that approximate the shapes of the clusters. Each cluster
is stored as a sequential file on disk.
3. It builds an index structure to refer to the clusters. A cell is the smallest
addressing unit. A simple encoding scheme can map an object to a cell ID
and retrieve the cluster it belongs to in one IO.
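A minimal sketch of the cell-ID encoding in step 1 (the function and parameter names are ours; λ = 2 bits per dimension is assumed here for illustration):

```python
import numpy as np

def cell_id(vec, lo, hi, lam=2):
    # Quantize each dimension of the feature vector to lam bits; lo/hi are the
    # per-dimension value ranges used to place the 2**lam equal-width stripes.
    stripes = 2 ** lam
    scaled = (np.asarray(vec, dtype=float) - lo) / (hi - lo)   # normalize to [0, 1]
    idx = np.clip((scaled * stripes).astype(int), 0, stripes - 1)
    return tuple(int(i) for i in idx)                          # the cell ID

lo, hi = np.zeros(4), np.ones(4)
print(cell_id([0.1, 0.9, 0.5, 0.49], lo, hi))                  # -> (0, 3, 2, 1)
```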
The remainder of this section is divided into four topics:
Clustering: Section 3.1 describes how Clindex works in its clustering phase.
Indexing: Section 3.2 depicts how a structure is built by Clindex to index
clusters.
Tuning: Section 3.3 identifies all tunable control parameters and explains
how changing these parameters affects Clindex's performance.
Querying: Finally, we show how a similarity query is conducted using Clindex
in Section 3.4.
3.1 The CF Algorithm: Clustering Cells
To perform efficient clustering in high-dimensional spaces, we use an algorithm
called cluster-forming (CF). To illustrate the procedure, Figure 2(a)
shows some points distributed on a 2D evenly-divided grid. The CF algorithm
works in the following way:
1. CF first tallies the height (the number of objects) of each cell.
2. CF starts with the cells with the highest point concentration. These
cells are the peaks of the initial clusters. (In the example in Figure
2(a), we start with the cells marked 7.)
3. CF descends one unit of the height after all cells at the current height
are processed. At each height, a cell can be in one of three conditions:
it is not adjacent to any cluster, it is adjacent to only one cluster, or it
is adjacent to more than one cluster. The corresponding actions that
CF takes are
(a) If the cell is not adjacent to any cluster, the cell is the seed of a
new cluster.
(b) If the cell is adjacent to only one cluster, we join the cell to the
cluster.
(c) If the cell is adjacent to more than one cluster, the CF algorithm
invokes an algorithm called cliff-cutting (CC) to determine
to which cluster the cell belongs, or whether the clusters should be combined.
Figure 2: The grid before and after CF: (a) before clustering; (b) after clustering.
4. CF terminates when the height drops to a threshold, which is called
the horizon. The cells that do not belong to any cluster (i.e., that are
below the horizon) are grouped into an outlier cluster and stored in
one sequential file.
Figure
2(b) shows the result of applying the CF algorithm to the data
presented in Figure 2(a). In contrast to how the traditional indexing schemes
split the data (shown in Figure 1(a)), the clusters in Figure 2(b) follow what
we call the "natural" clusters of the dataset.
The formal description of the CF algorithm is presented in Figure 3.
The input to CF includes the dimension (D), the number of bits needed
to encode each dimension (λ), the threshold used to terminate the clustering
algorithm (θ), and the dataset (P). The output consists of a set of clusters
(Φ) and a heap structure (H) sorted by cell ID for indexing the clusters. For
each cell that is not empty, we allocate a structure C that records the cell
The Cluster Forming Algorithm
Input: D, λ, θ, P (dataset)
Output: Φ (cluster set), H (heap of cells sorted by cell ID)
Variables: C (cell structure), ν (cell ID), Ψ (set of neighboring clusters), β (cluster structure)
Execution Steps:
0: Init: H ← ∅; Φ ← ∅
1: for each p ∈ P
   1.1: ν ← Cell(p)                          /* map p to a cell id */
   1.2: C ← HeapFind(H, ν)
   1.3: if (C ≠ nil) then C.#p ← C.#p + 1
        else { C ← new C; C.id ← ν; C.#p ← 1; HeapInsert(H, C) }
2: S ← {C | C ∈ H}                           /* S is a temp array holding a copy of all cells */
3: Sort(S)                                    /* sort cells in descending order on C.#p */
4: C ← the first (highest) cell of S
5: while ((C ≠ nil) and (C.#p ≥ θ))
   5.1: Ψ ← FindNeighborClusters(S, C.id)     /* Ψ contains cell C's neighboring clusters */
   5.2: if (|Ψ| = 0)                          /* the cell is not adjacent to any cluster */
           β ← new β                          /* β holds a new cluster structure */
           β.C ← C                            /* record the centroid cell of the new cluster */
           Φ ← Φ ∪ {β}                        /* insert the new cluster into the cluster set */
           C.β ← β                            /* update to which cluster the cell belongs */
        else if (|Ψ| ≥ 1)                     /* the cell is adjacent to one or more clusters */
           C.β ← CC(Ψ, C)                     /* CC is described in Section 3.1.2 */
   5.3: C ← the next cell of S
6: β ← new β; β.C ← 0                        /* group remaining cells into an outlier cluster */
7: Φ ← Φ ∪ {β}
8: for each remaining cell C in S: C.β ← β
Figure 3: The Cluster Forming (CF) algorithm.
ID (C.id), the number of points in the cell (C.#p), and the cluster that the
cell belongs to (C.β). The cells are inserted into the heap. CF is a two-pass
algorithm. After the initialization step (step 0), its first pass (step 1) tallies
the number of points in each cell. For each point p in the data set P, it
maps the point into a cell ID by calling the function Cell. The function Cell
divides each dimension into 2^λ regions. Each value in the feature vector is
replaced by an integer between 0 and 2^λ − 1, which depends on where the
value falls in the range of the dimension. The quantized feature vector is
the cell ID of the object. The CF algorithm then checks whether the cell
exists in the heap (by calling the procedure HeapFind, which can be found
in a standard algorithm book). If the cell exists, the algorithm increments
the point count for the cell. Otherwise, it allocates a new cell structure, sets
the point count to one, and inserts the new cell into the heap (by calling the
procedure HeapInsert, also in standard textbooks).
In the second pass, the CF algorithm clusters the cells. In step 2 in the
figure, CF copies the cells from the heap to a temporary array S. Then, in
steps 3 and 4 it sorts the cells in the array in descending order on the point
count (C.#p). In step 5, the algorithm checks if a cell is adjacent to some
existing clusters, starting from the cell with the greatest height down to the
termination threshold θ. If a cell is not adjacent to any existing cluster, a
new cluster β is formed in step 5.2(a). The CF algorithm records the centroid
cell for the new cluster in β.C and inserts the cluster into the cluster set
Φ. If the cell is adjacent to more than one cluster, the algorithm calls the
procedure CC (cliff cutting) in step 5.2(b) to determine which cluster the
cell should join. (Procedure CC is described in Section 3.1.2.) The cell then
joins the identified (new or existing) cluster. Finally, in steps 6 to 8, the
cells that are below the threshold are grouped into one cluster as outliers.
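The following is a compact, simplified Python sketch of the two-pass CF procedure just described (the data structures, the naive neighbor enumeration, and the merge rule are our own illustration, not the paper's exact implementation):

```python
from collections import Counter
from itertools import product

def cf_cluster(points, cell_of, theta):
    # Pass 1: tally the height (number of points) of every nonempty cell.
    height = Counter(cell_of(p) for p in points)

    def neighbors(cell):
        # Naive scan of the up to 3^D - 1 adjacent cells (the paper bounds this
        # by the number of nonempty cells instead).
        for delta in product((-1, 0, 1), repeat=len(cell)):
            if any(delta):
                yield tuple(c + d for c, d in zip(cell, delta))

    cluster_of = {}                  # cell id -> cluster id (0 = outlier cluster)
    clusters = {0: set()}            # cluster id -> set of member cells
    next_id = 0
    # Pass 2: process cells from the tallest down to the horizon theta.
    for cell, h in sorted(height.items(), key=lambda kv: -kv[1]):
        if h < theta:                # below the horizon: outlier cluster
            cluster_of[cell] = 0
            clusters[0].add(cell)
            continue
        adjacent = {cluster_of[n] for n in neighbors(cell) if n in cluster_of}
        adjacent.discard(0)
        if not adjacent:             # not adjacent to any cluster: seed a new one
            next_id += 1
            target = next_id
            clusters[target] = set()
        else:                        # CC-style rule: merge all adjacent clusters
            target = min(adjacent)
            for cid in adjacent - {target}:
                for c in clusters.pop(cid):
                    cluster_of[c] = target
                    clusters[target].add(c)
        clusters[target].add(cell)
        cluster_of[cell] = target
    return cluster_of, clusters
```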
3.1.1 Time Complexity:
We now analyze the time complexity of the CF algorithm. Let N denote
the number of objects in the dataset and M be the number of nonempty
cells. First, it takes O(D) to compute the cell ID of an object and it takes
O(D) time to check whether two cells are adjacent. During the first pass of
the CF algorithm, we can use a heap to keep track of the cell IDs and their
heights. Given a cell ID, it takes O(D × log M) time to locate the cell in the
heap. Therefore, the time complexity of the first phase is O(N × D × log M).
During the second pass, the time to find all the neighboring cells is
O(D × min{3^D − 1, M}). The reason is that there are at most three neighboring
stripes of a given cell in each dimension. We can either search all the neighboring
cells, which are at most 3^D − 1, or search all the nonempty cells, which
are at most M. In a high-dimensional space 2, we believe that M ≪ 3^D.
Therefore, the time complexity of the second phase is O(D × M^2). The total
time complexity is O(N × D × log M) + O(D × M^2).
Since the clustering phase is done off-line, the pre-processing time may
be acceptable in light of the speed that is gained in query performance.
3.1.2 Procedure CC
In the Cliff-Cutting (CC) procedure, we need to decide to which cluster the
cell belongs. Many heuristics exist. We choose the following two rules:
1: If the new object is adjacent to only one cluster, then add it to the
cluster.
2: If the new object is adjacent to more than one cluster, we
- Merge all clusters adjacent to the cell into one cluster.
- Insert this cell into the merged cluster.
Note that the above rules may produce large clusters. One can avoid
large clusters by increasing either λ or θ, as we discuss in Sections 3.3.1
and 3.3.2.
3.2 The Indexing Structure
In the second step of the CF algorithm, an indexing structure is built to
support fast access to the clusters generated. As shown in Figure 4, the
indexing structure includes two parts: a cluster directory and a mapping
table. The cluster directory keeps track of the information about all the
clusters, and the mapping table maps a cell to the cluster where the cell
resides.
All the objects in a cluster are stored in sequential blocks on disk, so
that these objects can be retrieved by efficient sequential IOs. The cluster
directory records the information about all the clusters, such as the cluster
ID, the name of the file that stores the cluster, and a flag indicating whether
the cluster is an outlier cluster. The cluster's centroid, which is the center
of all the cells in the cluster, is also stored. With the information in the
2 Consider image databases as an example. Many image databases use feature vectors with
more than 100 dimensions. Clearly, the number of images in an image database is much
smaller than 3^100.
Figure 4: The index structure (mapping table and cluster directory).
cluster directory, the objects in a cluster can be retrieved from disk once we
know the ID of the cluster.
The mapping table is used to support fast lookup from a cell to the
cluster where the cell resides. Each entry has two values: a cell ID and a
cluster ID. The number of entries in the mapping table is the number of
nonempty cells (M ), and the empty cells do not take up space. In the worst
case, we have one cell for each object, so there are at most N cell structures.
The ID of each cell is a binary code of size D × λ bits, where D is
the dimension and λ is the number of bits we choose for each dimension.
Suppose that each cluster ID is represented as a two-byte integer. The total
storage requirement for the mapping table is M × (D × λ/8 + 2) bytes. In
the worst case, M = N, and the total storage requirement for the
mapping table is on the order of O(N × D). The disk storage requirement of the
mapping table is comparable to that of the interior nodes in a tree index
structure.
Note that the cell IDs can be sorted and stored sequentially on disk.
Given a cell ID, we can easily search its corresponding entry by doing a
binary search. Therefore, the number of IOs to look up a cluster is O(log M ),
which is comparable to the cost of doing a where-am-I search in a tree index
structure.
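A small sketch of this lookup path (structure and function names are ours): the mapping table is a sorted array of (cell ID, cluster ID) pairs searched with binary search, and the cluster directory resolves a cluster ID to its sequential file; for an empty cell, the nearest centroid is used, as described later in Section 3.4.

```python
import bisect

def find_cluster_file(cell, mapping, directory, centroid_of, query_vec, dist):
    # mapping: sorted list of (cell_id, cluster_id); directory: cluster_id -> file name.
    i = bisect.bisect_left(mapping, (cell,))
    if i < len(mapping) and mapping[i][0] == cell:      # cell is nonempty
        return directory[mapping[i][1]]
    # Empty cell: fall back to the cluster whose centroid is closest to the query.
    best = min(directory, key=lambda cid: dist(centroid_of[cid], query_vec))
    return directory[best]
```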
3.3 Control Parameters
The granularity of the clusters is controlled by four parameters:
- D: the number of data dimensions or object features.
- λ: the number of bits used to encode each dimension.
- N: the number of objects.
- θ: the horizon.
The number of cells is determined by parameters D and λ and can be
written as 2^(D×λ). The average number of objects in each cell is N / 2^(D×λ). This
means that we are dealing with two conflicting objectives. On the one hand,
we do not want to have low point density, because low point density results
in a large number of cells but a relatively small number of points in each
cell and hence tends to create many small clusters. For similarity search
you want to have a sufficient number of points to present a good choice to
the requester, say, at least seven points in each cluster [33]. On the other
hand, we do not want to have densely-populated cells either, since having
high point density results in a small number of very large clusters, which
cannot help us tell objects apart.
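As a rough worked example of these quantities (using the 48-dimensional feature vectors and the 450,000-object dataset of Section 4, and assuming λ = 2 bits per dimension; the numbers are ours, for illustration only):

```latex
2^{D\lambda} = 2^{48 \times 2} = 2^{96} \approx 7.9 \times 10^{28} \ \text{cells}, \qquad
\frac{N}{2^{D\lambda}} = \frac{450{,}000}{2^{96}} \approx 5.7 \times 10^{-24} \ \text{objects per cell on average}.
```

In other words, almost every cell is empty, which is why only the M nonempty cells are ever materialized (Section 3.2) and why clusters emerge only where the data actually concentrates.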
Figure 5: A Clustering Example.
3.3.1 Selecting θ
The value of θ affects the number and the size of the clusters. Figure 5
shows an example in a one-dimensional space. The horizontal axis is the
cell IDs and the vertical axis the number of points in the cells. The θ value,
set at the threshold level t shown in the figure, forms four clusters. The cells whose heights are
below t are clustered into the outlier cluster. If the threshold is reduced,
both the cluster number and the cluster size may change, and the size of the
outlier cluster decreases. If the outlier cluster has a relatively small number
of objects, then it can fit into a few sequential disk blocks to improve its IO
efficiency. On the other hand, it might be good for the outlier cluster to be
relatively large, because then it can keep the other clusters well separated.
The selection of a proper θ value can be quite straightforward in an
indirect way. First, one decides what percentage of data objects should be
outliers. We can then add to the fifth step of the CF algorithm (in Figure 3) a
termination condition that places all remaining data into the outlier cluster
once the number of remaining data objects drops below the threshold.
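A small sketch of this indirect selection (our own illustration): choose θ as the smallest cell height that must still be clustered, given a target fraction of outlier objects.

```python
def pick_theta(cell_heights, outlier_fraction, total_points):
    budget = outlier_fraction * total_points   # how many objects may be outliers
    below = 0
    for h in sorted(cell_heights):             # ascending cell heights
        if below + h > budget:
            return h                           # first height that must stay clustered
        below += h
    return max(cell_heights) + 1               # degenerate case: everything is an outlier

heights = [7, 7, 5, 4, 3, 2, 2, 1, 1, 1]
print(pick_theta(heights, 0.15, sum(heights)))  # -> 2: cells of height < 2 become outliers
```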
3.3.2 Selecting λ
Due to the uneven distribution of the objects, it is possible that some regions
in the feature space are sparsely populated and others densely populated.
To handle this situation, we need to be able to perform adaptive clustering.
Suppose we divide each dimension into 2^λ stripes. We can divide regions
that have more points into smaller substripes. This way, we may avoid
very large clusters, if this is desirable. In a way, this approach is similar
to the extensible hashing scheme: for buckets that have too many points,
the extensible hashing scheme splits the buckets. In our case, we can build
clusters adaptively with different resolutions by choosing the dividing points
carefully based on the data distribution. For example, for image feature
vectors, since the luminescence is neither very high nor very low in most
pictures, it makes sense to divide the luminescence spectrum coarsely at
the two extremes and finely in the middle. This adaptive clustering step
can be done in Step 1.1 in Figure 3. When the Cell procedure quantizes
the value in each dimension, it can take the statistical distribution of the
dataset in that dimension into consideration. This step, however, requires
an additional data analysis pass before we execute the CF algorithm, so that
the Cell procedure has the information to determine how to quantize each
dimension properly.
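As a sketch of such distribution-aware dividing points (our own illustration of the idea, using per-dimension quantiles computed in the extra analysis pass):

```python
import numpy as np

def adaptive_boundaries(data, lam=2):
    # Interior quantiles give (2**lam - 1) cut points per dimension, so densely
    # populated value ranges are divided more finely than sparse ones.
    q = np.linspace(0, 1, 2 ** lam + 1)[1:-1]
    return np.quantile(data, q, axis=0)            # shape: (2**lam - 1, D)

def adaptive_cell_id(vec, boundaries):
    # The stripe index in each dimension is the number of cut points below the value.
    return tuple(int(np.searchsorted(boundaries[:, d], v, side="right"))
                 for d, v in enumerate(vec))

data = np.random.default_rng(1).normal(size=(1000, 4))
bounds = adaptive_boundaries(data)
print(adaptive_cell_id(data[0], bounds))
```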
To summarize, CF is a bottom-up clustering algorithm that can approximate
the cluster boundaries to any fine detail and is adaptive to data
distributions. Since each cluster is stored contiguously on disk, a cluster can
be accessed with much more efficient IOs than traditional index-structure
methods.
3.4 Similarity Search
Given a query object, a similarity search is performed in two steps. First,
Clindex maps the query object's feature vector into a binary code as the ID
of the cell where the object resides. It then looks up in the mapping table
to find the entry of the cell. The search takes the form of different actions,
depending on whether or not the cell is found:
- If the cell is found, we obtain the cluster ID to which the cell belongs. We
find the file name where the cluster is stored in the cluster directory, and
then read the cluster from disk into memory.
- If the cell is not found, the cell must be empty. We then find the cluster
closest to the feature vector by computing and comparing the distances
from the centroids of the clusters to the query object. We read the nearest
cluster into main memory.
If a high recall is desirable, we can read in more nearby clusters. After we
read candidate objects in these nearby clusters into main memory, we sort
them according to their distances to the query object, and return the nearest
objects to the user.
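A short sketch of this query procedure (parameter names and data structures are ours, matching the earlier lookup sketch; fetch controls how many nearby clusters, i.e., sequential IOs, are used):

```python
import numpy as np

def knn_query(q, k, cell_id_fn, cell_to_cluster, clusters, centroids, fetch=1):
    q = np.asarray(q, dtype=float)
    cid = cell_to_cluster.get(cell_id_fn(q))
    # Rank clusters by centroid distance; the query's own cluster (if any) goes first.
    order = sorted(centroids, key=lambda c: np.linalg.norm(centroids[c] - q))
    if cid is not None:
        order = [cid] + [c for c in order if c != cid]
    candidates = []
    for c in order[:fetch]:                      # each fetched cluster is one sequential IO
        candidates.extend(clusters[c])           # clusters[c]: list of feature vectors
    candidates.sort(key=lambda x: np.linalg.norm(np.asarray(x) - q))
    return candidates[:k]
```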
Remarks:
Our search can return more than one cluster whose centroid is close to the
query object by checking the cluster directory. Since the number of clusters
is much smaller than the number of objects in the dataset, the search for
the nearest clusters can very likely be done via an in-memory lookup. If the
number of clusters is very large, one may consider treating cluster centroids
as points and apply clustering algorithms on the centroids. This way, one
can build a hierarchy of clusters and narrow down the search space to those
clusters (in a group of clusters) that are nearest to the query object. Another
alternative is to precompute the k-nearest clusters and store their IDs in each
cluster. As we show in Section 4, returning the top few clusters can achieve
very high recall with very little time.
Example:
Figure 6: A 2D example of a similarity search.
Figure
6 shows a 2D example of how a similarity search is carried out.
In the figure, A, B, C, D and E are five clusters. The areas not covered
by these clusters are grouped into an outlier cluster. Suppose that a user
submits x as the query object. Since the cell of object x belongs to cluster
A, we return the objects in cluster A and sort them according to their
distances to x. If a high recall is required, we return more clusters. In this
case, since clusters B and C are nearby, we also retrieve the objects in these
two clusters. All the objects in clusters A, B and C are ranked based on
their distances to object x, and the nearest objects are returned to the user.
If the query object is y, which falls into the outlier cluster, we first retrieve
the outlier cluster. By definition, the outlier cluster stores all outliers,
and hence the number of points in the outlier that can be close to y is small.
We also find the two closest clusters D and E, and return the nearest objects
in these two clusters.
4 Evaluation
In our experiments we focused on queries of the form "find the top k most
similar objects" or k Nearest Neighbors (abbreviated as k-NN). For each
k-NN query, we return the top k nearest neighbors of the query object.
To establish the benchmark to measure query performance, we scanned the
entire dataset for each query object to find its top 20 nearest neighbors, the
"golden" results. There are at least three metrics of interest to measure the
query result:
(a) Recall after X IOs: After X IOs are performed, what fraction of the
k top golden results has been retrieved?
(b) Precision after X IOs: After X IOs, what fraction of the objects
retrieved is among the top k golden results?
(c) Precision after R% of the top k golden objects has been found. We
call this R-precision, and it quantifies how much "useful" work was done
in obtaining some fraction of the golden results.
In this paper we focus on recall results, and comment only briefly on
R-precision and relative distance errors [47]. In our environment, we believe
that precision is not the most useful metric since the main overhead is IOs,
not the number of non-golden objects that are seen.
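For concreteness, recall after X IOs can be computed as follows (a trivial helper of our own):

```python
def recall_after_ios(retrieved_ids, golden_topk_ids):
    # Fraction of the k golden nearest neighbors found among the objects
    # retrieved in the first X IOs.
    found = set(retrieved_ids) & set(golden_topk_ids)
    return len(found) / len(golden_topk_ids)

print(recall_after_ios(["a", "b", "c", "x", "y"], ["a", "b", "c", "d"]))  # 0.75
```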
We performed our experiments on two sets of images: a 30,000 and a
450,000 image set. The first set contains 30,000 images from Corel CDs,
which have been extensively used as a test-bed for content-based image retrieval
systems by the computer vision and image processing communities.
This 30,000-image dataset consists of images of different categories, such as
landscapes, portraits, and buildings. The other dataset is a set of 450,000
randomly crawled images from the Internet. 3 This 450,000-image dataset
has a variety of content, ranging from sports, cartoons, clip art, etc., and can
be difficult to classify even manually. To characterize images, we converted
each image to a 48-dimensional feature vector by applying the Daubechies'
wavelet transformation [13].
In addition to using the CF clustering algorithm, we indexed these feature
vectors using five other schemes: Equal Partition (EP), Tree-Structured
Vector Quantization (TSVQ) [22], R*-tree, SR-tree, and M-tree with the
PAC-NN scheme [15]:
- EP: To understand the role that clustering plays in Clindex, we devised a
simple scheme, EP, that partitions the dataset into sequential files without
performing any clustering. That is, we partitioned the dataset into cells
with an equal number of images, where each cell occupies a contiguous
region in the space and is stored in a sequential file. Since EP is very
similar to Clindex with CF except for the clustering, any performance
differences between the schemes must be due to the clustering. If the
3 The readers are encouraged to experiment with the prototype, which is made available
on-line [10]. We have been experimenting with our new approaches in image characterization
so that the prototype may change from time to time.
differences are significant, it will mean that the dataset is not uniformly
distributed and that Clindex with CF can exploit the clusters.
- TSVQ: To evaluate the effectiveness of Clindex's CF clustering algorithm,
we replaced it with a more sophisticated algorithm, and then stored the
clusters sequentially as usual. The replacement algorithm used is TSVQ,
a k-means algorithm [34] that was implemented for clustering data in high-dimensional
spaces. It has been widely used in data compression and
lately in indexing high-dimensional data for content-based image retrieval
[41, 42].
- R*-tree and SR-tree: Tree structures are often used for similarity searches
in multidimensional spaces. To compare, we used the R*-tree and SR-tree
structures implemented by Katayama and Satoh [28]. We studied the IO
cost and recall of using these tree-like structures to perform similarity
search approximately. Note that the comparison between Clindex and
these tree-like structures will not be "fair" since neither R*-tree nor SR-tree
performs off-line analysis (i.e., no clustering is done in advance). Thus, the
results will only be useful to quantify the potential gains that the off-line
analysis gives us, compared to traditional tree-based similarity search.
• M-tree with PAC-NN: We also compare Clindex with Ciaccia and Patella's
PAC-NN implementation of the M-tree [15]. The PAC-NN scheme uses
two parameters, ε and δ, to adjust the tradeoff between search speed and
search accuracy. The accuracy parameter ε allows for a certain relative error in the
results, and the confidence parameter δ guarantees, with probability at least 1 − δ,
that ε will not be exceeded.
Since the PAC-NN implementation that we use supports only 1-NN search,
we performed 20 successive 1-NN queries to obtain 20 nearest neighbors. After each
1-NN query, the nearest neighbor found was removed from the M-tree.
After the 20th 1-NN query, we had removed 20 nearest neighbors from the
dataset, and we compared them with the golden set to compute recall. This
test procedure is reliable for measuring recall for PAC-NN; however, it
alters the search cost. It is well known that the cost of an
exact 20-NN query is not 20 times the cost of a 1-NN query, but much
less, and for approximate queries this seems to be the case as well. To ensure a
fair comparison, we present the PAC-NN experimental results separately
in Section 4.2.
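The delete-and-repeat procedure can be sketched as follows in Python (our own illustration; nearest_neighbor, delete and insert are assumed operations of a hypothetical index object, not the actual PAC-NN API):

```python
def approximate_knn_by_repeated_1nn(index, query, k=20):
    """Emulate a k-NN search with an index that only supports 1-NN queries."""
    results = []
    for _ in range(k):
        nn = index.nearest_neighbor(query)   # approximate 1-NN search
        results.append(nn)
        index.delete(nn)                     # remove it so the next query finds a new neighbor
    for obj in results:                      # restore the index afterwards
        index.insert(obj)
    return results
```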
As discussed in Section 3.4, Clindex always retrieves the cluster whose
centroid is closest to the query object. We also added this intelligence to
both TSVQ and EP to improve their recall. For the R*-tree and SR-tree,
however, we did not add this optimization. 4
To measure recall, we used the cross-validation technique [34] commonly
employed to evaluate clustering algorithms. We ran each test ten times,
and each time we set aside 100 images as the test images and used the
remaining images to build indexes. We then used these set-aside images to
query the database built under four different index structures. We produced
the average query recall for each run by averaging the recall of 100 image
queries. We then took an average over 10 runs to obtain the final recall.
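A compact sketch of this evaluation loop in Python (our own pseudocode; build_index, knn_search and golden_knn are placeholders for whichever indexing scheme and exact search are being tested):

```python
import random

def average_recall(images, build_index, knn_search, golden_knn,
                   rounds=10, holdout=100, k=20):
    """Hold out `holdout` query images, index the rest, and average k-NN recall."""
    per_round = []
    for _ in range(rounds):
        random.shuffle(images)
        queries, rest = images[:holdout], images[holdout:]
        index = build_index(rest)                 # e.g., Clindex/CF, TSVQ, EP, ...
        recalls = []
        for q in queries:
            golden = golden_knn(rest, q, k)       # exact top-k by sequential scan
            retrieved = knn_search(index, q, k)   # approximate top-k from the index
            recalls.append(len(set(golden) & set(retrieved)) / k)
        per_round.append(sum(recalls) / len(recalls))
    return sum(per_round) / len(per_round)
```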
Our experiments are intended to answer the following questions:
1. How does clustering affect the recall of an approximate search? In
Section 4.1 we run the different algorithms on two datasets of
different clustering quality to find out the effects of clustering.
2. How does the performance of Clindex compare to the PAC-NN scheme?
In Section 4.2 we show how the PAC-NN scheme trades accuracy for speed
in an M-tree structure.
3. How does Clindex perform in terms of elapsed time, compared to the
traditional structures and to sequentially reading in the
entire file? In Section 4.3 we examine and compare how long the five
schemes take to achieve a target recall.
4. How does block size affect recall? In Section 4.4 we cluster the 30,000-
image set using different block sizes to answer this question.
In the above experiments we collected the recall for 20-NN queries. In
Section 4.5, we present the recall of k-NN queries for other values of k (up to 20).
4.1 Recall versus Clustering Algorithms
Figures 7 and 8 compare the recalls of CF, TSVQ, EP, the R*-tree and the SR-tree.
We discuss the experimental results on the two data sets separately.
• The 30,000-image dataset (Figure 7): In this experiment, all schemes divided
the dataset into about 256 clusters. Given one IO to perform, CF
returns the objects in the nearest cluster, which gives us an average of
62% recall (i.e., it returns 62% of the top 20 golden results). After we
read three more clusters, the accumulated recall of CF increases to 90%.
4 A scheme like the R*-tree or SR-tree could be modified to pre-analyze the data and build
a better structure. The results would presumably be similar to the results of EP, which
keeps centroid information for each data block to assist effective nearest neighbor search.
We did not develop such a modified R*-tree or SR-tree scheme, and hence did not verify this
hypothesis.
Figure 7: Recalls of Five Schemes (20-NN) on 30,000 Images (recall vs. number of IOs).
If we read more clusters, the recall still increases, but at a much slower
pace. After we read 15 clusters, the recall approaches 100%. That is, we
can selectively read in about 6% (15/256) of the data in the dataset to obtain
almost all of the top 20 golden results.
The EP scheme obtains much lower recall than CF. It starts with 30%
recall after the first IO is completed, and progresses only slowly to 83% with
additional IOs. The TSVQ structure, although it does better than the EP scheme,
still lags behind CF. It obtains 35% recall after one IO, and the recall does
not reach 90% until after 10 IOs. The recall of CF and TSVQ converges
after 20 IOs. Finally, the R*-tree and SR-tree suffer from the lowest recall.
(That the SR-tree performs better than the R*-tree is consistent with the results
reported in [28].)
Figure 8: Recalls of Five Schemes (20-NN) on 450,000 Images (recall vs. number of IOs).
• The 450,000-image dataset (Figure 8): In this experiment, all schemes
divided the dataset into about 1,500 clusters. Given one IO to perform,
CF returns the objects in the nearest cluster, which gives us an average of
25% recall. After we read 10 more clusters, the accumulated recall of CF
increases to 60%. After we read 90 clusters, the recall approaches 90%.
Again, we can selectively read in 6% (90/1,500) of the data in the dataset
to obtain 90% of the top 20 golden results. As expected, the achieved recall
on this image dataset is not as good as that on the 30,000-image set. We
believe that this is because the randomly crawled images may not form
clusters that are as well separated in the feature space as the clusters of
the 30,000-image set. Nevertheless, in both cases the recall versus the
percentage of data retrieved from the dataset shows that CF can achieve
good recall by retrieving a small percentage of the data.
The EP scheme achieves much lower recall than CF. It starts with 1%
recall after the first IO is completed, and slowly progresses to 33% after
100 IOs. The TSVQ structure performs worse than the EP scheme; it
achieves only 30% recall after 100 IOs. Finally, the R*-tree and SR-tree suffer
from the lowest recall. When the data does not display good clustering
quality, a poor clustering algorithm exacerbates the problem.
Remarks
Recall alone cannot entirely convey the quality of the approximate results.
For instance, at a given recall, say 60%, the approximate results can be very
different (in terms of distance) from the query point, or they can be fairly similar
to the query. Figure 9 uses the relative distance error metric proposed in [47]
to report the quality of the approximation. Let Q denote the query point in
the feature space, D^k_G the average distance of the top-k golden results from
Q in the feature space, and D^k_A the average distance of the top-k actual
results from Q. The relative distance error is measured by
(D^k_A - D^k_G) / D^k_G.
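In code, the metric is a short function (our own sketch; dist can be any distance function, e.g., Euclidean):

```python
def relative_distance_error(query, golden, actual, dist):
    """Relative distance error of an approximate k-NN answer, as defined above."""
    d_golden = sum(dist(query, g) for g in golden) / len(golden)   # D^k_G
    d_actual = sum(dist(query, a) for a in actual) / len(actual)   # D^k_A
    return (d_actual - d_golden) / d_golden
```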
Figure 9: Relative Distance Errors (20-NN) on 30,000 Images (relative distance error vs. number of IOs).
Figure 9 shows that the relative distance errors of both Clindex and
TSVQ can be kept under 20% after ten IOs. In contrast, the relative distance errors
of the tree structures remain quite large (more than 80%) even after many more IOs.
4.2 The PAC-NN Scheme
We conducted PAC-NN experiments on the 30,000-image dataset. We observed
that the value of δ plays a more important role than the value of ε in
determining the search quality and cost. Table 1 shows one representative
set of results for a fixed ε setting. (Setting ε to other values does not change
the tradeoff pattern between search accuracy and cost.) Under five different
δ settings, 0, 0.01, 0.02, 0.05 and 0.1, the table shows recall, number of
IOs and wall-clock query time (for all 20 1-NN queries). (As we mentioned
previously, the cost of 20 1-NN queries can be substantially higher than the
cost of one 20-NN query. Therefore, one should treat the number of
IOs and the elapsed time in the table only as loose bounds on the cost.)
When δ is increased from zero to 0.01, the recall of PAC-NN drops from
95.3% to 40%, but the number of IOs is reduced substantially, to about one eighth,
and the wall-clock elapsed time is reduced to one fifth. When we increase δ
further, both the recall and the cost continue to drop.
Remarks
• Quality of approximation: From the perspective of some of the other performance
metrics that have been proposed to measure quality of approximation [47], the
PAC-NN scheme trades very slight accuracy for substantial search speedup.
For multimedia applications where one may care more about getting similar
results than finding the exact top-NN results, the PAC-NN scheme provides
a scalable and effective way to trade accuracy for speed.
• IO cost: Since the PAC-NN scheme we experimented with is implemented
on the M-tree, and a tree-like structure tends to distribute a natural cluster
over many non-contiguous disk blocks (we discussed this problem in
Section 2), the M-tree takes more IOs than Clindex to achieve a given
recall. This observation is consistent with the results obtained from using
other tree-like structures to index the images.
• Construction Time: It took only around 1.5 minutes to build an M-tree,
while building Clindex with CF takes 30 minutes on a 600MHz Pentium-III
workstation with 512 MBytes of DRAM. As we explained in Section 3,
since the clustering phase is done off-line, the pre-processing time may be
acceptable in light of the speed that is gained in query performance.
δ                          0        0.01     0.02     0.05     0.1
Recall                     95.3%    40.0%    25.3%    12.2%    5.9%
Number of IOs              13,050
Wall-clock Time (seconds)  15       3        2        1.6      1.3
Table 1: The Performance and Cost of PAC-NN.
4.3 Running Time versus Recall
We have shown that clustering indeed helps improve recall when a dataset
is not uniformly distributed in the space. This is shown by the recall gap
between CF and EP in Figures 7 and 8. We have also shown that by
retrieving only a small fraction of data, CF can achieve a recall that is quite
high (e.g., 90%). This section shows the time needed by the schemes to
achieve a target recall. We compare their elapsed time to the time it takes
to sequentially read in the entire dataset.
To report query time, we first use a quantitative model to compute IO
time. Since IO time dominates the query time, this model helps explain why
one scheme is faster than the others. We then compare the wall-clock time
we measured with the computed IO time. As we will show shortly, these
two methods give us different elapsed times, but the relative performance
between the indexes is the same.
Let B denote the total amount of data retrieved to achieve a target
recall, N the number of IOs, TR the transfer rate of the disk, and T_seek the
average disk latency. The elapsed time T of a search can be estimated as
T = N × T_seek + B / TR.
To compute T, we use the parameters of a modern disk listed in Table 2.
To simplify the computation, we assume that an IO incurs one average seek
(8.9 ms) and one half of a rotational delay (5.6 ms). We also assume that
TR is 130 Mbps and that each image record, including its wavelets and a
URL address, takes up 500 Bytes. For example, to compute the elapsed
time to retrieve 4 clusters that contain 10% of the 30,000 images, we can
express T as
T = 4 × (8.9 + 5.6) ms + (30,000 × 10% × 500 × 8 bits) / (130 Mbps) ≈ 58 ms + 92 ms ≈ 150 ms.
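The cost model is easy to script; the following Python sketch (our own, using the disk parameters of Table 2) reproduces the example figure:

```python
def io_time_ms(num_ios, bytes_read, seek_ms=8.9, half_rotation_ms=5.56, transfer_mbps=130):
    """Estimated IO time: one average seek plus half a rotation per IO,
    plus the transfer time of all bytes read at the disk's transfer rate."""
    latency = num_ios * (seek_ms + half_rotation_ms)
    transfer = bytes_read * 8 / (transfer_mbps * 1e6) * 1000.0   # bits / (bits per second) -> ms
    return latency + transfer

# Example from the text: 4 clusters holding 10% of 30,000 records of 500 bytes each.
print(io_time_ms(4, 0.10 * 30000 * 500))   # about 150 ms
```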
Figure 10(a) shows the average elapsed time versus recall for the five schemes
on the 30,000-image dataset. Note that the average elapsed time is estimated
based on the number of records retrieved in our ten rounds of
experiments, not on the number of clusters retrieved. The horizontal
line in the middle of the figure shows the time it takes to read in the entire
image feature file sequentially. To achieve 90% recall, CF and TSVQ take
substantially less time than reading in the entire file. The traditional tree
structures, on the other hand, perform worse than the sequential scan when
we would like to achieve a recall that is above 65%.
Parameter Name                   Value
Disk Capacity                    26 GBytes
Number of cylinders, CYL         50,298
Min. Transfer Rate TR            130 Mbps
Rotational Speed (RPM)           5,400
Full Rotational Latency Time     11.12 ms
Min. Seek Time                   2.0 ms
Average Seek Time                8.9 ms
Max. Seek Time                   ms
Table 2: Quantum Fireball Lct Disk Parameters.
Figure 10(b) shows the wall-clock elapsed time versus recall that we
collected on a Dell workstation, which is configured with a 600MHz Pentium-III
processor and 512 MBytes of DRAM. To eliminate the effect of caching,
we ran ten queries on each index structure and rebooted the system after
each query. The average running time of all schemes is about twice as long
as the predicted IO time. We believe that this is because the wall-clock time
includes index lookup time, distance computation time, IO bus time and
memory access time, in addition to the IO time. The relative performance
between indexes, however, are unchanged. In both figures, Clindex with CF
requires the lowest running time to reach every recall target.
In short, we can draw the following conclusions:
1. Clindex with CF achieves a higher recall than TSVQ, EP and the R*-tree,
because CF is adaptive to wide variances in cluster shapes and sizes.
TSVQ and other algorithms are forced to bound a cluster by linear
functions, e.g., a cube in 3-D space. CF is not constrained in this
way. Thus, if we have an odd cluster shape (e.g., with tentacles), CF
will conform to the shape, while TSVQ will either have to pick a big
space that encompasses all the "tentacles," or will have to select several
clusters, each for a portion of the shape, and will thereby inadvertently
break some "natural" clusters, as we illustrated in Figure 1(a). It is
not surprising that the recall of TSVQ converges to that of CF after a
number of IOs, because TSVQ eventually pieces together its "broken"
clusters.
2. Using CF to approximate an exact similarity search requires reading
just a fraction of the entire dataset. The performance is far better than
that achieved by performing a sequential scan on the entire dataset.
Figure 10: Elapsed Time vs. Recall on 30,000 Images: (a) IO time; (b) wall-clock time.
3. CF, TSVQ and EP enjoy a boost in recall in their first few additional
IOs, since they always retrieve the next cluster whose centroid is the
closest to the query object. The R*-tree, conversely, does not store centroid
information, and cannot selectively retrieve blocks to boost its recall
in the first few IOs. (We have discussed the shortcomings of tree
structures in Section 2 in detail.)
4. When a data set does not exhibit good clusters, CF's effectiveness can
suffer. Schemes with poorer clustering, such as TSVQ and EP, can suffer
even more. Thus, employing a good clustering algorithm is important
for Clindex to work well.
Recall                                60%      70%      80%      90%      ≈100%
# of Golden Objects Retrieved         12       14       16       18       20
CF       # of Objects Retrieved       115      230      345      460      1,725
         R-Precision                  10.44%   6.09%    4.64%    3.91%    1.16%
R*-tree  # of Objects Retrieved       2,670    3,430    4,090    4,750    5,310
         R-Precision                  0.45%    0.41%    0.39%    0.379%   0.377%
Table 3: R-precision of 20-NN.
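The CF entries of Table 3 follow directly from the definition of R-precision; a two-line consistency check in Python (our own illustration):

```python
def r_precision_from_recall(recall, k, objects_retrieved):
    """R-precision = golden objects found / total objects retrieved."""
    return recall * k / objects_retrieved

# CF row of Table 3: 60% recall of a 20-NN query after reading 115 objects (one cluster).
print(round(100 * r_precision_from_recall(0.60, 20, 115), 2))   # 10.43, close to the 10.44% entry
```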
Remarks on Precision
Figure 7 shows that, while retrieving the same amount of data, CF has higher
recall than the R*-tree and the other schemes. Put another way, CF also enjoys
higher R-precision because retrieving the same number of objects from disk
gives CF more golden results.
Since a tree structure is typically designed to use a small block size to
achieve high precision, we also tested the R*-tree with a larger block size and
compared its R-precision with that of CF. We conducted this experiment
only on the 30,000-image dataset. (The tree-like structures are ineffective on
the 450,000-image dataset.) It took the R*-tree 267 IOs on average to complete
an exact similarity search. Table 3 shows that the R-precision of CF is more
than ten times higher than that of the R*-tree under different recall values.
This result, although surprising, is consistent with that of many studies
[28, 43]. The common finding is that when the data dimension is high, tree
structures fail to divide points into neighborhoods and are forced to access
almost all leaves during an exact similarity search. Therefore, the argument
for using a small block size to improve precision becomes weaker as the data
dimension increases.
4.4 Recall versus Cluster Size
Since TSVQ, EP and the tree-like structures are ineffective on the 450,000-
image dataset, we conducted the remaining experiments on the 30,000-image
dataset only.
We define cluster size as the average number of objects in a cluster.
To see the effects of the cluster size on recall, we collected recall values at
different cluster sizes for CF, TSVQ, EP and the R*-tree. Note that the cluster
size of CF is determined by the selected values of - and '. We thus set
four different - and ' combinations to obtain recall values for four different
cluster sizes. For EP, TSVQ and the R*-tree, we selected cluster (block) sizes
that contain from 50 up to 900 objects. Figure 11 depicts the recall (y-axis)
of the four schemes for different cluster sizes (x-axis) after one IO and after
five IOs are performed.
Figure 11(a) shows that, given the same cluster size, CF has a higher
recall than TSVQ, EP and the R*-tree. The gaps between the clustering schemes
(CF and TSVQ) and the non-clustering schemes (EP and the R*-tree) widen as the
cluster size increases. We believe that the larger the cluster size, the more
important a role the quality of the clustering algorithm plays.
Figure 11: Recall versus Cluster Size: (a) after one IO; (b) after five IOs.
CF enjoys significantly higher recall than TSVQ, EP and the R*-tree because it
captures the clusters better than the other three schemes.
Figure 11(b) shows that CF still outperforms TSVQ, EP and the R*-tree
after five IOs. CF, TSVQ and EP enjoy better improvement in recall than
the R*-tree because, again, CF, TSVQ and EP always retrieve the next cluster
whose centroid is nearest to the query object. On the other hand, a tree
structure does not keep centroid information, and the search in the
neighborhood can be random and thus suboptimal. We believe that adding
centroid information to the leaves of the tree structures could improve their
recall.
4.5 Relative Recall versus k-NN
So far we have measured only the recall of returning the top 20 nearest neighbors.
In this section we present the recall when the top k nearest
neighbors (k up to 20) are returned after one and five IOs are performed. In these
experiments, we partitioned the dataset into about 256 clusters (blocks) for
all schemes, i.e., all schemes have about the same cluster size.
Figure 12: Recall of k-NN: (a) after one IO; (b) after five IOs.
Figures 12(a) and 12(b) present the recall, relative to
the top 20 golden results, of the four schemes after one IO and five IOs are
performed, respectively. The x-axis in the figures represents the number of
nearest neighbors requested, and the y-axis represents the relative recall.
For instance, when only one nearest object is requested, CF returns the
nearest object 79% of the time after one IO and 95% of the time after five
IOs.
Both figures show that the recall gaps between schemes are insensitive to
how many nearest neighbors are requested. We believe that once the nearest
object can be found in the returned cluster(s), the conditional probability
that additional nearest neighbors can also be found in the retrieved
clusters is high. Of course, for this conclusion to hold, the number of objects
contained in a cluster must be larger than the number of nearest neighbors
requested. In our experiments, a cluster contains 115 objects on average,
and we tested up to 20 golden results. This leads us to the following discussion
on how to tune CF's control parameters to form clusters of a proper
size.
4.6 The Effects of the Control Parameters
One of Clindex's drawbacks is that its control parameters must be tuned
to obtain quality clusters. We had to experiment with different values of
' and - to form clusters. We show here how we selected - and ' for the
30,000-image dataset.
Since the size of our dataset is much smaller than the number of cells
(2^(D×-)), many cells are occupied by only one object. When we set ' ≥ 1,
most points fell into the outlier cluster, so we set ' to zero. To test the -
values, we started with - = 2. We increased - by increments of one and
checked the effect on the recall. Figure 13 shows that the recall with respect
to the number of IOs decreases when - is set beyond two. This is because in
our dataset D >> log N, and using a - that is too large spreads the objects
apart and thereby leads to the formation of too many small clusters. By
dividing the dataset too finely, one loses the benefit of a large block size for
sequential IOs.
Figure 13: Recall versus -.
Although the proper values of - and ' are dataset dependent, we empirically
found the following rules of thumb to be useful for finding good
starting values:
• Choose a reasonable cluster size: The cluster must be large enough to take
advantage of sequential IOs. It must also contain enough objects so that
if a query object is near a cluster, then the probability that a significant
number of nearest neighbors of the query object are in the cluster is high.
On the other hand, a cluster should not be so large as to prolong query
time unnecessarily. According to our experiments, increasing the cluster
size to where the cluster contains more than 300 objects (see Figure 11)
does not further improve recall significantly. Therefore, a cluster of about
300 objects is a reasonable size.
• Determine the desired number of clusters (see the sketch after this list): Once the cluster size is chosen,
one can compute how many clusters the dataset will be divided into. If
the number of clusters is too large to make the cluster table fit in main
memory, we may either increase the cluster size to decrease the number of
clusters, or consider building an additional layer of clusters.
• Adjust - and ': Once the desired cluster size is determined, we pick the
starting values for - and '. The - value must be at least two, so that
the points are separated. The suggested ' is one, so that the clusters are
separated. After running the clustering algorithm with the initial setting,
if the average cluster is smaller than the desired size, we set ' to zero
to combine small clusters. If the average cluster size is larger than the
desired size, we can increase either - or ', until we obtain roughly the
desired cluster size.
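A small planning helper reflecting these rules of thumb (our own sketch; the default values are assumptions, not numbers from the paper):

```python
def suggest_layout(num_objects, target_cluster_size=300, record_bytes=500,
                   entry_bytes=64, memory_bytes=64 * 2**20):
    """Estimate the number of clusters for a desired cluster size and check
    whether a cluster table with `entry_bytes` per cluster fits in memory."""
    num_clusters = max(1, num_objects // target_cluster_size)
    table_bytes = num_clusters * entry_bytes
    return {
        "num_clusters": num_clusters,
        "cluster_bytes": target_cluster_size * record_bytes,   # sequential-IO unit
        "cluster_table_fits_in_memory": table_bytes <= memory_bytes,
    }

# Example: the 30,000-image dataset with ~300 objects per cluster -> ~100 clusters.
print(suggest_layout(30000))
```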
4.7 Summary
In summary, our experimental results show that:
1: Employing a better clustering algorithm improves recall as well as the
elapsed time to achieve a target recall.
2: Providing additional information such as the centroids of the clusters
helps prioritize the retrieval order of the clusters, and hence improves the
search efficiency.
3: Using a large block size is good for both IO efficiency and recall.
Our experimental results also show that Clindex is effective when it uses a
good clustering algorithm and when the data is not uniformly distributed.
When a dataset is uniformly distributed in a high-dimensional space, sequentially
reading in the entire dataset can be more effective than using
Clindex.
Conclusions
In this paper, we presented Clindex, a paradigm for performing approximate
similarity search in high-dimensional spaces to avoid the dimensionality
curse. Clindex clusters similar objects on disk, and performs a similarity
search by finding the clusters near the query object. This approach improves
the IO efficiency by clustering and retrieving relevant information
sequentially on and from the disk. Its pre-processing cost is linear in D
(dimensionality of the dataset), polynomial in N (size of the dataset) and
quadratic in M (number of non-empty cells), and its query cost is independent
of D. We believe that for many high-dimensional datasets (e.g.,
documents and images) that exhibit clustering patterns, Clindex is an attractive
approach to support approximate similarity search both efficiently
and effectively.
Experiments showed that Clindex typically can achieve 90% recall after
retrieving a small fraction of data. Using the CF algorithm, Clindex's recall
is much higher than that of some traditional index structures. Through
experiments we also learned that Clindex works well because it uses large
blocks, finds clusters more effectively, and searches the neighborhood more
intelligently by using the centroid information. These design principles can
also be employed by other schemes to improve their search performance. For
example, one may replace the CF clustering algorithm in this paper with
one that is more suitable for a particular dataset.
Finally, we summarize the limitations of Clindex and our future research
plan.
• Control parameter tuning: As we discussed in Section 3.3, determining
the value of ' is straightforward, but determining a good - value requires
experiments on the dataset. We have provided some parameter-tuning
guidelines in the paper. We plan to investigate a mathematical model
that allows the parameters to be determined directly. In this regard, we
believe that the PAC-NN [15] approach can be very helpful.
• Effective clustering: Clindex needs a good clustering algorithm to be
effective. There is a need to investigate the pros and cons of using existing
clustering algorithms for indexing high-dimensional data.
• Incremental clustering: In addition, most clustering algorithms are off-line
algorithms and the clusters can be sensitive to insertions and deletions.
We plan to extend Clindex to perform regional reclustering for supporting
insertions and deletions after initial clusters are formed.
• Measuring performance using other metrics: In this study, we use the classical
precision and recall to measure the performance of similarity search.
We plan to employ other metrics (e.g., [47]) to compare the performance
between different indexing schemes.
Acknowledgments
We would like to thank James Ze Wang for his comments on our draft and
his help with the TSVQ experiments. We would like to thank Kingshy Goh
and Marco Patella for their assistance in completing the PAC-NN experiments
on the M-tree structure. We would also like to thank the TKDE associate
editor, Paolo Ciaccia, and reviewers for their helpful comments about the
original version of this paper.
--R
Automatic subspace clustering of high dimensional data for data mining applications.
An optimal algorithm for approximate nearest neighbor searching in fixed dimensions.
The pyramid-technique: Towards breaking the curse of dimensionality
Neural Networks for Pattern Recognition.
Scaling EM (expectation-maximization) clustering to large databases.
Copy detection mechanisms for digital documents.
Toward perception-based image retrieval (extended version)
Searching near-replicas of images via clustering.
A generalized R-tree bulk-insertion.
PAC nearest neighbor queries: Approximate and controlled search in high-dimensional and metric spaces.
M-Tree: An efficient access method for similarity search in metric spaces.
An algorithm for approximate closest-point queries
A density-based algorithm for discovering clusters in large spatial databases with noise
Query by image and video content: the QBIC system.
Safeguarding and charging for information on the internet.
Database System Implementation.
Vector Quantization and Signal Compression.
Visual information retrieval.
R-trees: a dynamic index structure for spatial searching.
Ranking in spatial databases.
Similarity search in high dimensions via hashing.
Approximate nearest neighbors: Towards removing the curse of dimensionality.
Two algorithms for nearest-neighbor search in high dimensions.
Efficient search for approximate nearest neighbor in high dimensional spaces.
The EM algorithm
The magical number seven
Machine Learning.
Efficient and effective clustering methods for spatial data mining.
Nearest neighbor queries.
Adaptive color-image embedding for database navigation
A multi-resolution clustering approach for very large spatial databases
A fully automated content-based image query system
A quantitative analysis and performance study for similarity-search methods in high-dimensional spaces
Similarity indexing: Algorithms and performance.
Similarity indexing with the SS-Tree
Database Design (Second Edition).
Approximate similarity retrieval with m-trees
An efficient data clustering method for very large databases.
--TR
--CTR
Edward Chang , Kwang-Ting Cheng , Lihyuarn L. Chang, PBIR - perception-based image retrieval, ACM SIGMOD Record, v.30 n.2, p.613, June 2001
Jessica Lin , Eamonn Keogh , Stefano Lonardi , Jeffrey P. Lankford , Donna M. Nystrom, Visually mining and monitoring massive time series, Proceedings of the tenth ACM SIGKDD international conference on Knowledge discovery and data mining, August 22-25, 2004, Seattle, WA, USA
Edward Chang , Kwang-Ting Cheng , Wei-Cheng Lai , Ching-Tung Wu , Chengwei Chang , Yi-Leh Wu, PBIR: perception-based image retrieval-a system that can quickly capture subjective image query concepts, Proceedings of the ninth ACM international conference on Multimedia, September 30-October 05, 2001, Ottawa, Canada
Wei-Cheng Lai , Kingshy Goh , Edward Y. Chang, On scalability of active learning for formulating query concepts, Proceedings of the 1st international workshop on Computer vision meets databases, June 13-13, 2004, Paris, France
King-Shy Goh , Edward Y. Chang , Wei-Cheng Lai, Multimodal concept-dependent active learning for image retrieval, Proceedings of the 12th annual ACM international conference on Multimedia, October 10-16, 2004, New York, NY, USA
Anicet Kouomou Choupo , Laure Berti-quille , Annie Morin, Optimizing progressive query-by-example over pre-clustered large image databases, Proceedings of the 2nd international workshop on Computer vision meets databases, June 17-17, 2005, Baltimore, MD
Lijuan Zhang , Alexander Thomasian, Persistent clustered main memory index for accelerating k-NN queries on high dimensional datasets, Proceedings of the 2nd international workshop on Computer vision meets databases, June 17-17, 2005, Baltimore, MD
Sid-Ahmed Berrani , Laurent Amsaleg , Patrick Gros, Approximate searches: k-neighbors precision, Proceedings of the twelfth international conference on Information and knowledge management, November 03-08, 2003, New Orleans, LA, USA
Laurent Amsaleg , Patrick Gros , Sid-Ahmed Berrani, Robust Object Recognition in Images and the Related Database Problems, Multimedia Tools and Applications, v.23 n.3, p.221-235, August 2004
Nick Koudas , Beng Chin Ooi , Kian-Lee Tan , Rui Zhang, Approximate NN queries on streams with guaranteed error/performance bounds, Proceedings of the Thirtieth international conference on Very large data bases, p.804-815, August 31-September 03, 2004, Toronto, Canada
Simon Tong , Edward Chang, Support vector machine active learning for image retrieval, Proceedings of the ninth ACM international conference on Multimedia, September 30-October 05, 2001, Ottawa, Canada
King-Shy Goh , Beitao Li , Edward Chang, DynDex: a dynamic and non-metric space indexer, Proceedings of the tenth ACM international conference on Multimedia, December 01-06, 2002, Juan-les-Pins, France
Edward Chang , Beitao Li, MEGA---the maximizing expected generalization algorithm for learning complex query concepts, ACM Transactions on Information Systems (TOIS), v.21 n.4, p.347-382, October
Jessica Lin , Eamonn Keogh , Stefano Lonardi, Visualizing and discovering non-trivial patterns in large time series databases, Information Visualization, v.4 n.2, p.61-82, July 2005
Vassilis Athitsos , Marios Hadjieleftheriou , George Kollios , Stan Sclaroff, Query-sensitive embeddings, Proceedings of the 2005 ACM SIGMOD international conference on Management of data, June 14-16, 2005, Baltimore, Maryland
Keke Chen , Ling Liu, ClusterMap: labeling clusters in large datasets via visualization, Proceedings of the thirteenth ACM international conference on Information and knowledge management, November 08-13, 2004, Washington, D.C., USA
Vassilis Athitsos , Marios Hadjieleftheriou , George Kollios , Stan Sclaroff, Query-sensitive embeddings, ACM Transactions on Database Systems (TODS), v.32 n.2, p.8-es, June 2007
Ertem Tuncel , Hakan Ferhatosmanoglu , Kenneth Rose, VQ-index: an index structure for similarity searching in multimedia databases, Proceedings of the tenth ACM international conference on Multimedia, December 01-06, 2002, Juan-les-Pins, France
Keke Chen , Ling Liu, iVIBRATE: Interactive visualization-based framework for clustering large datasets, ACM Transactions on Information Systems (TOIS), v.24 n.2, p.245-294, April 2006
Domenico Cantone , Alfredo Ferro , Alfredo Pulvirenti , Diego Reforgiato Recupero , Dennis Shasha, Antipole Tree Indexing to Support Range Search and K-Nearest Neighbor Search in Metric Spaces, IEEE Transactions on Knowledge and Data Engineering, v.17 n.4, p.535-550, April 2005 | approximate search;high-dimensional index;clustering;similarity search |
628254 | Hashing Methods for Temporal Data. | External dynamic hashing has been used in traditional database systems as a fast method for answering membership queries. Given a dynamic set S of objects, a membership query asks whether an object with identity k is in (the most current state of) S. This paper addresses the more general problem of Temporal Hashing. In this setting, changes to the dynamic set are timestamped and the membership query has a temporal predicate, as in: "Find whether object with identity k was in set S at time t. " We present an efficient solution for this problem that takes an ephemeral hashing scheme and makes it partially persistent. Our solution, also termed partially persistent hashing, uses linear space on the total number of changes in the evolution of set S and has a small (O(\log_B(n/B))) query overhead. An experimental comparison of partially persistent hashing with various straightforward approaches (like external linear hashing, the Multiversion B-Tree, and the R*-tree) shows that it provides the faster membership query response time. Partially persistent hashing should be seen as an extension of traditional external dynamic hashing in a temporal environment. It is independent of the ephemeral dynamic hashing scheme used; while the paper concentrates on linear hashing, the methodology applies to other dynamic hashing schemes as well. | Introduction
Hashing has been used as a fast method to address membership queries. Given a set S of objects
distinguished by some identity attribute (oid), a membership query asks whether object with oid k
is in set S. Hashing can be applied either as a main memory scheme (all data fits in main-memory
[DKM+88, FNSS92]) or in database systems (where data is stored on disk [L80]). Its latter form
is called external hashing [EN94, R97] and a hashing function maps oids to buckets. For every
object of S, the hashing function computes the bucket number where the object is stored. Each
bucket has initially the size of a page. For this discussion we assume that a page can hold B objects.
Ideally, each distinct oid should be mapped to a separate bucket, however this is unrealistic as the
universe of oids is usually much larger than the number of buckets allocated by the hashing scheme.
When more than B oids are mapped to the same bucket, a (bucket) overflow occurs. Overflows are
dealt with in various ways, including rehashing (trying to find another bucket using another hashing
scheme) and/or chaining (creating a chain of pages under the overflown bucket).
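As a toy illustration of these ideas (our own sketch, with made-up values for B and M), the following Python fragment maps oids to M buckets and handles overflows by chaining pages of capacity B:

```python
B, M = 4, 8                               # page capacity (in oids) and number of buckets; toy values

buckets = [[[]] for _ in range(M)]        # each bucket is a chain of pages

def insert(oid):
    chain = buckets[oid % M]              # hashing function h(oid) = oid mod M
    if len(chain[-1]) == B:               # last page full -> bucket overflow
        chain.append([])                  # handle it by chaining a new page
    chain[-1].append(oid)

def member(oid):
    return any(oid in page for page in buckets[oid % M])
```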
If no overflows are present, finding whether a given oid is in the hashed set is trivial: simply
compute the hashing function for the queried oid and visit the appropriate bucket. If the object is
in the set it should be in that bucket. Hence, if the hashing scheme is perfect, membership queries
are answered in O(1) steps (just one I/O to access the page of the bucket). Overflows however
complicate the situation. If data is not known in advance, the worst case query performance of
hashing is large: it is linear in the size of set S, since all oids could be mapped to the same bucket
if a bad hashing scheme is used.
(G. Kollios is with the Dept. of Computer & Information Science, Polytechnic University, Brooklyn, NY 11201;
[email protected]. V. J. Tsotras is with the Dept. of Computer Science, University of California, Riverside.
This research was partially supported by NSF grant IRI-9509527 and by the New
York State Science and Technology Foundation as part of its Center for Advanced Technology program.)
Nevertheless, practice has shown that in the absence of
pathological data, good hashing schemes with few overflows and constant average case query
performance (usually each bucket has size of one or two pages) exist. This is one of the major
differences between hashing and index schemes. If a balanced tree (e.g., a B+ tree [C79]) is used instead,
answering a membership query takes logarithmic (in the size of S) time in the worst case. For many
applications (for example, in join computations [SD90]), a hashing scheme that provides expected
constant query performance (one or two I/O's) is preferable to the worst-case logarithmic query
performance (four or more I/O's if S is large) of balanced search trees.
Static hashing refers to schemes that use a predefined set of buckets. This is inefficient if the
set S is allowed to change (by adding or deleting objects from the set). If the set is too small and
the number of pre-allocated buckets too large, the scheme is using more space than needed. If the
set becomes too large but a small number of buckets is used, then overflows will become more
frequent, deteriorating the scheme's performance. What is needed is a dynamic hashing scheme which
has the property of allocating space proportional to the size of the hashed set S. Various external
dynamic hashing schemes have been proposed, among which linear hashing [L80] (or a variation)
appears to be commonly used.
Note that even if the set S evolves, traditional dynamic hashing is ephemeral, i.e., it answers
membership queries on the most current state of set S. In this paper we address a more general
problem. We assume that changes to the set S are timestamped by the time instant when they
occurred and we are interested in answering membership queries for any state that set S possessed.
Let S(t) denote the state (collection of objects) set S had at time t. Then the membership query has
a temporal predicate, as in: "given oid k and time t, find whether k was in S(t)". We term this problem
Temporal Hashing and the new query a temporal membership query.
Motivation for the temporal hashing problem stems from applications where current as well as
past data is of interest. Examples include: accounting, billing, marketing, tax-related, social/
medical, and financial/stock-market applications. Such applications cannot be efficiently
maintained by conventional databases which work in terms of a single (usually the most current)
logical state. Instead, temporal databases were proposed [SA85] for time varying data. Two time
dimensions have been used to model reality, namely valid-time and transaction-time [J+94]. Valid
time denotes the time when a fact is valid in reality. Transaction time is the time when a fact is
stored in the database. Transaction time is consistent with the serialization order of transactions
(i.e., it is monotonically increasing) and can be implemented using the commit times of
transactions [S94]. In the rest, the terms time or temporal refer to transaction-time.
Assume that for every time t when S(t) changes (by adding/deleting objects) we could have a
good ephemeral dynamic hashing scheme (say linear hashing) h(t) that maps efficiently (with few
overflows) the oids in S(t) into a collection of buckets b(t). One straightforward solution to the
temporal hashing problem would be to separately store each collection of buckets b(t) for each t.
To answer a temporal membership query for oid k and time t we only need to apply h(t) on k and
access the appropriate bucket of b(t). This would provide excellent query performance, as it takes
advantage of the good linear hashing scheme h(t) used for each t, but the space requirements are
prohibitively large! If n denotes the number of changes in S's evolution, flushing each b(t) to the
disk could easily create O(n^2) space.
Instead we propose a more efficient solution that has similar query performance as above but
uses space linear to n. We term our solution partially persistent hashing as it reduces the original
problem into a collection of partially persistent 1 sub-problems. We apply two approaches to solve
these sub-problems. The first approach "sees" each sub-problem as an evolving subset of set S and
is based on the Snapshot Index [TK95]. The second approach "sees" each sub-problem as an
evolving sublist whose history is efficiently kept. In both cases, the partially persistent hashing
scheme "observes" and stores the evolution of the ephemeral hashing in an efficient way that
enables fast access to any h(t) and b(t). (We note that partial persistence fits nicely with a
transaction-time database environment because of the always increasing characteristic of transaction time.)
We compare partially persistent hashing with three other approaches. The first one uses a
traditional dynamic hashing function to map all oids ever created during the evolution of S(t). This
solution does not distinguish among the many copies of the same oid k that may have been created
as time proceeds. A given oid k can be added and deleted from S many times, creating copies of k
each associated with a different time interval. Because all such copies will be hashed on the same
bucket, bucket reorganizations will not solve the problem (this was also observed in [AS86]).
These overflows will eventually deteriorate performance especially as the number of copies
increases. The second approach sees each oid-interval combination as a multidimensional object
and uses an R-tree to store it. The third approach assumes that a B+ tree is used to index each S(t)
and makes this B+ tree partially persistent [BGO+96, VV97, LS89]. Our experiments show that
the partially persistent hashing outperforms the other three competitors in membership query
performance while having a minimal space overhead.
(1. A structure is called persistent if it can store and access its past states [DSST89]. It is called partially persistent if
the structure evolves by applying changes to its "most current" state.)
The partially persistent B+ tree [BGO+96, VV97, LS89] is technically the most interesting
among the competitor approaches. It corresponds to extending an ephemeral B+ tree to a temporal
environment. Like the ephemeral B+ tree, it supports worst-case logarithmic query time, but for
temporal queries. It was an open problem whether such an efficient temporal extension existed for
hashing schemes. The work presented here answers this question positively. As in a non-temporal
environment, partially persistent hashing provides faster (expected) query performance
for temporal membership queries than indexing. This result reasserts our conjecture [KTF98]
that temporal problems that support transaction-time can be solved by taking an efficient solution
for the corresponding non-temporal problem and making it partially persistent.
The rest of the paper is organized as follows: section 2 presents background and previous work
as related to the temporal index methods that are of interest here; section 3 describes the basics of
the Snapshot Index and Linear Hashing. The description of partially persistent hashing appears in
section 4. Performance comparisons are presented in section 5, while conclusions and open
problems for further research appear in section 6.
2. Background and Previous Work
Research in temporal databases has shown an immense growth in recent years [OS95]. Work on
temporal access methods has concentrated on indexing. A worst case comparison of temporal
indexes appears in [ST97]. To the best of our knowledge, no approach addresses the hashing
problem in a temporal environment. Among existing temporal indexes, four are of special interest
for this paper, namely: the Snapshot Index [TK95], the Time-Split B-tree (TSB) [LS89], the
Multiversion B-Tree (MVBT) [BGO+96] and the Multiversion Access Structure (MVAS) [VV97].
A simple model of temporal evolution follows. Assume that time is discrete, described by the
succession of non-negative integers. Consider for simplicity an initially empty set S. As time
proceeds, objects can be added to or deleted from this set. When an object is added to S, and until
(if ever) it is deleted from S, it is called "alive". This is represented by associating with each object
a semi-closed interval, or lifespan, of the form: [start_time, end_time). While an object is alive it
cannot be re-added in S, i.e. S contains no duplicates. Deletions can be applied to alive objects.
When an object is added at t, its start_time is t but its end_time is yet unknown. Thus its lifespan
interval is initiated as [t, now), where now is a variable representing the always increasing current
time. If this object is later deleted from S, its end_time is updated from now to the object's deletion
time. Since an object can be added and deleted many times, objects with the same oid may exist
but with non-intersecting lifespan intervals (i.e., such objects were alive at different times). The
state of the set at a given time t, namely S(t), is the collection of all alive objects at time t.
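To make the evolution model concrete, here is a small Python sketch of our own (not code from the paper) that records lifespans and reconstructs S(t):

```python
from dataclasses import dataclass

NOW = float("inf")          # stands in for the ever-increasing variable `now`

@dataclass
class Record:
    oid: int
    start: int              # time the object was added to S
    end: float = NOW        # deletion time, or NOW while the object is alive

class EvolvingSet:
    """Toy model of the evolution described above (illustration only)."""
    def __init__(self):
        self.records = []   # full history, one record per lifespan
        self.alive = {}     # oid -> its currently open record

    def add(self, oid, t):
        assert oid not in self.alive, "no duplicates among alive objects"
        rec = Record(oid, t)
        self.records.append(rec)
        self.alive[oid] = rec

    def delete(self, oid, t):
        rec = self.alive.pop(oid)   # close the lifespan as [start, t)
        rec.end = t

    def state_at(self, t):
        """S(t): all oids whose lifespan [start, end) contains t."""
        return {r.oid for r in self.records if r.start <= t < r.end}
```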
Assume that this evolution is stored in a transaction-time database, in a way that when a change
happens at time t, a transaction with the same timestamp t updates the database. There are various
queries we may ask on such a temporal database. A common query, is the pure-snapshot problem
(also denoted as "*//-/S" in the proposed notation of [TJS98]): "given time t find S(t)". Another
common query is the range-snapshot problem ("R//-/S"): "given time t and range of oids r, find all
alive objects in S(t) with oids in range r".
[ST97] categorizes temporal indexes according to what queries they can answer efficiently and
compares their performance using three costs: space, query time and update time (i.e., the time
needed to update the index for a change that happened on set S). Clearly, an index that solves the
range-snapshot query can also solve the pure-snapshot query (if no range is provided). However,
as indicated in [TGH95], a method designed to address primarily the pure-snapshot query does not
need to order incoming changes according to oid. Note that in our evolution model, changes arrive
in increasing time order but are unordered on oid. Hence such method could enjoy faster update
time than a method designed for the range-snapshot query. The latter orders incoming changes on
oid so as to provide fast response to range-snapshot queries. Indeed, the Snapshot Index solves the
pure-snapshot query in O(log_B(n/B) + a/B) I/O's, using O(n/B) space and only O(1) update
time per change (in the expected amortized sense [CLR90] because a hashing scheme is
employed). This is the I/O-optimal solution for the pure snapshot query. Here, a corresponds to the
number of alive objects in the queried state S(t).
For the range-snapshot query three efficient methods exist, namely, the TSB tree, the MVBT
tree and the MVAS structure. They all assume that there exists a B+ tree indexing each S(t); as time
proceeds and set S evolves the corresponding B+ tree evolves, too. They differ on the algorithms
provided to efficiently store and access the B+ tree evolution. Answering a range-snapshot query
about time t implies accessing the B+ tree as it was at time t and search through its nodes to find
the oids in the range of interest. Conceptually, these approaches take a B+ tree and make it partially
persistent [DSST89]. The resulting structure has the form of a graph as it includes the whole history
of the evolving B+ tree, but it is able to efficiently access any past state of this B+ tree.
Both the MVBT and MVAS solve the range-snapshot query in O(log_B(m/B) + a/B) I/O's,
using O(n/B) space and O(log_B(m/B)) update per change (in the amortized sense [CLR90]).
This is the I/O-optimal solution for the range-snapshot query. Here m denotes the number of
"alive" objects when an update takes place and a denotes the answer size to the range-snapshot
query, i.e., how many objects from the queried S(t) have oids in the query range r. The MVAS
structure improves the merge/split policies of the MVBT, thus resulting in a smaller constant in the
space bound. The TSB tree is another efficient solution to the range-snapshot query. In practice it
is more space efficient than the MVBT (and MVAS), but it can guarantee worst case query
performance only when the set evolution is described by additions of new objects or updates on
existing objects. Since for the purposes of this paper we assume that object deletions are frequent
we use the MVBT instead of a TSB.
3. Basics of the Snapshot Index and Linear Hashing
For the purposes of partially persistent hashing we need the fundamentals from the Snapshot Index
and ephemeral Linear Hashing, which are described next. For detailed descriptions we refer to
[TK95] and [L80, S88, EN94, R97], respectively.
3.1 The Snapshot Index
This method [TK95] solves the pure-snapshot problem using three basic structures: a balanced tree
(time-tree) that indexes data pages by time, a pointer structure (access-forest) among the data pages
and a hashing scheme. The time-tree and the access-forest enable fast query response while the
hashing scheme is used for update purposes.
We first discuss updates. Objects are stored sequentially in data pages in the same order as they
are added to the set S. In particular, when a new object with oid k is added to the set at time t, a new
record of the form <k, [t, now)> is created and is appended in a data page. When this data page
becomes full, a new data page is used and so on. At any given instant there is only one data page
that stores (accepts) records, the acceptor (data) page. The time when an acceptor page was created
(along with the page address) is stored in the time tree. As acceptor pages are created sequentially
the time-tree is easily maintained (amortized O(1) I/O to index each new acceptor page). For object
additions, the sequence of all data pages resembles a regular log but with two main differences: (1)
on the way deletion updates are managed and (2) on the use of additional links (pointers) among
the data pages that create the access-forest.
O
log a B
log
I/O
Object deletions are not added sequentially; rather they are in-place updates. When object k is
deleted at some time t' (> t), its record is first located and then updated from <k, [t, now)> to <k, [t, t')>.
Object records are found using their oids through the hashing scheme. When an object is added to S,
its oid and the address of the page that stores the object's record are inserted in the hashing
scheme. If this object is deleted the hashing scheme is consulted, the object's record is located and
its interval is updated. Then this object's oid is removed from the hashing function.
Storing only one record for each object suggests that for some time instant t the records of the
objects in S(t) may be dispersed in various data pages. Accessing all pages with alive objects at t,
would require too much I/O (if S(t) has a objects, we may access O(a) pages). Hence the records
of alive objects must be clustered together (ideally in a/B pages). To achieve good clustering we
introduce copying but in a "controlled" manner, i.e., in a way that the total space remains
. To explain the copying procedure we need to introduce the concept of page usefulness.
Consider a page after it gets full of records (i.e., after it stops being the acceptor page) and the
number of "alive" records it contains (records with intervals ending to now). For all time instants
that this page contains uB alive records ( ) is called useful. This is because for these times
t the page contains a good part of the answer for S(t). If for a pure-snapshot query about time t we
are able to locate the useful pages at that time, each such page will contribute at least uB objects to
the answer. The usefulness parameter u is a constant that tunes the behavior of the Snapshot Index.
Acceptor pages are special. While a page is the acceptor page it may contain fewer than uB
alive records. By definition we also call a page useful for as long as it is the acceptor page. Such a
page may not give enough answer to justify accessing it but we still have to access it! Nevertheless,
for each time instant there exists exactly one acceptor page.
Let [u.start_time, u.end_time) denote a page's usefulness period; u.start_time is the time the
page started being the acceptor page. When the page gets full it either continues to be useful (and
for as long as the page has at least uB alive records) or it becomes non-useful (if at the time it
became full the page had less than uB alive records). The next step is to cluster the alive records
for each t among the useful pages at t. When a page becomes non-useful, an artificial copy occurs
that copies the alive records of this page to the current acceptor page (as in a timesplit [E86, LS89]).
The non-useful page behaves as if all its objects are marked as deleted but copies of its alive records
can still be found from the acceptor page. Copies of the same record contain subsequent non-overlapping
intervals of the object's lifespan. The copying procedure reduces the original problem
O
of finding the alive objects at t into finding the useful pages at t. The solution of the reduced
problem is facilitated through the access-forest.
The access-forest is a pointer structure that creates a logical "forest of trees" among the data
pages. Each new acceptor page is appended at the end of a doubly-linked list and remains in the
list for as long as it is useful. When a data page d becomes non-useful: (a) it is removed from the
list and (b) it becomes the next child page under the page c preceding it in the list (i.e., c was the
left sibling of d in the list when d became non-useful). As time proceeds, this process will create
trees of non-useful data pages rooted under the useful data pages of the list. The access-forest has
a number of properties that enable fast searching for the useful pages at any time. [TK95] showed
that starting from the acceptor page at t all useful pages at t can be found in at most twice as many
I/O's (in practice much less I/O's are needed). To find the acceptor page at t the balanced time-tree
is searched (which corresponds to the logarithmic part of the query time). In practice this search is
very fast as the height of the balanced tree is small (it stores only one entry per acceptor page which
is clearly O(n/B)). The main part of the query time is finding the useful pages. The performance of
the Snapshot Index can be fine tuned by changing parameter u. Large u implies that acceptor pages
become non-useful faster, thus more copies are created which increases the space but also clusters
the answer into smaller number of pages, i.e., less query I/O.
3.2 Linear Hashing
Linear Hashing (LH) is a dynamic hashing scheme that adjusts gracefully to data inserts and
deletes. The scheme uses a collection of buckets that grows or shrinks one bucket at a time.
Overflows are handled by creating a chain of pages under the overflown bucket. The hashing
function changes dynamically and at any given instant there can be at most two hashing functions
used by the scheme.
More specifically, let U be the universe of oids and h {0,.,M-1} be the initial hashing
function that is used to load set S into M buckets (for example: h 0 Insertions and
deletions of oids are performed using h 0 until the first overflow happens. When this first overflow
occurs (it can occur in any bucket), the first bucket in the LH file, bucket 0, is split (rehashed) into
two buckets: the original bucket 0 and a new bucket M, which is attached at the end of the LH file.
The oids originally mapped into bucket 0 (using function h 0 ) are now distributed between buckets
using a new hashing function h 1 (oid). The next overflow will attach a new bucket M+1
and the contents of bucket 1 will be distributed using h 1 between buckets 1 and M+1. A crucial
property of h 1 is that any oids that were originally mapped by h 0 to bucket j should
be remapped either to bucket j or to bucket j+M. This is a necessary property for linear hashing to
work. An example of such hashing function is: h 1
Further overflows will cause additional buckets to split in a linear bucket-number order. A
variable p indicates which is the bucket to be split next. Conceptually the value of p denotes which
of the two hashing functions that may be enabled at any given time applies to which buckets.
Initially p=0, which means that only one hashing function (h0) is used and it applies to all buckets in
the LH file. After the first overflow in the above example, p=1 and h1 is introduced. Suppose that
an object with oid k is inserted after the second overflow (i.e., when p=2). First the older hashing
function is applied on k. If h0(k) >= p then the bucket h0(k) has not been split yet and k is
stored in that bucket. Otherwise (h0(k) < p) the bucket provided by h0 has already been split, so
the newer hashing function is used and k is stored in bucket h1(k). Searching for an oid is
similar, that is, both hashing functions may be involved.
After enough overflows, all original M buckets will be split. This marks the end of splitting-
round 0. During round 0, p went subsequently from bucket 0 to bucket M-1. At the end of round 0
the LH file has a total of 2M buckets. Hashing function h0 is no longer needed as all 2M buckets
can be addressed by hashing function h1 (note: h1(oid) = oid mod 2M). Variable p is reset to 0 and
a new round, namely splitting-round 1, is started. The next overflow (in any of the 2M buckets) will
introduce hashing function h2(oid) = oid mod 2^2 M. This round will last until bucket 2M-1 is split.
In general, round i starts with hashing functions hi(oid) and hi+1(oid). The round ends when all the
round's buckets are split. For our purposes we use hi(oid) = oid mod 2^i M, i = 1, 2, ...; these functions
are called split functions of h0. A split function hj has the properties: (i) hj(oid) lies in {0, ..., 2^j M - 1},
and (ii) for any oid, either hj+1(oid) = hj(oid) or hj+1(oid) = hj(oid) + 2^j M.
At any given time, the linear hashing scheme is completely identified by the round number and
variable p. Given round i and variable p, searching for oid k is performed using hi if hi(k) >= p;
otherwise hi+1 is used. During round i the value of p is increased by one at each overflow; when p
reaches 2^i M, the next round i+1 starts and p is reset to 0.
A split performed whenever an overflow occurs is an uncontrolled split. Let l denote the LH
file's load factor, i.e., l = |S|/(B*R), where |S| is the current number of oids in the LH file (size of
set S), B is the page size (in number of oids) and R the current number of buckets in the file. The
load factor achieved by uncontrolled splits is usually between 50-70%, depending on the page size
and the oid distribution [L80]. In practice, to achieve a higher storage utilization a split is instead
performed when an overflow occurs and the load factor is above some upper threshold g. This is
a controlled split and can typically achieve 95% utilization. Deletions in set S will cause the LH
file to shrink. Buckets that have been split can be recombined if the load factor falls below some
lower threshold f . Then two buckets are merged together; this operation is the reverse of
splitting and occurs in reverse linear order. Practical values for f and g are 0.7 and 0.9, respectively.
4. Partially Persistent Hashing
We first describe the evolving-set approach which is based on the Snapshot Index; the evolving-
list approach will follow.
4.1 The Evolving-Set Approach
Using partial persistence, the temporal hashing problem will be reduced into a number of sub-problems
for which efficient solutions are known. Assume that an ephemeral linear hashing
scheme (as the one described in section 3) is used to map the objects of S(t). As S(t) evolves with
time the hashing scheme is a function of time, too. Let LH(t) denote the linear hashing file as it is
at time t. There are two basic time-dependent parameters that identify LH(t) for each t, namely i(t)
and p(t). Parameter i(t) is the round number at time t. The value of parameter p(t) identifies the next
bucket to be split.
An interesting property of linear hashing is that buckets are reused; when round i+1 starts it has
double the number of buckets of round i but the first half of the bucket sequence is the same since
new buckets are appended in the end of the file. Let b total denote the longest sequence of buckets
ever used during the evolution of S(t) and assume that b total consists of buckets: 0,1,2,.
Let b(t) be the sequence of buckets used at time t. The above observation implies that for all t, b(t)
is a prefix of b total. In addition, the number of buckets used at t is determined by i(t) and p(t)
(namely |b(t)| = 2^i(t) M + p(t)).
Consider bucket b j from the sequence b total and observe the collection of
objects that are stored in this bucket as time proceeds. The state of bucket b j at time t, namely b j (t),
is the set of oids stored in this bucket at t. Let |b j (t)| denote the number of oids in b j (t). If all states
b j (t) can somehow be reconstructed for each bucket b j , a temporal membership query
for oid k at time t can be answered in two steps:
(1) find the bucket b j to which oid k would have been mapped by the hashing scheme at t, and,
(2) search through the contents of b j (t) until k is found.
The first step requires identifying which hashing scheme was used at time t. The evolution of
the hashing scheme LH(t) is easily maintained if a record of the form < t, i(t), p(t) > is appended to
an array H, for those instants t where the values of i(t) and/or p(t) change. Given any t, the hashing
function used at t is identified by simply locating t inside the time-ordered H in a logarithmic
search.
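For illustration, the lookup of step (1) can be sketched as follows (the record layout and helper names are ours; H holds tuples (t, i, p) in increasing time order):

```python
import bisect

def scheme_at(H, t):
    """Return (i, p) of the ephemeral linear hashing scheme in effect at time t.
    H is a time-ordered list of (time, i, p) records; the search is logarithmic."""
    pos = bisect.bisect_right(H, (t, float("inf"), float("inf"))) - 1
    _, i, p = H[pos]
    return i, p

def bucket_at(H, t, oid, M):
    """Bucket to which oid was mapped at time t, using split functions h_i(oid) = oid mod 2^i * M."""
    i, p = scheme_at(H, t)
    b = oid % ((2 ** i) * M)
    return b if b >= p else oid % ((2 ** (i + 1)) * M)
```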
The second step implies accessing b j (t). The obvious way would be to store each b j (t), for those
times that b j (t) changed. As explained earlier this would easily create quadratic space requirements.
The updating per change would also suffer since the I/O to store the current state of b j would be
proportional to the bucket's current size |b j (t)|, i.e., O(|b j (t)|/B) I/O's.
By observing the evolution of bucket b j we note that its state changes as an evolving set by
adding or deleting oids. Each such change can be timestamped with the time instant it occurred. At
times the ephemeral linear hashing scheme may apply a rehashing procedure that remaps the
current contents of bucket b j to bucket b j and some new bucket b r . Assume that such a rehashing
occurred at some time t' and its result is a move of v oids from b j to b r . For the evolution of b j (b r ),
this rehashing is viewed as a deletion (respectively an addition) of the v oids at time t', i.e., all such
deletions (additions) are timestamped with the same time t' for the corresponding object's
evolution.
Figure 1 shows an example of the ephemeral hashing scheme at two different time instants. For
simplicity M = 5 and B = 2. Figure 2 shows the corresponding evolution of set S and the evolutions
of various buckets. At time t = 21 the addition of oid 8 on bucket 3 causes the first overflow, which
rehashes the contents of bucket 0 between bucket 0 and bucket 5. As a result oid 15 is moved to
bucket 5. For bucket 0's evolution this change is considered as a deletion of oid 15 at t = 21; for
bucket 5 it is an addition of oid 15 at the same instant t = 21.
If b j (t) is available, searching through its contents for oid k is performed by a linear search. This
process is lower bounded by |b j (t)|/B I/O's since that many pages are at least needed to
store b j (t). (This is similar with traditional hashing where a query about some oid is translated into
searching the pages of a bucket; this search is also linear and continues until the oid is found or all
the bucket's pages are searched.) What is therefore needed is a method which for any given t can
reconstruct b j (t) with effort proportional to |b j (t)|/B I/O's. Since every bucket b j behaves like a
set evolving over time, the Snapshot Index [TK95] can be used to store the evolution of each b j and
reconstruct any b j (t) with the required efficiency.
We can thus conclude that given an evolving set S, partially persistent hashing answers a
temporal membership query about oid k at time t, with almost the same query time efficiency (plus
a small overhead) as if a separate ephemeral hashing scheme existed on each S(t). A good
ephemeral hashing scheme for S(t) would require an expected O(1) I/O's to answer a membership
query. This means that on average each bucket b j (t) used for S(t) would be of limited size, or
equivalently, |b j (t)|/B corresponds to just a few pages (in practice one or two pages). In
perspective, partially persistent hashing will reconstruct b j (t) in O(|b j (t)|/B) I/O's, which from
the above is expected O(1).
The small overhead incurred by persistent hashing is due to the fact that it stores the whole
history of S's evolution and not just a single state S(t). Array H stores an entry every time a page
overflow occurs. Even if all changes are new oid additions, the number of overflows is upper
bounded by O(n/B). Hence array H indexes at most O(n/B) pages and searching it takes
O(log_B(n/B)) I/O's.
Having identified the hashing function the appropriate bucket b j is pinpointed. Then time t must
be searched in the time-tree associated with this bucket. The overhead implied by this search is
bounded by O(log_B(n j /B)), where n j corresponds to the number of changes recorded in bucket b j 's
history.
Figure 1: Two instants in the evolution of an ephemeral hashing scheme. (a) Before the first overflow
no split has occurred. (b) The addition of oid 8 (mapped to bucket 3) causes an overflow; bucket 0 is
rehashed using h1 and oid 15 moves to bucket 5.
In practice, we expect that the n changes in S's evolution will be concentrated on the
first few buckets in the b total sequence, simply because a prefix of this sequence is always used. If
we assume that most of S's history is recorded in the first 2^i M buckets (for some i), n j behaves as
O(n/(2^i M)) and therefore searching b j 's time-tree is rather fast.
A logarithmic overhead that is proportional to the number of changes n, is a common
characteristic in query time of all temporal indexes that use partial persistence. The MVBT (or
MVAS) tree will answer a temporal membership query about oid k on time t in O(log_B(n/B))
I/O's. We note that MVBT's logarithmic bound contains two searches. First, the appropriate B-tree
that indexes S(t) is found. This is a fast search and is similar to identifying the hashing function and
the bucket to search in persistent hashing. The second logarithmic search in MVBT is for finding
k in the tree that indexes S(t) and is logarithmic on the size of S(t). Instead persistent hashing finds
oid k in expected O(1) I/O's.
Figure 2: The detailed evolution of set S until time t = 25 (+/- denote oid addition/deletion respectively).
Changes assigned to the histories of three buckets (0, 3 and 5) are shown; the hashing scheme of Figure 1 is
assumed. Addition of oid 8 in S at t = 21 causes the first overflow. Moving oid 15 from bucket 0 to
bucket 5 is seen as a deletion (record <15, [9, 21)> in bucket 0's history) and an addition (record
<15, [21, now)> in bucket 5's history), respectively. The records stored in each bucket's history
are also shown in the figure; for example, at t = 25, oid 10 is deleted from set S, which updates the lifespan
of this oid's corresponding record in bucket 0's history from <10, [1, now)> to <10, [1, 25)>.
4.1.1 Update and Space Analysis. We proceed with the analysis of the update and space
characteristics of partially persistent hashing. It suffices to show that the scheme uses O(n/B) space.
An O(1) amortized expected update processing per change can then be derived. Clearly array H
satisfies the space bound. Next we show that the space used by bucket histories is also bounded by
O(n/B). Recall that n corresponds to the total number of real object additions/deletions in set S's
evolution. However, the rehashing process moves objects among buckets. For the bucket histories,
each such move is seen as a new change (deletion of an oid from the previous bucket and
subsequent addition of this oid to the new bucket). It must thus be shown that the number of moves
due to rehashing is still bounded by the number of real changes n. For this purpose we will use two
lemmas.
Lemma 1: For N overflows to occur at least NB+1 real object additions are needed.
Proof: The proof is based on induction on the number of overflows. (1) For the creation of the first
(N=1) overflow at least B+1 oid additions are needed. This happens if all such oids are mapped to
the same bucket that can hold only B oids (each bucket starts with one empty page). (2) Assume
that for the N first overflows NB+1 real object additions are needed. (3) It must be proved that the
first (N+1) overflows need at least (N+1)B+1 oid additions. Assume that this is not true, i.e., that only
(N+1)B oid additions are enough. We will show that a contradiction results from this assumption.
According to (2) the first N of the (N+1) overflows needed NB+1 real object additions. Hence there
are B-1 remaining oid additions to create an extra overflow. Consider the bucket where the last
(N-th) overflow occurred. This bucket has a page with exactly one record (if it had fewer there would be no
overflow; if it had more, the N-th overflow could have been achieved with one less oid). For this
page to overflow we need at least B more oid additions, i.e., the remaining B-1 are not enough for
the (N+1)-th overflow. Which results in a contradiction and the Lemma is proved. (Note that only
the page where the N-th overflow occurred needs to be considered. Any other page that has space
for additional oids cannot have more than one oid already, since the overflow that occurred in that
bucket could have been achieved with less oids). q
The previous Lemma lower bounds the number of real oid additions from N overflows. The
next Lemma upper bounds the total number of copies (due to oid rehashings) that can happen from
overflows.
Lemma 2: N overflows can create at most N(B+1) oid copies.
Proof: We will use again induction on the number of overflows. (1) The first overflow can create at
most B+1 oid copies. This happens if when the first overflow occurs all the oids in that bucket are
remapped to a new bucket. The deleted records of the remapped B+1 oids are still stored in the
history of the original bucket. (2) Assume that the N first overflows create at most N(B+1) oid copies.
(3) It must be shown that the first (N+1) overflows can create at most (N+1)(B+1) oid copies. We
will use contradiction. Hence let's assume that this is not true, i.e., the first (N+1) overflows can
create more copies. Let (N+1)(B+1)+x be that number, where x >= 1. Consider the last N overflows
in the sequence of overflows. From (2) it is implied that these overflows have already created at
most N(B+1) oid copies. Hence there are at least B+1+x additional copies to be created by the first
overflow. However this is a contradiction since from (1) the first overflow can only create at most
B+1 copies. q
We are now ready to prove the basic theorem about space and updating.
Theorem 1: Partially Persistent Hashing uses space proportional to the total number of real
changes and updating that is amortized expected O(1) per change.
Proof: Assume for simplicity that set S evolves by only adding oids (oid additions create new
records, overflows and hence more copying; deletions do not create overflows). As overflows
occur, linear hashing proceeds in rounds. In the first round variable p starts from bucket 0 and in
the end of the round it reaches bucket M-1. At that point 2M buckets are used and all copies
(remappings) from oids of the first round have been created. Since M overflows have occurred,
lemmas 1 and 2 imply that there must have been at least MB+1 real oid additions and at most
M(B+1) copies. By construction, these copies are placed in the last M buckets.
For the next round, variable p will again start from bucket 0 and will extend to bucket 2M-1.
When p reaches bucket 2M-1, there have been 2M new overflows. These new overflows imply that
there must have been at least 2MB+1 new real oid additions and at most 2M(B+1) copies created
from these additions. There are also the M(B+1) copy oids from the first round, which for the
purposes of the second round are "seen" as regular oids. At most each such copy oid can be copied
once more during the second round (the original oids from which these copies were created in the
first round, cannot be copied again in the second round as they represent deleted records in their
corresponding buckets). Hence the maximum number of copies created during the second round is
2M(B+1) + M(B+1).
The total number of copies C_total created after the i-th round (i = 0,1,2,...) is upper bounded by:
C_total <= {M(B+1)} + {2M(B+1) + M(B+1)} + ... + {2^i M(B+1) + ... + 2M(B+1) + M(B+1)},
where each {} represents the copies per round. Equivalently:
C_total <= M(B+1)(2^(i+2) - i - 3)   (i).
After the i-th round the total number of real oid additions A_total is lower bounded by:
A_total >= {MB+1} + {2MB+1} + ... + {2^i MB + 1}.
Equivalently:
A_total >= MB(2^(i+1) - 1) + (i+1)   (ii).
From (i), (ii) it can be derived that there exists a positive constant const such that C_total <= const * A_total,
and since A_total is bounded by the total number of changes n, we have that C_total is
O(n). To prove that partially persistent hashing has O(1) expected amortized updating per
change, we note that when a real change occurs it is directed to the appropriate bucket where the
structures of the Snapshot Index are updated in O(1) expected time. Rehashings have to be
carefully examined. This is because a rehashing of a bucket is caused by a single real oid addition
(the one that created the overflow) but it results into a "bunch" of copies made to a new bucket (at
worse the whole current contents of the rehashed bucket are sent to the new bucket). However,
using the space bound we can prove that any sequence of n real changes can at most create O(n)
copies (extra work) or equivalently O(1) amortized effort per real change. q
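As a sanity check of the two bounds (this is only an illustration of the inequalities above, not part of the original analysis), the per-round sums can be evaluated numerically; the ratio of copies to real additions stays below the constant 2(B+1)/B:

```python
def bounds_after_round(M, B, i):
    """Evaluate bounds (i) and (ii): copies created and real additions after round i."""
    C_total = sum((2 ** (r + 1) - 1) * M * (B + 1) for r in range(i + 1))
    A_total = sum((2 ** r) * M * B + 1 for r in range(i + 1))
    return C_total, A_total

for rounds in (1, 5, 10, 20):
    C, A = bounds_after_round(M=10, B=25, i=rounds)
    print(f"after round {rounds:2d}: C_total/A_total = {C / A:.3f}")  # stays below 2*(B+1)/B = 2.08
```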
4.1.2 Optimization Issues. Optimizing the performance of partially persistent hashing involves
the load factor l of the ephemeral Linear Hashing and the usefulness parameter u of the Snapshot
Index. The load factor is defined as l(t) = |S(t)|/(B R(t)), where |S(t)| and R(t) denote the size of the evolving set S and the number
of buckets used at t (clearly R(t) is at most the length of b total). A good ephemeral linear hashing scheme will try to
equally distribute the oids among buckets for each t. Hence on average the size (in oids) of each
bucket b j (t) will satisfy: |b j (t)| ~ l(t) B.
One of the advantages of the Snapshot Index is the ability to tune its performance through
usefulness parameter u. The index will distribute the oids of each b j (t) among a number of useful
pages. Since each useful page (except the acceptor page) contains at least uB alive oids, the oids in
b j (t) will be occupying at most |b j (t)|/(uB) pages, which is about l/u pages. Ideally, we would like the
answer to a snapshot query to be contained in a single page (plus probably one more for the
acceptor page). Then a good optimization choice is to keep l <= u.
a measure of the size of a bucket ("alive" oids) at each time. These alive oids are stored into the
data pages of the Snapshot Index. Recall that an artificial copy happens if the number of alive oids
in a data page falls below uB. At that point the remaining uB-1 alive oids of this page are copied
to a new page. By keeping l below u we expect that the alive oids of the split page will be copied
in a single page which minimizes the number of I/O's needed to find them.
On the other hand, the usefulness parameter u affects the space used by the Snapshot Index and
in return the overall space of the persistent hashing scheme. As mentioned in section 3, higher
values of u imply frequent time splits, i.e., more page copies and thus more space. Hence it would
be advantageous to keep u low but this implies an even lower l. In return, lower l would mean that
the buckets of the ephemeral hashing are not fully utilized. This is because low l causes set S(t) to
be distributed into more buckets not all of which may be fully occupied.
At first this requirement seems contradictory. However, for the purposes of partially persistent
hashing, having low l is still acceptable. Recall that the low l applies to the ephemeral hashing
scheme whose history the partially persistent hashing observes and accumulates. Even though at
single time instants the b j (t)'s may not be fully utilized, over the whole time evolution many object
oids are mapped to the same bucket. What counts for the partially persistent scheme is the total
number of changes accumulated per bucket. Due to bucket reuse, a bucket will gather many
changes creating a large history for the bucket and thus justifying its use in the partially persistent
scheme. Our findings regarding optimization will be verified through the experimentation results
that appear in the next section.
4.2 The Evolving-List Approach
The elements of bucket b j (t) can also be viewed as an evolving list lb j (t) of alive oids. Such an
observation is consistent with the way buckets are searched in ephemeral hashing, i.e., linearly, as
if a bucket's contents belong to a list. This is because in practice each bucket is expected to be about
one or two pages long. Accessing the bucket state b j (t) is then reduced to reconstructing lb j (t).
Equivalently, the evolving list of oids should be made partially persistent.
When bucket b j is first created, an empty page is assigned to list lb j . A list page has two areas.
The first area is used to store oid records and its size is B r where B r < B. The second area (of size
B - B r ) accommodates an extra structure (array NT) to be explained shortly. When the first oid k is
added on bucket b j at time t, a record <k, [t, now)> is appended in the first list page. Additional oid
insertions will create record insertions in the list and more pages are appended as needed. If oid k
is deleted at time t' from the bucket, its record in the list is found (by a serial search among the list
pages) and its end_time is updated from now to t' (a logical deletion).
As with the Snapshot Index, we need a notion of page usefulness. A page is called useful as
long as it contains at least V alive objects or while it is the last page in the list. Otherwise it is a
non-useful page. For the following discussion we assume that V is a fixed threshold (its value for our
implementation is given in Section 5.1). Except for the last
page in the list, a useful page can become non-useful because of an oid deletion (which will bring
the number of alive oids in this page below the threshold). The last page can turn from useful to
non-useful when it gets full of records (an event caused by an oid insertion). At that time, if the
page's total number of alive oids is less than V the page becomes non-useful. Otherwise it continues
to be a regular useful page. When the last page gets full, a new last page is added in the list.
Finding the state b j (t) is again equivalent to finding the useful pages in lb j (t). We will use two
extra structures. The first structure is an array FT j (t) which for any time t provides access to the
first useful page in lb j (t). Entries in array FT j have the form <time, pid> where pid is a page address.
If the first useful page of the list changes at some time t, a new entry with time t and the pid of the new
first useful page is appended in FT j . This array can be implemented as a multilevel, paginated index
since entries are added to it in increasing time order.
To find the remaining useful pages of lb j (t), every useful page must know which is the next
useful page after it in the list. This is achieved by the second structure which is implemented inside
every list page. In particular, this structure has the form of an array stored in the page area of size
B - B r . Let NT(A) be the array inside page A. This array is maintained for as long as the page is
useful. Entries in NT(A) are also of the form <time, pid>, where pid corresponds to the address of
the next useful page after useful page A.
If during the usefulness period of some page A, its next useful page changes many times, NT(A)
can become full. Assume this scenario happens at time t and let C be the useful page before page
A. Page A is then artificially turned to non-useful (even if it still has more than V alive records) and
is replaced by a copy of it, page A'. We call this process artificial, since it was not caused by an
oid insertion/deletion to this page, rather it is due to a change in a page ahead. The new page A'
has the same alive records as A but an empty NT(A'). A new entry is then added in NT(C) with
the pid of A'. The first entry of NT(A') has the pid of the useful page (if any) that was after page A at t.
If all useful list pages until page A had their NT arrays full just before time t, the above process
of artificially turning useful pages to non-useful can propagate all the way to the top of the list. If
it reaches the first useful page in the list, a copy of it is created and array FT j is updated. However,
this does not happen often. Figure 3 shows an example of how arrays NT() and FT j are maintained.
The need for artificial creation of a copy of page A is for faster query processing. The NT(C)
array enables finding which is the next useful page after C for various time instants. Assume for
the moment that no new copy of page A is created, but instead NT(A) is allowed to grow beyond the
available area of page A, into additional pages. The last entry on NT(C) would then still point
to page A. Locating which is the next page after C at time t would lead to page A but then a serial
search among the pages of array NT(A) is needed. Clearly this approach is inefficient if the useful
page in front of page A changes often. The use of artificial copies guards against similar situations
as the next useful list page for any time of interest is found by one I/O! This technique is a
generalization of the backward updating technique used in [TGH95].
Figure 3: (a) An example evolution for the useful pages of list lb j (t). (b) The corresponding FT j and NT arrays.
From each page only the NT array is shown (each NT array holds a few entries in the B - B r area). Since
the page in front of page A changes often, its NT(A) array fills up and at time t 6 an artificial copy of page A
is created with array NT(A'). Array NT(C) is also updated (with an "artificial" entry) about the artificially
created new page.
Special care is needed when a page turns from useful to non-useful due to an oid deletion/
insertion in this page. To achieve good answer clustering, the alive oids from such a page are
merged with the alive oids of a sibling useful page (if such a sibling exists) to create one (or two,
depending on the number of alive oids) new useful page(s). The new useful page(s) may not be full
of record oids, i.e., future oid insertions can be accommodated there. As a result, when a new oid
is inserted, the list of useful pages is serially searched and the new oid is added in the first useful
page found that has space (in the B r area) to accommodate it. Details are described in the Appendix.
To answer a temporal membership query for oid k at time t, the appropriate bucket b j to which oid
k would have been mapped by the hashing scheme at t must be found. This part is the same as in
the evolving-set approach. Reconstructing the state of bucket b j (t) is performed in two further
steps. First, using t the first useful page in lb j (t) is found by searching array FT j (which corresponds
to searching the time-tree of each bucket in the evolving-set approach). This search is bounded by
O(log_B(n j /B)). The remaining useful pages of lb j (t) (and thus the oids in b j (t)) are found by
locating t in the NT array of each subsequent useful page (instead, the evolving-set approach uses
the access forest of the Snapshot Index). Since all useful pages (except the last in the list lb j (t)) have
at least V alive oids from the answer, the oids in b j (t) are found with an additional O(|b j (t)|/V)
I/O's. The space used by all the evolving-list structures is O(n j /B).
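A sketch of the evolving-list reconstruction follows (helper names and the in-memory layout are ours; FT_j and each NT array are modeled as time-ordered lists of (time, pid) pairs, and records[p] holds the (oid, start, end) records of page p):

```python
import bisect

def entry_at(entries, t):
    """Value of the last (time, value) entry with time <= t, or None if none exists."""
    pos = bisect.bisect_right([e[0] for e in entries], t) - 1
    return entries[pos][1] if pos >= 0 else None

def useful_pages(FT_j, NT, t):
    """Useful pages of list lb_j(t): FT_j gives the first one, each NT array gives the next one."""
    pages, page = [], entry_at(FT_j, t)
    while page is not None:
        pages.append(page)
        page = entry_at(NT.get(page, []), t)   # one I/O per useful page in the real structure
    return pages

def bucket_state(pages, records, t):
    """Alive oids of bucket b_j at time t, scanning only its useful pages."""
    return [oid for p in pages for (oid, start, end) in records[p]
            if start <= t and (end is None or t < end)]
```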
There are two differences between the evolving-list and the evolving-set approaches. First,
updating using the Snapshot Index takes constant expected time, while in the evolving list the whole current
list may have to be searched for adding or deleting an oid. Second, the nature of reconstructing b j (t)
is different. In the evolving-list reconstruction starts from the top of the list pages while in the
evolving-set reconstruction starts from the last page of the bucket. This may affect the search for a
given oid depending whether it has been placed near the top or near the end of the bucket.
5. Performance Analysis
We compared Partially Persistent Hashing (PPH) against Linear Hashing (in particular Atemporal
linear hashing, to be discussed later), the MVBT and the R*-tree. The implementation and the
experimental setup are described in 5.1, the data workloads in 5.2 and our findings in 5.3.
5.1 Method Implementation - Experimental Setup.
We set the size of a page to hold 25 oid records (B=25). An oid record has the following form, <oid,
start_time, end_time, ptr>, where the first field is the oid, the second is the starting time and the
third the ending time of this oid's lifespan. The last field is a pointer to the actual object (which
may have additional attributes).
We first discuss the Atemporal linear hashing (ALH). It should be clarified that ALH is not the
ephemeral linear hashing whose evolution the partially persistent hashing observes and stores.
Rather, it is a linear hashing scheme that treats time as just another attribute. This scheme simply
maps objects to buckets using the object oids. Consequently, it "sees" the different lifespans of the
same oid as copies of the same oid. We implemented ALH using the scheme originally proposed
by Litwin in [Lin80]. For split functions we used the hashing-by-division functions hi(oid) = oid mod 2^i M. So
as to get good space utilization, controlled splits were employed. The
lower and upper thresholds (namely f and g) had values 0.7 and 0.9 respectively.
Another approach for Atemporal hashing would be a scheme which uses a combination of oid
and the start_time or end_time attributes. However this approach would still have the same
problems as ALH for temporal membership queries. For example, hashing on start_time does not
help for queries about time instants other than the start_times.
The Multiversion B-tree (MVBT) implementation is based on [BGO+96]. For fast updating the
MVBT uses a buffer that stores the pages in the path to the last update (LRU buffer replacement
policy is used). Buffering during updating can be very advantageous since updates are directed to
the most current B-tree, which is a small part of the whole MVBT structure. In our experiments we
set the buffer size to 10 pages. The original MVBT uses this buffer for queries, too. However, for
a fair comparison with the other methods when measuring the query performance of the MVBT we
invalidate the buffer content from previous queries. Thus the measured query performance is
independent from the order in which queries are executed. Finally, in the original MVBT, the
process of answering a query starts from a root* array. For every time t, this array identifies the
root of the B-tree at that time (i.e., where the search for the query should start from). Even though
the root* can increase with time, it is small enough to fit in main memory. Thus we do not count I/O
accesses for searching root*.
As with the Snapshot Index, a page in the MVBT is "alive" as long as it has at least q alive
records. If the number of alive records falls below q this page has to be merged with a sibling (this
is called a weak version underflow). On the other extreme, if a page has already B records (alive or
not) and a new record has to be added, the page splits (a page overflow). Both conditions need
special handling. First, a time-split happens (which is like the copying procedure of the Snapshot
Index); the resulting new page then has to be incorporated in the structure. The MVBT requires that the number of alive records in the
new page should be between q+e and B-e where e is a predetermined constant. Constant e works
as a buffer that guarantees that the new page can be split or merged only after at least e new
changes. Not all values for q, e and B are possible as they must satisfy some constraints; for details
we refer to [BGO+96]. In our implementation we set e = 4. The directory pages of the
MVBT have the same format as the data pages.
For the Partially Persistent Hashing we implemented both the set-evolution (PPH-s) and the
list-evolution (PPH-l) approaches. Both approaches observe an ephemeral linear hashing LH(t)
whose load l(t) lies between f=0.1 and g=0.2. Array H which identifies the hashing scheme used at
each time is kept in main-memory, so no I/O access is counted for using this structure. This is
similar to keeping the root* array of the MVBT in main memory. In all our experiments the size
of array H is never greater than 15 KB. Unless otherwise noted, PPH-s was implemented with u = 0.3
(various other values for usefulness parameter u were also examined). Since the entries in the
time-tree associated with a bucket have half the oid record size, each time-tree page can hold up to
50 entries.
In the PPH-l implementation, the space for the oid records B r can hold 20 such records. The
value of V is set equal to 5. This means that a page in the list can be useful as
long as the number of alive oids in the page is greater or equal to 5. The remaining space in a list
page (of size 5 oid records) is used for the page's NT array. Similarly with the time-arrays, NT
arrays have entries of half size, i.e., each page can hold 10 NT entries. For the same reason, the
pages of each FT j array can hold up to 50 entries.
For the R*-tree method we used two implementations, one with intervals (Ri) in a two-dimensional
space, and another with points in a three-dimensional space (Rp). The Ri
implementation assigns to each oid its lifespan interval; one dimension is used for the oids and one
for the lifespan intervals. When a new oid k is added in set S at time t, a record <k, [t, now), ptr> is
added in an R*-tree data page. If oid k is deleted at time t', the record is updated to <k, [t, t'), ptr>.
Directory pages include one more attribute per record so as to represent an oid range. The Rp
implementation has similar format for data pages, but it assigns separate dimensions for the
start_time and the end_time of the object's lifespan interval. Hence a directory page record has
seven attributes (two for each of the oid, start_time, end_time and one for the pointer). During
updating, both R*-tree implementations use a buffer (10 pages) to keep the pages in the path
leading to the last update. As with the MVBT, this buffer is not used for the query phase.
5.2 Workloads.
Various workloads were used for the comparisons. Each workload contains an evolution of a
dataset S and temporal membership queries on this evolution. More specifically, a workload is
defined by triplet W=(U,E,Q), where U is the universe of the oids (the set of unique oids that
appeared in the evolution of set S), E is the evolution of set S and Q is a collection
of queries, where Q k denotes the set of queries that corresponds to oid k.
Each evolution starts at time 1 and finishes at time MAXTIME. Changes in a given evolution
were first generated per object oid and then merged. First, for each object with oid k, the number
n k of the different lifespans for this object in this evolution was chosen. The choice of n k was made
using a specific random distribution function (namely Uniform, Exponential, Step or Normal)
whose details are described in the next section. The start_times of the lifespans of oid k were
generated by randomly picking n k different starting points in the set {1,., MAXTIME}. The
end_time of each lifespan was chosen uniformly between the start_time of this lifespan and the
start_time of the next lifespan of oid k (since the lifespans of each oid k have to be disjoint). Finally
the whole evolution E for set S was created by merging the evolutions for every object.
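The generation procedure can be summarized by the sketch below (the distribution of lifespans per oid is abstracted as a callable; parameter names are ours and the handling of each oid's last lifespan is a simplification):

```python
import random

def generate_evolution(U, lifespans_per_oid, MAXTIME=50000):
    """Create additions/deletions per oid and merge them into one evolution."""
    events = []                                        # tuples (time, oid, '+'/'-')
    for k in U:
        n_k = lifespans_per_oid(k)                     # e.g. uniform, exponential, step, normal
        starts = sorted(random.sample(range(1, MAXTIME + 1), n_k))
        for j, s in enumerate(starts):
            nxt = starts[j + 1] if j + 1 < len(starts) else MAXTIME + 1
            e = random.randint(s + 1, nxt)             # end chosen between this and the next start
            events.append((s, k, '+'))
            if e <= MAXTIME:                           # otherwise the oid stays alive at MAXTIME
                events.append((e, k, '-'))
        # lifespans of the same oid are disjoint by construction
    return sorted(events)

# example: a Uniform-30 style workload (20 to 40 lifespans per oid)
evolution = generate_evolution(range(1, 101), lambda k: random.randint(20, 40))
```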
For another "mix" of lifespans, we also created an evolution that picks the start_times and the
length of the lifespans using Poisson distributions; we called it the Poisson evolution.
A temporal membership query in query set Q is specified by tuple (oid,t). The number of
queries Q k for every object with oid k was chosen randomly between 10 and 20; thus on average
about 15 queries per oid were generated. To form the (k,t) query tuples the corresponding time instants t were selected using a
uniform distribution from the set {1, . , MAXTIME}. The MAXTIME is set to 50000 for all
workloads.
Each workload is described by the distribution used to generate the object lifespans, the number
of different oids |U|, the total number of changes in the evolution n (object additions and deletions),
the average number of object additions (lifespans) per oid NB, and the total number of queries.
5.3 Experiments.
First, the behavior of all implementations was tested using a basic Uniform workload. The number
of lifespans per object follows a uniform distribution between 20 and 40. The total number of
distinct oids was 8000 and the evolution contained approximately 0.5M real changes, about half of which were object
additions. Hence the average number of lifespans per oid was NB ~ 30 (we refer to this workload
as Uniform-30). The number of queries was 115878.
Figure
4.a presents the average number of pages accessed per query by all methods. The PPH
methods have the best performance, about two pages per query. The ALH approach uses more
query I/O (about 1.5 times in this example) because of the larger buckets it creates. The MVBT
uses about twice as many I/O's than the PPH approaches since a tree has to be traversed per query.
The Ri uses more I/O's per query than the MVBT, mainly due to node overlapping and larger tree
height (which in the Ri structure relates to the total number of oid lifespans while in the MVBT
corresponds to the number of alive oids at the time specified by the query). The problem of node
overlapping is even greater with the query performance of the Rp tree, which in Figure 4.a has been
truncated to fit the graph (Rp used an average of 44 I/O's per query in this experiment). In the Rp
all alive oids have the same end_time (now) that causes them to be clustered together even though
they have different oids (that is, overlapping extends to the oid dimension as well). As observed
elsewhere [KTF98], transaction-time lifespans are not maintained efficiently by plain R-trees.
Figure
4.b shows the average number of I/O's per update. The best update performance was
given by the PPH-s method. The MVBT had the second best update performance. It is larger than
PPH-s since MVBT is traversing a tree for each update (instead of quickly finding the location of
the updated element through hashing). The update cost of Ri follows; it is larger than that of the MVBT since
the size of the tree traversed is related to all oid lifespans (while the size of the MVBT tree traversed
is related to the number of alive oids at the time of the update). The ALH and PPH-l used even
larger update processing. This is because in ALH all lifespans with the same oid are thrown on the
same bucket thus creating large buckets that have to be searched serially during an update. In PPH-
l the NT array implementation inside each page limits the actual page area assigned for storing oids
and thus increases the number of pages used per bucket. The Rp tree uses even larger update
processing which is due to the bad clustering on the common now end_time.
The space consumed by each method appears in figure 4.c. The ALH approach uses the
smallest space since it stores a single record per oid lifespan and uses "controlled" splits with high
utilization (f and g values). The PPH methods have also very good space utilization with the PPH-
s being very close to ALH. PPH-l uses more space than PPH-s because the NT array
implementation reduces page utilization. The R-tree methods follow; Rp uses slightly less space
than the Ri because paginating intervals (putting them into bounding rectangles) is more
demanding than with points. Note that similarly to ALH, both R* methods use a single record per
oid lifespan; the additional space is mainly because the average R-tree page utilization is about
65%. The MVBT has the largest space requirements, using about twice as much space as the ALH and
PPH-s methods.
Figure 4: (a) Query, (b) Update, and (c) Space performance for all implementations on the Uniform-30
workload (the query cost of Rp is truncated to fit the graph).
In summary, the PPH-s has the best overall performance. Similarly with the comparison
between ephemeral hashing and B-trees, the MVBT tree behaves worse than temporal hashing
(PPH-s) for temporal membership queries. The ALH is better than PPH-s only in space
requirements, and not by a significant margin. The R-tree based methods are much worse than PPH-
s in all three performance criteria.
To consider the effect of lifespan distribution all approaches were compared using four
additional workloads (namely the exponential, step, normal and poisson). These workloads had the
same number of distinct oids (8000), the same number of queries (115878) and similar n (~0.5M) and NB (~30)
parameters. The Exponential workload generated the n k lifespans per oid using an
exponential distribution with probability density function f(x) = (1/30)e^(-x/30), i.e., with mean 30.
The total number of changes was n = 487774, the total number of object additions was
245562 and NB ~ 30.7. In the Step workload the number of lifespans per oid follows a step
function. The first 500 oids have 4 lifespans, the next 500 have 8 lifespans and so on, i.e., for every
500 oids the number of lifespans advances by 4. In this workload we had n ~ 0.5M
and NB = 34. The Normal workload used a normal distribution with mean 30; here, too,
the parameters were n ~ 0.5M and NB ~ 30.
For the Poisson workload the first lifespan for every oid was generated randomly between time
instants 1 and 500. The length of a lifespan was generated using a Poisson distribution with mean
1100. Each next start time for a given oid was also generated by a Poisson distribution with mean
value 500. For this workload we had n ~ 0.5M and NB ~ 31. The main
characteristic of the Poisson workload is that the number of alive oids over time can vary from a
very small number to a large proportion of |U|, i.e., there are time instants where the number of
alive oids is some hundreds and other time instants where almost all distinct oids are alive.
Figure
5 presents the query, update and space performance under the new workloads. For
simplicity only the Ri method is presented among the R-tree approaches (as with the uniform load,
the Rp used consistently more query and update than Ri and similar space). The results resemble
the previous uniform workload. As before, the PPH-s approach has the best overall performance
using slightly more space than the "minimal" space of ALH. PPH-l has the same query
performance and comparable space with PPH-s but uses much more updating. Note that in Figure
5.a, the query performance of Ri has been truncated to fit the graph (on average, Ri used about 10
to 13 I/O's per query in the exponential, step, normal and poisson workloads).
Similarly, in Figure 5.c the space of the MVBT is truncated (MVBT used about 26K, 29K, 25K
and 35.5K pages for the respective workloads).
The effect of the number of lifespans per oid was tested using eight uniform workloads with
varying average number of lifespans. All used the same number of distinct oids and the same number of
queries (~115K). The other parameters are shown in Table 1.
The results appear in Figure 6. The query performance of atemporal hashing deteriorates as NB
increases since buckets become larger (Figure 6.a). The PPH-s, PPH-l and MVBT methods have a
query performance that is independent of NB (this is because in all three methods the NB lifespans
of a given oid appear at different time instants and thus do not interfere with each other). The query
performance of Ri was much higher and it is truncated from Fig. 6.a. Interestingly, the Ri query
performance decreases gradually as NB increases (from 12.6 I/O's to 9.4 I/O's). This is because Ri
clustering improves as NB increases (there are more records with the same key).
PPH-s outperforms all methods in update performance (Figure 6.b). As with querying, the
updating of PPH-s, PPH-l and MVBT is basically independent of NB. Because of better clustering
with increased NB, the updating of Ri gradually decreases. In contrast, because increased NB
implies larger bucket sizes, the updating of ALH increases. The space of all methods increases with
NB as there are more changes n per evolution (Table 1). The ALH has the lowest space, followed
by the PPH-s; the MVBT has the steeper space increase (for NB values 80 and 100, MVBT used
~68K and 84.5K pages).
The effect of the number of distinct oids used in an evolution was examined by considering
three variations of the uniform workload. The number of distinct oids |U| was: 5000, 8000 and
12000, respectively.
Table 1: Parameters (total number of changes n and object additions) of the eight uniform workloads with varying NB.
Figure 5: (a) Query, (b) Update, and (c) Space performance for the ALH, PPH-s, PPH-l, MVBT and Ri methods
using the exponential, step, normal and poisson workloads with 8K oids, n ~ 0.5M and NB ~ 30.
Figure 6: (a) Query, (b) Update, and (c) Space performance for the ALH, PPH-s, PPH-l, MVBT and Ri methods
using various uniform workloads with varying NB.
All workloads had a similar average number of lifespans per distinct oid (NB ~
30). The other parameters appear in Table 2. The results appear in Figure 7. The query performance
of PPH-s and PPH-l is independent of |U|. In contrast, it increases for both MVBT and Ri (the Ri
used about 10.4, 12 and 13 I/O's per query). The reason for this increase is that there are more oids
stored in these tree structures thus increasing the structure's height (this is more evident in Ri as
all oids appear in the same tree). In theory, ALH should also be independent of the universe size
|U|; the slight increase for ALH in Figure 7.a is due to the "controlled" splits policy that constrained
ALH to a given space utilization. Similar observations hold for the update performance. Finally,
the space of all methods increases because n increases (Table 2).
Table 2: Parameters (n, NB, number of queries) of the three uniform workloads with varying |U|.
Figure 7: (a) Query, (b) Update, and (c) Space performance for the ALH, PPH-s, PPH-l, MVBT and Ri methods
using various uniform workloads with varying |U| (5000, 8000 and 12000 distinct oids).
From the above experiments, the PPH-s method appears to have the most competitive
performance among all solutions. As mentioned in section 4.1.2, the PPH-s performance can be
further optimized through the setting of usefulness parameter u. Figure 8 shows the results for the
basic Uniform-30 workload but with different
values of u. As expected, the best query performance occurs if u is greater than the maximum load
of the observed ephemeral hashing. For these experiments the maximum load was 0.2. As asserted
in
Figure 8.a, the query time is minimized for u >= 0.3. The update cost is similarly minimized (Figure
8.b) for u's above 0.2, since after that point, the alive oids are compactly kept into few pages that
can be updated easier (for smaller u's the alive oids can be distributed into more pages which
increases the update process). Figure 8.c shows the space of PPH-s. For u's below the maximum
load the alive oids are distributed among more data pages, hence when such a page becomes non-useful
it contains less alive oids and thus less copies are made, resulting in smaller space
consumption. Using this optimization, the space of PPH-s can be made similar to that of the ALH
at the expense of some increase in query/update performance.
Figure 8: (a) Query, (b) Update, and (c) Space performance for PPH-s on a uniform workload with varying
values of the usefulness parameter u.
6. Conclusions and Open Problems
This paper addressed the problem of Temporal Hashing, or equivalently, how to support temporal
membership queries over a time-evolving set S. An efficient solution termed partially persistent
hashing (PPH) was presented. For queries and updates, this scheme behaves as if a separate,
ephemeral dynamic hashing scheme is available on every state assumed by set S over time.
However the method still uses linear space. By hashing oids to various buckets over time, PPH
reduces the temporal hashing problem into reconstructing previous bucket states. Two flavors of
partially persistent hashing were presented, one based on an evolving-set abstraction (PPH-s) and
one on an evolving-list (PPH-l). They have similar query and comparable space performance but
PPH-s uses much less updating. Both methods were compared against straightforward approaches
namely, traditional (atemporal) linear hashing scheme, two R*-tree implementations and the
Multiversion B-Tree. The experiments showed that PPH-s has the most robust performance among
all approaches. Partially persistent hashing should be seen as an extension of traditional external
dynamic hashing in a temporal environment. The methodology is independent from which
ephemeral dynamic hashing scheme is used. While the paper considers linear hashing, it applies to
other dynamic hashing schemes as well. There are various open and interesting problems.
Traditionally hashing has been used to speed up join computations. We currently investigate the
use of temporal hashing to speed up temporal joins [SSJ94]. Another problem is to extend temporal
membership queries to time intervals (find whether oid k was in any of the states set S had over an
interval T). The discussion in this paper assumes temporal membership queries over a linear
transaction-time evolution. It is interesting to investigate hashing in branched transaction
environments [LST95].
Acknowledgments
We would like to thank B. Seeger for kindly providing us with the R* and MVB-tree code. Part of this work was
performed while V.J. Tsotras was on a sabbatical visit to UCLA; we would thus like to thank Carlo Zaniolo
for his comments and hospitality.
--R
"Performance evaluation of a temporal database management system"
"An Asymptotically Optimal Multiversion B-tree"
"The R*-tree: An efficient and Robust Access Method for Points and Rectangles"
"The Ubiquitous B-Tree"
Introduction to Algorithms
"Dynamic Perfect Hashing: Upper and Lower Bounds"
"Making Data Structures Persistent"
Fundamentals of Database Systems
"Nonoblivious Hashing"
"R-Trees: A Dynamic Index Structure for Spatial Searching"
"A Consensus Glossary of Temporal Database Concepts"
"Designing Access Methods for Bitemporal Databases"
"Linear Hashing: A New Tool for File and Table Addressing"
"LH*-A Scalable, Distributed Data Structure"
"Access Methods for Multiversion Data"
"On Historical Queries Along Multiple Lines of Time Evolution"
"Temporal and Real-Time Databases: A Survey"
Database Management Systems
Prentice Hall
"Timestamping After Commit"
"A Taxonomy of Time in Databases"
"Tradeoffs in Processing Complex Join Queries via Hashing in Multiprocessor Database Machines"
"Branched and Temporal Index Structures"
"Efficient Evaluation of the Valid-Time Natural Join"
"A Comparison of Access Methods for Time-Evolving Data"
"Efficient Management of Time-Evolving Databases"
"An Extensible Notation for Spatiotemporal Index Queries"
"The Snapshot Index, an I/O-Optimal Access Method for Timeslice Queries"
"An Efficient Multiversion Access Structure"
--TR
--CTR
Huanzhuo Ye , Hongxia Luo , Kezhen Song , Huali Xiang , Jing Chen, Indexing moving objects based on 2 n index tree, Proceedings of the 6th Conference on 6th WSEAS Int. Conf. on Artificial Intelligence, Knowledge Engineering and Data Bases, p.175-180, February 16-19, 2007, Corfu Island, Greece
S.-Y. Chien , V. J. Tsotras , C. Zaniolo, Efficient schemes for managing multiversionXML documents, The VLDB Journal The International Journal on Very Large Data Bases, v.11 n.4, p.332-353, December 2002 | hashing;transaction time;access methods;data structures;temporal databases |
628284 | Efficient Queries over Web Views. | AbstractLarge Web sites are becoming repositories of structured information that can benefit from being viewed and queried as relational databases. However, querying these views efficiently requires new techniques. Data usually resides at a remote site and is organized as a set of related HTML documents, with network access being a primary cost factor in query evaluation. This cost can be reduced by exploiting the redundancy often found in site design. We use a simple data model, a subset of the Araneus data model, to describe the structure of a Web site. We augment the model with link and inclusion constraints that capture the redundancies in the site. We map relational views of a site to a navigational algebra and show how to use the constraints to rewrite algebraic expressions, reducing the number of network accesses. We show that similar techniques can be used to maintain materialized views over sets of HTML pages. | Introduction
As the Web becomes a preferred medium for disseminating information of all
kinds, the sets of pages at many Web sites have come to exhibit regular and
complex structure not unlike the structures that are described by schemes in
database systems. For example, Atzeni et al. [4] show how to describe the structure
of the well-known Database and Logic Programming Bibliography at the
University of Trier [9] using their own data model, the Araneus data model.
As these sites become large, manual navigation of these hypertext structures
becomes clearly inadequate to retrieve information effectively. Typically,
ad-hoc search interfaces are provided, usually built around full-text indexing
of all the pages at the site. However, full-text queries are good for retrieving
documents relevant to a set of terms, but not for answering precise questions,
e.g. "find all authors who had papers in the last three VLDB conferences." If we
can impose on such a site a database abstraction, say a relational schema, we
can then use powerful database query languages such as SQL to pose queries,
and leave it to the system to translate these declarative queries into navigation
of the underlying hypertext.
In this paper we explore the issues involved in such a translation. In general,
a declarative query will admit different translations, corresponding to different
navigation paths to get to the data; for example, the query above could be
answered by:
1. Starting from the home page, follow the link to the list of conferences, from
here to the VLDB page, then to each of the last three VLDB conferences,
extract a list of authors for each, and intersect the three lists.
2. As above, but go directly from the home page to the list of database conferences,
a smaller page than the one that lists all conferences.
3. As above, but go directly from the home page to the VLDB page (there is a
link).
4. Go through the list of authors, for each author to the list of their publications,
and keep those who have papers in the last three VLDB's.
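To make the difference concrete, the following Python sketch (purely hypothetical: the fetch interface, page attributes and URL lists are ours, not from the paper or the actual site) expresses the first and the fourth strategy in terms of page fetches, the cost measure used below.

```python
# Hypothetical sketch: `fetch(url)` retrieves one HTML page and exposes parsed fields;
# the number of fetch() calls models the page-access cost discussed in the text.
def plan_via_conferences(fetch, vldb_edition_urls):
    """Strategies 1-3: reach the three VLDB edition pages (a handful of fetches)."""
    author_sets = []
    for url in vldb_edition_urls:             # e.g. the last three VLDB editions
        page = fetch(url)                     # one page per edition
        author_sets.append(set(page.authors))
    return set.intersection(*author_sets)

def plan_via_authors(fetch, author_index_url, vldb_years):
    """Strategy 4: scan every author page (thousands of fetches for the Trier bibliography)."""
    answer = set()
    for author_url in fetch(author_index_url).author_links:
        page = fetch(author_url)              # one page per author
        if vldb_years <= {pub.venue_year for pub in page.publications}:
            answer.add(page.name)
    return answer
```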
If we use number of pages accessed as a rough measure of query execution cost,
we see there are large differences among these possible access paths, in particular
between the last one and the other three. There are over 16,000 authors
represented in this bibliography, so the last access path would retrieve several
orders of magnitude more pages than the others. Given these large performance
differentials, a query optimizer is needed to translate a declarative query to an
efficient navigation plan, just as a relational optimizer maps an SQL query to
an efficient access plan. In fact, there is an even closer similarity to the problem
of mapping declarative queries to network and object-oriented data models, as
we discuss in Section 2.
To summarize, our approach is to build relational abstractions of large and
fairly well-structured web sites, and to use an optimizer to translate declarative
queries on these relational abstractions to efficient navigation plans. We use a
simple subset of the Araneus data model (adm) to describe web sites, augmenting
it with link constraints that capture the redundancy present in many
web sites. For example, if we want to know who were the editors of VLDB '96,
we can find this information in the page that lists all the VLDB conferences; we
do not need to follow the link from this page to the specific page for VLDB '96,
where the information is repeated. We also use inclusion constraints, that state
that all the pages that can be accessed using a certain path can also be accessed
using another path. We use a navigational algebra as the target language that
describes navigation plans, and we show how to use rewrite rules in the spirit of
relational optimizers, and taking link and inclusion constraints into account, to
reduce the number of page accesses needed to answer a query.
When a query on the relational views is issued, it is repeatedly rewritten
using the rules. This process generates a number of navigation plans to compute
the query; the cost of these plans is then estimated based on a simple cost
model that takes network accesses as the primary cost parameter. In this way,
an efficient execution plan is selected for processing the query.
Query optimization is hardly a new topic; however, doing optimization on
the Web is fundamentally different from optimizing relational or OO databases.
In fact, the Web exhibits two peculiarities: the cost model and the lack of control
over Web sites: (i) the cost model: since data reside at a remote site, our cost
model is based on the number of network accesses, instead of I/O and CPU cost,
and we allocate no cost to local processing such as joins; (ii) the lack of control
over the site: unlike ordinary databases, sites are autonomous and beyond the
control of the query system; first, it is not possible to influence the organization
of data in the site; second, the site manager inserts, deletes and modifies pages
without notifying remote users of the updates.
These points have fundamental implications on query processing. The main
one is that we cannot rely on auxiliary access structures besides the ones already
built right into the HTML pages. Access structures - like indices or class extents -
are heavily used in optimizing queries over relational and object-oriented
databases [14]. Most of the techniques proposed for query optimization rely on
the availability of suitable access structures. One might think of extending such
techniques to speed up the evaluation of queries by, for example, storing URLs in
some local data structures, and then using them in query evaluation. However,
this solution is in general unfeasible because, after the data structures have been
constructed, they have to be maintained; and since our system is not notified
of updates to pages, the only way of maintaining these structures is to actually
navigate the site at query time checking for updates, which in general has a cost
comparable to the cost of computing the query itself.
We therefore start our analysis under the assumption that the only access
structures to pages are the ones built right into the hypertext. Due to space
limitations, we concentrate on the issue of mapping queries on virtual relational
views to navigation of the underlying hypertext, and develop an algorithm for
selecting efficient execution plans, based on a suitable cost function. In [10],
we study the problem of querying materialized views, and show how the same
techniques developed for virtual views can be extended to the management of
materialized views.
Outline of the Paper The outline of the paper is as follows. We discuss related
work in Section 2. Sections 3 and 4 present our data model and our navigational
algebra; the problem of querying virtual views is introduced in Section 5; the
rewrite rules and the optimization algorithm are presented in Section 6; Section 7
discusses several interesting examples. Due to space limitations, the presentation
is mainly informal. Details can be found in [10].
2 Related Work
Query optimization Our approach to query optimization based on algebraic
rewriting rules is inspired on relational and object-oriented query optimization
(e. g., [18], [5]). This is not surprising, since it has been noted in the context
of object-oriented databases that relational query optimization can be well extended
to complex structures ([14], [7]). However, the differences between the
problem we treat here and conventional query optimization, which we listed in
the Introduction, lead to rather different solutions.
Optimizing path expressions Evaluating queries on the Web has some points of
contact with the problem of optimizing path-expressions [22] in object-oriented
databases (see, for example, [6], [14]). Since path-expressions represent a powerful
means to express navigation in object databases, a large body of research
about query processing has been devoted to their optimization. In this research,
the focus is on transforming pointer chasing operations - which are considered
rather expensive - into joins of pointer sets stored in auxiliary access structures,
such as class extents [7], access support relations [8] and join indices [19], [20].
Although it may seem that a similar approach may be extended to the Web,
we show that the more involved nature of access paths in Web sites and the
absence of ad-hoc auxiliary structures introduces a number of subtleties. We
compare two main approaches to query optimization: (i) the first one, that we
might call a "pointer join" approach, is inspired on object-oriented query op-
timization: it aims at reducing link traversal by manipulating (joining) pointer
sets; (ii) the second is what we call a "pointer chase" approach, in which links
between data are used to restrict network access to relevant items. An interesting
result is that, in our cost model, sometimes navigation is less expensive than
joins. This is different from object-oriented databases, where the choice between
the two is generally in favor of the former [6].
Relational Views over Network Databases The idea of managing relational views
over hypertextual sources is similar to some proposals (e.g., [21], [13], [16]) for
accessing network databases through relational views: links between pages may
recall set types that correlate records in the network model. However, in these
works the focus is more on developing tools and methods for automatically deriving
a relational view over a network database, than on query optimization.
More specifically, one of the critical aspects of accessing data in the Web - i.e.,
selecting one among multiple paths to reach data - is not addressed.
Indices in Relational Databases It has already been noted in the previous section
that our approach extends to the Web a number of query optimization
techniques developed in the context of relational databases. Another related issue
is the problem of selecting one among several indices available for a relation
in a relational database (see, for example, [15] and [12]); this has some points
in common with the problem of selecting one among different access paths for
pages in a Web site; however, paths in the Web are usually more complex than
simple indices, and our cost model is radically different from the ones adopted
for relational databases.
Path Constraints The presence of path constraints on Web sites is the core of
the approach developed in [1]. The authors recognize that important structural
information about portions of the Web can be expressed by constraints; they
consider the processing of queries in such a scenario, and discuss how to take
advantage of constraints. The fundamental difference with our approach is that
we work with an intensional description of Web data, based on a database-like
data model, while the authors of [1] reason directly on the extension of data.
3 The Data Model
Our data model is essentially a subset of adm [4], the Araneus data model;
the notion of page-scheme is used to describe the (possibly nested) structure of
a set of homogeneous Web pages; since we are interested in query optimization,
in this paper we enrich the model with constraints that allow reasoning about
redundancies in a site, e.g., multiple paths to reach the same data. From this
perspective, a scheme gives a description of a portion of the Web in terms of
page-schemes and constraints. It is important to note that this description of
the Web portion is usually a posteriori, that is, both the page-schemes and
the constraints are obtained, not from a forward engineering phase, but rather
from a reverse engineering phase, which aims at describing the structure of an
existing site. This analysis is conducted by a human designer, with the help of
a number of tools which semi-automatically analyze the Web in order to find
regular patterns.
3.1 Page-schemes
Each Web page is viewed as an object with a set of attributes. Structurally similar
pages are grouped together into sets, described by page-schemes. Attributes
may have simple or complex type. Simple type attributes are mono-valued and
correspond essentially to text, images, or links to other pages. Complex type,
multi-valued attributes are used to model collections of objects inside pages, and
correspond to lists of tuples, possibly nested.
The set of pages described by a given page-scheme is an instance of the page-
scheme. It is convenient to think of a page-scheme as a nested relation scheme,
a page as a nested tuple on a certain page-scheme, and a set of similar pages
as an instance of the page-scheme. There is one aspect of this framework with
no counterpart in traditional data models. There are pages that have a special
role: they act as "entry-points" to the hypertext. Typically, at least the home
page of each site falls into this category. In adm entry points are modeled as
page-schemes whose instance contains only one tuple.
To formalize these ideas, we need two interrelated definitions for types and
page-schemes, as follows. Given a set of base types containing the types text and
image, a set of attribute names (or simply attributes), and a set of page-scheme
names, the set of web types is defined as follows (each type is either mono-valued
or multi-valued):
- each base type is a mono-valued web type;
- link to P is a mono-valued web type, for each page-scheme name P;
- list of (A1 : T1, ..., An : Tn) is a multi-valued web type, if A1, ..., An
are attributes and T1, ..., Tn are web types.
A page-scheme has the form P(URL, A1 : T1, ..., An : Tn), where P
is a page-scheme name, each Ai is an attribute, each Ti is a web type, and URL
is the Universal Resource Locator of P and forms a key for P.
An entry-point is a pair (P, URL), where P is a page-scheme and URL is the
URL for a page p which is the only tuple in the instance of P . As we suggested
above, an instance of a page-scheme is a page-relation, i.e., a set of nested tuples,
one for each of the corresponding pages, each with a URL and a value of the
appropriate type for each page-scheme attribute. Entry points are page-relations
containing a single nested tuple.
Note that we do not assume the availability of page-scheme extents: the only
pages whose URL is known to the system are instances of entry points; any
other page-relation can only be accessed by navigating the site starting from
some entry point. It is also worth noting that, in order to see pages, i.e., HTML
files, as instances of page-schemes, i.e., nested tuples, we assume that suitable
wrappers [3, 2] are applied to pages in order to access attribute values.
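To make the nested-tuple view concrete, a page and an entry point could be encoded along the following lines; the Python classes and the sample professor entry are only an assumed illustration, not part of adm itself (only the entry-point URL comes from the running example).

from dataclasses import dataclass
from typing import Dict, List, Union

Value = Union[str, "Link", List[Dict[str, "Value"]]]  # text, link, or nested list

@dataclass
class Link:
    target_scheme: str  # name of the target page-scheme, e.g. "ProfPage"
    url: str            # URL of the target page

@dataclass
class Page:
    scheme: str                   # page-scheme name
    url: str                      # the URL attribute acts as a key
    attributes: Dict[str, Value]  # mono-valued and list-valued attributes

# The entry point ProfListPage has a single known instance.
prof_list_entry = Page(
    scheme="ProfListPage",
    url="/prof/index.html",
    attributes={"ProfList": [
        {"PName": "A. Smith", "ToProf": Link("ProfPage", "/prof/smith.html")},
    ]},
)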
We have experimented with our approach on several real-life Web sites. However,
in this paper we choose to refer to a fictional site - a hypothetical university
Web site - constructed in such a way as to allow us to discuss with a single and
familiar example all relevant aspects of our work. Figure 1 shows some examples
of page-schemes from such site.
3.2 Constraints
The hypertextual nature of the Web is usually associated with a high degree of
redundancy. Redundancy appears in two ways. First, many pieces of information
are replicated over several pages. Consider the Department example site: the
name of a Department -say, Computer Science- can be found not only in the
Computer Science Department page but also in many other pages: for instance,
it is presumably used as an anchor in every page in which a link towards the
department page occurs. Second, pages can be usually reached following different
navigational paths in the site. To capture these redundancies so they can be
exploited in query optimization, we enrich the model with two kinds of integrity
constraints: link constraints and inclusion constraints.
Fig. 1. The Web-Scheme of a University Web Site (entry points SessionListPage, ProfListPage and DeptListPage, the page-schemes SessionPage, ProfPage, DeptPage and CoursePage with their text, link and list attributes, and the link and inclusion constraints that hold among them).
A link constraint is a predicate associated with a link. It is used to document
the fact that the value of some attribute in the source page-relation equals the
value of another attribute in a related tuple in the target page-relation. For example,
with respect to Figure 1, this is the case for attribute DName in page-schemes
DeptPage and ProfPage or for attribute Session in SessionPage and CoursePage.
In our model, this can be documented by the link constraints ProfPage.DName =
DeptPage.DName and CoursePage.Session = SessionPage.Session.
To formalize, given two page-schemes P1 and P2 connected by a link ToP2,
a link constraint between P1 and P2 is any expression of the form P1.A = P2.B, where
A is a mono-valued attribute of P1 and B a mono-valued attribute of P2. Given
an instance of the two page-schemes, we say that the constraint holds if, for each pair
of tuples t1 of P1 and t2 of P2, attribute ToP2 of t1 equals attribute URL of t2
if and only if attribute A of t1 equals attribute B of t2.
Besides link constraints, we also extend the model with the notion of inclusion
constraint, in order to reason about containment among different navigation
paths. Consider again Figure 1: it can be seen that page-scheme ProfPage can
be reached either from ProfListPage or from DeptPage or from CoursePage. Since
page-scheme ProfListPage corresponds to the list of all professors, it is easy to
see that the following inclusion constraints hold:
DeptPage.ProfList.ToProf ⊆ ProfListPage.ProfList.ToProf
CoursePage.ToProf ⊆ ProfListPage.ProfList.ToProf
Note that the inverse containments do not hold in general. For example,
following the path that goes through course pages, only professors that teach at
least one course can be reached; but there may be professors who do not teach
any courses.
To formalize, given two page-schemes P1 and P2 and two link attributes L1 of P1 and L2 of P2,
an inclusion constraint is an expression of the form P1.L1 ⊆ P2.L2.
Given instances p1 and p2 of the two page-schemes, we say that the constraint
holds if, for each tuple t1 in p1, there is a tuple t2 in p2 such that the value of L1
in t1 equals the value of L2 in t2. Two constraints of the form P1.L1 ⊆ P2.L2 and P2.L2 ⊆ P1.L1
may be written in compact form as P1.L1 = P2.L2.
Figure 1 also shows link and inclusion constraints for the Department example site.
4 Navigational Algebra
In this Section we introduce the Navigational Algebra (nalg), an algebra
for nested relations extended with navigational primitives. nalg is an abstraction
of the practical language Ulixes [4] and is also similar in expressive power
to (a subset of) WebOQL [2], and it allows the expression of queries against an
adm scheme.
Besides the traditional selection, projection and join operators, in nalg two
simple operators are introduced in order to describe navigation. The first operator,
called unnest page, is the traditional unnest [17] operator (μ), which allows one to
access data at different levels of nesting inside a page; instead of the traditional
prefix notation μ_A(R), in this paper we prefer to use a different symbol, Π, and
an infix notation: R Π A. The second, called follow link and denoted by the symbol
−→, is used to follow links. In some sense, we may say that Π is used to navigate
inside pages, i.e., inside the hierarchical structure of a page, whereas −→ is used
to navigate outside, i.e., between pages.
Note that the selection-projection-join algebra is a sublanguage of our
navigational algebra. In this way, we are able to manipulate both relational
and navigational queries, as it is appropriate in the Web framework. To give an
example, consider Figure 1. Suppose we are interested in the name and e-mail
of all professors in the Computer Science department. To reach data of interest,
we first need to navigate the site as follows:
ProfListPage Π ProfList −→_{ToProf} ProfPage
The semantics of this expression is as follows: entry point ProfListPage is
accessed through its URL; the corresponding nested relation is unnested with
respect to attribute ProfList in order to be able to access attribute ToProf;
finally, each of these links is followed to reach the corresponding ProfPage. Operator
−→_{ToProf} essentially "expands" the source relation by joining it with the
target one; the join is a particular one: since it physically corresponds to following
links, it implicitly imposes the equality of the link attribute in the source
relation with the URL attribute in the target one. We assume that attributes
are suitably renamed whenever needed.
Since the result of the expression above is a (nested) relation containing a
tuple for each tuple in page scheme ProfPage, the query "Name and e-mail of all
professors in the Computer Science Department" can be expressed as follows:
π_{PName,Email}(σ_{DName='C.S.'}(proflistpage Π ProfList −→_{ToProf} ProfPage))    (1)
To formalize, the Navigational Algebra is an algebra for the adm model.
The operators of the navigational algebra work on page-relations and return
page-relations, as follows:
- selection, σ, projection, π, and join, ⋈, have the usual semantics;
- unnest page, Π, is a binary operator that takes as input a nested relation
R and a nested attribute A of R; its semantics is defined as the result of
unnesting R with respect to A: R Π A = μ_A(R);
- follow link, −→, is a binary operator that takes as input two page-relations,
R1 and R2, such that there is a link attribute L from R1 to R2; the execution of
R1 −→_L R2 corresponds to computing the join of R1 and R2 based
on the link attribute, that is, R1 ⋈_{R1.L = R2.URL} R2. It thus
"expands" the source relation following links corresponding to attribute L.
A nalg expression over a scheme S is any combination of operators over page-
relations in S. With each expression it is possible to associate in the usual way a
query tree (or query plan) in which leaf nodes correspond to page-relations and
all other nodes to nalg operators (see Figures 2, 3).
Note that not all navigational algebra expressions are computable. In fact,
the only page-relations in a Web scheme that are directly accessible are the
ones corresponding to entry-points, whose URL is known and documented in
the scheme; thus, in order to be computable, all navigational paths involved in
a query must start from an entry point. We thus define the notion of computable
expression as a navigational algebra expression such that all leaf nodes in the
corresponding query plan are entry points.
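For concreteness, the two navigational operators can be sketched as follows over page-relations encoded as lists of (possibly nested) Python dictionaries; fetch_page is an assumed wrapper that downloads a page by URL and returns its attributes (including its own URL), and is where the network cost of a plan is incurred.

def unnest_page(relation, attr):
    # R Π attr: replace the nested list-valued attribute by one tuple per element.
    result = []
    for t in relation:
        for element in t.get(attr, []):
            new_t = {k: v for k, v in t.items() if k != attr}
            new_t.update(element)  # lift the nested attributes to the top level
            result.append(new_t)
    return result

def follow_link(relation, link_attr, fetch_page):
    # R −→_L: join each tuple with the page its link attribute points to;
    # each call to fetch_page is a network access (a cache could avoid repeats).
    result = []
    for t in relation:
        target_page = fetch_page(t[link_attr])
        joined = dict(t)
        joined.update(target_page)
        result.append(joined)
    return result

# Expression (1) above, in this encoding:
#   profs = follow_link(unnest_page(proflistpage, "ProfList"), "ToProf", fetch_page)
#   result = [{"PName": t["PName"], "Email": t["Email"]}
#             for t in profs if t["DName"] == "C.S."]

In this sketch the number of fetch_page calls made by a plan is exactly the page-access count used as the cost measure in the following sections.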
5 Querying Virtual Views of the Web
Our approach to querying the Web consists in offering a relational view of data
in a portion of the Web, and allowing users to pose queries against this view.
In this paper we concentrate on conjunctive queries. When a query is issued to
the system, the query engine transparently navigates the Web and returns the
answer. We assume that the query engine has knowledge about the following
elements: (i) the adm scheme of the site; (ii) the set of relations offered as
external view to the user; we call these relations external relations; (iii) for each
external relation, one or more computable navigational algebra expressions whose
execution corresponds to materializing the extent of that external relation. Note
that the use of both adm and the navigational algebra is completely transparent
to the user, whose perception of the query process relies only on the relational
view and the relational query language.
To give an example, suppose we consider the Department site whose scheme
is reported in Figure 1. Suppose also we are interested in pieces of information
about Departments, Professors, and Courses. We may decide to offer a view of
the site based on the following external relations:
1. Dept(DName, Address);
2. Professor(PName, Rank, email);
3. ProfDept(PName, DName);
4. Course(CName, Session, Description, Type);
5. CourseInstructor(CName, PName);
In this case, in order to answer queries, the query engine must know the Web
scheme in Figure 1 and the external scheme of items 1-5; moreover, it must also
know how to navigate the scheme in order to build the extent of each external
relation; this corresponds to associating with each external relation one or more
computable nalg expressions, whose execution materializes the given relation.
For example, with respect to external relations Dept, Professor and ProfDept
above, we have the following navigations:
1. Dept(DName, Address)
π_{DName,Address}(DeptListPage Π DeptList −→_{ToDept} DeptPage)
2. Professor(PName, Rank, email)
π_{PName,Rank,Email}(proflistpage Π ProfList −→_{ToProf} ProfPage)
3. ProfDept(PName, DName)
π_{PName,DName}(proflistpage Π ProfList −→_{ToProf} ProfPage)
π_{PName,DName}(DeptListPage Π DeptList −→_{ToDept} DeptPage Π ProfList)
We call these expressions the default navigations associated with external
relations. There may be different alternative expressions associated with the
same external relation (see 3). Note also that, for a given external relation,
there may be other possible navigational expressions, "contained" in the default
navigations. For example, professors may be reached also through their courses.
However, it is not guaranteed that all professors may be reached using this path.
6 Query Optimization
When the system receives a query on the external view, it has to choose an
efficient strategy to navigate the site and answer the query. The optimization
proceeds as follows:
- the original query is translated into the corresponding projection-selection-
join algebraic expression;
- this expression is converted into a computable nalg expression, which is
repeatedly rewritten by applying nalg rewriting rules in order to derive a
number of candidate execution plans, i.e., executable algebra expressions;
- finally, the cost of these alternatives is evaluated, and the best one is chosen,
based on a specific cost model.
Since network accesses are considerably more expensive than memory accesses,
we decide to adopt a simple cost model [10] based on the number of pages downloaded
from the network. Thus, we aim at finding an execution plan for the
query that minimizes the number of pages visited during the navigation. Note
that the cost model can be made more accurate by taking into account also other
parameters such as the size of pages, the deployment of Web servers over the network,
or the query locality [11]. Also, some expensive local operations should be
considered. We omit these details here for the sake of simplicity. In the following
section, we introduce a number of rewriting rules for the navigational algebra
that can be used to this end. In [10] we develop an optimization algorithm based
on these rules that, by successive rewritings, generates a number of candidate
execution plans. Each of these is then evaluated based on the cost function, and
the optimal one is chosen.
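The outer loop of such an optimizer can be pictured as below; the plan encoding (hashable terms such as nested tuples), the rewrite rules and the page-count estimator are placeholders standing for the rules of Section 6.1 and the cost model of [10], not an actual implementation.

def optimize(initial_plans, rewrite_rules, estimated_pages):
    # Enumerate candidate navigation plans by repeated rewriting,
    # then keep the plan with the smallest estimated number of page accesses.
    candidates = set(initial_plans)       # computable plans obtained via Rule 1
    frontier = list(candidates)
    while frontier:
        plan = frontier.pop()
        for rule in rewrite_rules:
            for new_plan in rule(plan):   # a rule yields zero or more rewritings
                if new_plan not in candidates:
                    candidates.add(new_plan)
                    frontier.append(new_plan)
    return min(candidates, key=estimated_pages)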
6.1 nalg Rewriting Rules
The first, fundamental rule simply says that, in order to evaluate a query that
involves external relations, each external relation must be replaced by one of the
corresponding nalg expressions. In fact, the extent of an external relation is not
directly accessible, and must be built up by navigating the site.
Rule 1 [Default Navigation] Each external relation can be replaced by any
of its default navigations.
Other rules are based on simple properties of the navigational algebra, and
thus are rather straightforward, as follows.
Rule 2 Given two relations R1 and R2 such that R1 has an attribute L of type
link to R2, suppose that a link constraint R1.A = R2.B is associated with L;
then A = B holds on every tuple of R1 −→_L R2, i.e., σ_{A=B}(R1 −→_L R2) = R1 −→_L R2.
Rule 3 Given a relation R, suppose X is a set of non-nested attributes of R
and A a nested attribute; then: π_X(R Π A) = π_X(R).
Rule 4 Given a relation R, suppose A is a nested attribute of R, and Y any set
of non-nested attributes of R; then: (i) R ⋈_Y (R Π A) = R Π A; (ii) (R Π A) ⋈_Y R = R Π A.
Rule 5 Given two relations R1 and R2, suppose X is a set of attributes of R1;
suppose also that R1 has an attribute L of type link to R2; then:
(R1 −→_L R2) and (π_{X ∪ {L}}(R1) −→_L R2) agree on X and on the attributes of R2, so R1 can be projected on X ∪ {L} before the link is followed.
The following two rules extend ordinary selection and projection pushing to
navigations. They show how, based on link constraints, selections and projections
can be moved down along a path, in order to reduce the size of intermediate
results, and thus network accesses.
Rule 6 [Pushing Selections] Given two relations R1 and R2 such that R1 has
an attribute L of type link to R2, suppose that a link constraint R1.A = R2.B
is associated with L; then: σ_{B='v'}(R1 −→_L R2) = σ_{A='v'}(R1) −→_L R2.
Rule 7 [Pushing Projections] Given two relations R1 and R2 such that there is
an attribute L in R1 of type link to R2, suppose that a link constraint R1.A =
R2.B is associated with L; then: π_B(R1 −→_L R2) = π_A(R1), i.e., attribute B can be obtained from R1 without following the link (up to renaming A as B).
We now concentrate on investigating the relationship between joins and navigations.
The rules make use of link and inclusion constraints. The first rule
(Rule 8) states that, in all cases in which it is necessary to join the result of two
different paths (denoted by R1 and R2) both pointing to R3, it is possible to
join the two sets of pointers in R1 and R2 before actually navigating to R3, and
then navigate the result.
Rule 8 [Pointer Join] Given relations R1, R2 and R3 such that both R1
and R2 have an attribute L of type link to R3, suppose that a link constraint is
associated with L; then:
(R1 −→_L R3) ⋈ (R2 −→_L R3) = (R1 ⋈_L R2) −→_L R3.
The second rule says that, in some cases, joins between page sets can be
eliminated in favor of navigations; in essence, the join is implicitly computed by
chasing links between pages.
Rule 9 [Pointer Chase] Given relations R1, R2 and R3 such that both R1
and R2 have an attribute L of type link to R3; suppose X is a set of attributes
not belonging to R1; suppose also that a link constraint R2.A = R3.B
is associated with L, and that there is an inclusion constraint R2.L ⊆ R1.L;
then the join of R1 with π_X(R2 −→_L R3) can be replaced by a navigation that chases link L directly from R1, so that the join is computed implicitly by following links.
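As an illustration of how such rewritings can be mechanized, the sketch below encodes plans as nested tuples and pushes a selection through a follow-link in the spirit of Rule 6; the plan encoding and the link-constraint table are assumptions of this sketch, not part of the algebra.

# Plans as nested tuples (assumed encoding):
#   ("select", attr, value, subplan)
#   ("follow", link_attr, source_plan, target_scheme)
# link_constraints maps (target_scheme, target_attr) -> source_attr,
# e.g. SessionPage.Session = CoursePage.Session gives ("CoursePage", "Session"): "Session".

def push_selection(plan, link_constraints):
    # Rewrite select(B = v, follow(L, R1, P2)) into follow(L, select(A = v, R1), P2)
    # whenever a link constraint R1.A = P2.B is associated with link L.
    if plan[0] == "select" and isinstance(plan[3], tuple) and plan[3][0] == "follow":
        _, attr, value, (_, link, source, target) = plan
        source_attr = link_constraints.get((target, attr))
        if source_attr is not None:
            return ("follow", link, ("select", source_attr, value, source), target)
    return plan

# "SessionPage_CourseList" is a leaf standing for SessionPage Π CourseList.
plan = ("select", "Session", "Fall",
        ("follow", "ToCourse", "SessionPage_CourseList", "CoursePage"))
rewritten = push_selection(plan, {("CoursePage", "Session"): "Session"})
# rewritten == ("follow", "ToCourse",
#               ("select", "Session", "Fall", "SessionPage_CourseList"), "CoursePage")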
7 Pointer-join vs Pointer-chase
Rules 8 and 9 essentially correspond to two alternative approaches to query optimization,
which we have called the "pointer join" approach - aiming at reducing
link traversal by pushing joins of link sets - versus a "pointer chase" approach
- in which links between data are followed to restrict network access to relevant
items. For a large number of queries, both strategies are possible. Our
optimization algorithm is such that it generates and evaluates plans based on
both strategies. In the following, we discuss this interaction between join and
navigation in the Web and show that pointer chase is sometimes less expensive
than joins.
Example 1. [Pointer-Join] Consider the scheme in Figure 1, and suppose we
need to answer the following query: "Name and Description of courses taught
by full professors in the Fall session". The query can be expressed against the
external view as follows:
π_{CName,Description}(σ_{Session='Fall', Rank='Full'}(Professor ⋈_{PName} CourseInstructor ⋈_{CName} Course))
Note that there are several ways to rewrite the query. For example, since external
relation CourseInstructor has two different default navigations, by rule 1,
the very first rewrite step originates two different plans. Then, the number of
plans increases due to the use of alternative rules. We examine only two of these
possible rewritings, based on a pointer-join and a pointer-chase strategy, respectively,
and discuss the relationship between the two.
The first rewriting is essentially based on rule 8, and corresponds to adopting
a traditional optimization strategy, in which link chasing is reduced by using
joins. The rewriting goes as follows:
π_{CName,Description}(σ_{Session='Fall', Rank='Full'}(Professor ⋈_{PName} CourseInstructor ⋈_{CName} Course))
rule 1
(1a) π_{CName,Description}(σ_{Session='Fall', Rank='Full'}(
(proflistpage Π ProfList −→_{ToProf} ProfPage)
⋈_{PName} (proflistpage Π ProfList −→_{ToProf} ProfPage Π CourseList)
⋈_{CName} (SessionListPage Π SesList −→_{ToSes} SessionPage Π CourseList −→_{ToCourse} CoursePage)))
rule 4
(1b) π_{CName,Description}(σ_{Session='Fall', Rank='Full'}(
(proflistpage Π ProfList −→_{ToProf} ProfPage Π CourseList)
⋈_{CName} (SessionListPage Π SesList −→_{ToSes} SessionPage Π CourseList −→_{ToCourse} CoursePage)))
rule 8
(1c) π_{CName,Description}(σ_{Session='Fall', Rank='Full'}(
((proflistpage Π ProfList −→_{ToProf} ProfPage Π CourseList)
⋈_{ToCourse} (SessionListPage Π SesList −→_{ToSes} SessionPage Π CourseList))
−→_{ToCourse} CoursePage))
rule 6
(1d) π_{CName,Description}(
((σ_{Rank='Full'}(proflistpage Π ProfList) −→_{ToProf} ProfPage Π CourseList)
⋈_{ToCourse} (σ_{Session='Fall'}(SessionListPage Π SesList) −→_{ToSes} SessionPage Π CourseList))
−→_{ToCourse} CoursePage)
First, by rule 1, each external relation is replaced by a corresponding default
navigation (1a); then, rule 4 is applied to eliminate repeated navigations (1b);
then, by rule 8, the join is pushed down the query plan: in order to reduce the
number of courses to navigate, we join the two pointer sets in CourseList, and
then navigate link ToCourse (1c); finally, based on link constraints, rule 6 is used
to push selections down (1d). The plan can then be further rewritten to push
down projections as well.
A radically different way of rewriting the query is based on rule 9; in this
case, the first two rewritings are the same as above; then, by rule 9, the join is
removed in favor of navigations in the site; finally, projections are pushed down
to generate plan (2d), as follows:
(2d) π_{CName,Description}(σ_{Session='Fall'}(σ_{Rank='Full'}(proflistpage Π ProfList) −→_{ToProf} ProfPage Π CourseList −→_{ToCourse} CoursePage))
Fig. 2. Alternative Plans for the query in Example 1 (the query trees of plans (1d) and (2d)).
Plans corresponding to expressions (1d) and (2d) are represented in Figure 2.
Plan (1d) corresponds to: (i) finding all links to courses taught by full professors;
(ii) finding all links to courses taught in the fall session; (iii) joining the two
sets in order to obtain the intersection; (iv) navigate to the pages in the result.
On the other hand, plan (2d) corresponds to: (i) finding all full professors; (ii)
navigating all courses taught by full professors; (iii) selecting courses in the fall
session.
It is rather easy to see that plan (1d) has a lower cost. In fact, plan (2d)
navigates all courses taught by full professors, and then selects the ones belonging
to the result; on the contrary, in plan (1d), pointers to courses are first selected,
and then only pages belonging to the result are navigated.
The pointer-join strategy chosen by the optimizer in Example 1 is reminiscent
of the ones that have been proposed for relational databases to optimize
selections on a relation with multiple indices [12], and for object-oriented query
processors to reduce pointer chasing in evaluating path-expressions - assuming
a join index on professors and courses is available [8].
However, the following two examples show that, in the Web context, this is
not always the optimal solution: in some cases, pointer-chasing is less expensive.
This is shown in the following example.
Example 2. [Pointer-chasing] Consider the scheme in Figure 1, and suppose
we need to answer the following query: "Name and Email of Professors who
are members of the Computer Science Department, and who are instructors of
Graduate Courses". The query can be expressed on the external view as follows:
π_{PName,Email}(σ_{DName='C.S.', Type='Graduate'}(Course ⋈ CourseInstructor ⋈ Professor ⋈ ProfDept))
We examine the two most interesting candidate execution plans. A pointer-join
approach yields an expression (1), in which rule 8 is applied to join links ToProf
in CoursePage and ProfList before navigating to ProfPage; then, rule 6 is used to
push down selections. The alternative pointer-chasing strategy (2) corresponds
to completely eliminating joins by replacing them with navigations. The query
plans corresponding to these expressions are shown in Figure 3.
Fig. 3. Alternative Plans for the query in Example 2 (the query trees of plans (1) and (2)).
Let us compare the cost of the two plans. Plan (1) intersects two pointer
sets, obtained as follows: the left-hand side path navigates the Computer Science
Department page and retrieves all pointers to its members; the right-hand side
path essentially downloads all session pages, and all course pages, and derives all
pointers to instructors of graduate courses; then, the two pointer sets are joined,
and URLs are navigated to build the result. On the contrary, plan (2) downloads
all pages of professors in the Computer Science Department, and, from those,
the pages of the corresponding courses. Now, a little reflection shows that plan
(2) has a lower cost: navigating all instances of CoursePage in plan (1) makes
it excessively expensive. In fact, due to the topology of the site, we know that
there are several professors for each Department and several courses for each
professor.
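A back-of-the-envelope comparison makes the gap visible; the fan-out numbers below are purely illustrative assumptions, not figures from the example site.

# Assumed fan-outs (illustrative only).
departments = 20
profs_per_department = 30
courses_per_prof = 4
sessions = 3

# Plan (1): both entry points, the C.S. department page, every session page
# and every course page are downloaded before the pointer sets are joined
# (the professor pages in the final result are omitted here).
total_courses = departments * profs_per_department * courses_per_prof
plan1_pages = 2 + 1 + sessions + total_courses

# Plan (2): one entry point, the C.S. department page, its professors' pages
# and only the courses taught by those professors.
plan2_pages = 1 + 1 + profs_per_department + profs_per_department * courses_per_prof

print(plan1_pages, plan2_pages)   # 2406 vs 152 under these assumptions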
An intuitive explanation of this fact is the following: in this case, there is no
efficient access structure to page-scheme CoursePage: in order to select graduate
courses, it is necessary to navigate all courses; this makes the cost excessively
high, and the pointer-join approach fails. On the contrary, following links from
the Computer Science Department yields a reasonable degree of selectivity, that
reduces the number of network accesses.
Based on the previous examples, we can conclude that ordinary pointer-join
techniques do not transfer directly to the Web; a number of new issues have
to be taken into account, namely, the different cost model and the absence of
adequate access structures; in general, several alternative strategies, based on
pointer-chasing, need to be evaluated.
Acknowledgments
The authors would like to thank Paolo Atzeni and Giuseppe Sin-
doni, for useful discussions on early drafts of this paper. Special thanks go to Alessandro
Masci, who implemented the navigational algebra and the relational view manager, provided
insightful comments and supported us in every phase of this work. This work was
in part done while the third author was visiting the University of Toronto. The first and
the third author were partially supported by Università di Roma Tre, MURST, and
Consiglio Nazionale delle Ricerche. The second author was supported by the Natural
Sciences and Engineering Research Council of Canada and the Center for Information
Technology of Ontario.
--R
Regular path queries with constraints.
Restructuring documents
Cut and Paste.
To Weave the Web.
Algebraic optimization of object-oriented query languages.
A general framework for the optimization of object-oriented queries
Query processing in distributed ORION.
Access support relations: An indexing method for object bases.
Database systems and logic programming bibliography site.
Efficient queries over Web views.
Querying the World Wide Web.
Single table access using multiple indexes: Optimization
An intuitive view to normalize network structured data.
An architecture for query optimization.
Querying relational views of networks.
Extended algebra and calculus for :1NF relational databases.
An object-oriented query algebra
Join indices.
Join index hierarchies for supporting efficient navigations in object-oriented databases
Design of relational views over network schemas.
The database language GEM.
--TR | query optimization;query languages;view maintenance |
628293 | A Statistical Method for Estimating the Usefulness of Text Databases. | AbstractSearching desired data on the Internet is one of the most common ways the Internet is used. No single search engine is capable of searching all data on the Internet. The approach that provides an interface for invoking multiple search engines for each user query has the potential to satisfy more users. When the number of search engines under the interface is large, invoking all search engines for each query is often not cost effective because it creates unnecessary network traffic by sending the query to a large number of useless search engines and searching these useless search engines wastes local resources. The problem can be overcome if the usefulness of every search engine with respect to each query can be predicted. In this paper, we present a statistical method to estimate the usefulness of a search engine for any given query. For a given query, the usefulness of a search engine in this paper is defined to be a combination of the number of documents in the search engine that are sufficiently similar to the query and the average similarity of these documents. Experimental results indicate that our estimation method is much more accurate than existing methods. | Introduction
The Internet has become a vast information source in recent years. To help ordinary users find desired data in
the Internet, many search engines have been created. Each search engine has a corresponding database that
defines the set of documents that can be searched by the search engine. Usually, an index for all documents
in the database is created and stored in the search engine. For each term which represents a content word
or a combination of several (usually adjacent) content words, this index can identify the documents that
contain the term quickly. The pre-existence of this index is critical for the search engine to answer user
queries efficiently.
Two types of search engines exist. General-purpose search engines attempt to provide searching capabilities
for all documents in the Internet or on the Web. WebCrawler, HotBot, Lycos and Alta Vista are a few
of such well-known search engines. Special-purpose search engines, on the other hand, focus on documents
in confined domains such as documents in an organization or of a specific interest. Tens of thousands of
special-purpose search engines are currently running in the Internet.
The amount of data in the Internet is huge (it is believed that by the end of 1997, there were more
than 300 million web pages [15]) and is increasing at a very high rate. Many believe that employing a
single general-purpose search engine for all data in the Internet is unrealistic. First, its processing power
and storage capability may not scale to the fast increasing and virtually unlimited amount of data. Second,
gathering all data in the Internet and keeping them reasonably up-to-date are extremely difficult if not
impossible. Programs (i.e., Robots) used by search engines to gather data automatically may slow down
local servers and are increasingly unpopular.
A more practical approach to providing search services to the entire Internet is the following multi-level
approach. At the bottom level are the local search engines. These search engines can be grouped, say based
on the relatedness of their databases, to form next level search engines (called metasearch engines). Lower
level metasearch engines can themselves be grouped to form higher level metasearch engines. This process
can be repeated until there is only one metasearch engine at the top. A metasearch engine is essentially
an interface and it does not maintain its own index on documents. However, a sophisticated metasearch
engine may maintain information about the contents of the (meta)search engines at a lower level to provide
better service. When a metasearch engine receives a user query, it first passes the query to the appropriate
(meta)search engines at the next level recursively until real search engines are encountered, and then collects
Figure 1: A Two-Level Search Engine Organization (a metasearch engine passing query q to search engines 1, ..., n and collecting result r).
(sometimes, reorganizes) the results from real search engines, possibly going through metasearch engines
at lower levels. A two-level search engine organization is illustrated in Figure 1. The advantages of this
approach are (a) user queries can (eventually) be evaluated against smaller databases in parallel, resulting
in reduced response time; (b) updates to indexes can be localized, i.e., the index of a local search engine
is updated only when documents in its database are modified; (Although local updates may need to be
propagated to upper level metadata that represent the contents of local databases, the propagation can
be done infrequently as the metadata are typically statistical in nature and can tolerate certain degree of
inaccuracy.) (c) local information can be gathered more easily and in a more timely manner; and (d) the
demand on storage space and processing power at each local search engine is more manageable. In other
words, many problems associated with employing a single super search engine can be overcome or greatly
alleviated when this multi-level approach is used.
When the number of search engines invokable by a metasearch engine is large, a serious inefficiency may
arise. Typically, for a given query, only a small fraction of all search engines may contain useful documents
to the query. As a result, if every search engine is blindly invoked for each user query, then substantial
unnecessary network traffic will be created when the query is sent to useless search engines. In addition,
local resources will be wasted when useless databases are searched. A better approach is to first identify
those search engines that are most likely to provide useful results to a given query and then pass the query
to only these search engines for desired documents. Examples of systems that employ this approach include
WAIS [12], ALIWEB [13], gGlOSS [6], SavvySearch [9] and D-WISE [27]. A challenging problem with this
approach is how to identify potentially useful search engines. The current solution to this problem is to rank
all underlying databases in decreasing order of usefulness for each query using some metadata that describe
the contents of each database. Often, the ranking is based on some measure which ordinary users may not
be able to utilize to fit their needs. For a given query, the current approach can tell the user, to some degree
of accuracy, which search engine is likely to be the most useful, the second most useful, etc. While such a
ranking can be helpful, it cannot tell the user how useful any particular search engine is.
In this paper, the usefulness of a search engine to a given query is measured by a pair of numbers
(NoDoc, AvgSim), where NoDoc is the number of documents in the database of the search engine that have
high potential to be useful to the query, that is, the similarities between the query and the documents as
measured by a certain global similarity function are higher than a specified threshold and AvgSim is the
average similarity of these potentially useful documents. Note that the global similarity function may or
may not be the same as the local similarity function employed by a local search engine. While the threshold
provides the minimum similarity for a document to be considered potentially useful, AvgSim describes more
precisely the expected quality of each potentially useful document in a database. The two numbers together
characterize the usefulness of each search engine very nicely. NoDoc and AvgSim can be defined precisely as
follows:
d2D"sim(q;d)?T sim(q; d)
(2)
where T is a threshold, D is the database of a search engine and sim(q; d) is the similarity (closeness) between
a query q and a document d in D.
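Definitions (1) and (2) translate directly into code; the sketch below assumes the similarities sim(q, d) of all documents in D have already been computed and collected in a list.

def no_doc(sims, T):
    # Definition (1): number of documents with similarity above the threshold.
    return sum(1 for s in sims if s > T)

def avg_sim(sims, T):
    # Definition (2): average similarity of those same documents.
    useful = [s for s in sims if s > T]
    return sum(useful) / len(useful) if useful else 0.0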
A query is simply a set of words submitted by a user. It is transformed into a vector of terms with weights
[22], where a term is essentially a content word and the dimension of the vector is the number of all distinct
terms. When a term appears in a query, the component of the query vector corresponding to the term,
which is the term weight, is positive; if it is absent, the corresponding term weight is zero. The weight of a
term usually depends on the number of occurrences of the term in the query (relative to the total number
of occurrences of all terms in the query) [22, 26]. It may also depend on the number of documents having
the term relative to the total number of documents in the database. A document is similarly transformed
into a vector with weights. The similarity between a query and a document can be measured by the dot
product of their respective vectors. Often, the dot product is divided by the product of the norms of the
two vectors, where the norm of a vector (v_1, ..., v_m) is defined as sqrt(v_1^2 + ... + v_m^2). This is to normalize similarities into
values between 0 and 1. The similarity function with such a normalization is known as the Cosine function
[22, 26]. Other similarity functions, see for example [21], are also possible.
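As a small illustration, the Cosine function mentioned above can be computed from the two term-weight vectors as follows (a plain sketch; no particular term-weighting scheme is assumed).

from math import sqrt

def cosine_sim(query_vec, doc_vec):
    # Dot product of the term-weight vectors, normalized by the product of their norms.
    dot = sum(u * d for u, d in zip(query_vec, doc_vec))
    norm_q = sqrt(sum(u * u for u in query_vec))
    norm_d = sqrt(sum(d * d for d in doc_vec))
    return dot / (norm_q * norm_d) if norm_q and norm_d else 0.0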
In practice, users may not know how to relate a threshold to the number of documents they like to
retrieve. Therefore, users are more likely to tell a metasearch engine directly the number of most similar
documents (to their query) they like to retrieve. Such a number can be translated into a threshold by the
metasearch engine. For example, suppose we have three databases D1, D2 and D3 such that for a user query
q, for some threshold T we have NoDoc(T, q, D1) = 3, NoDoc(T, q, D2) = 0 and NoDoc(T, q, D3) =
2. In this case, if a user wants 5 documents,
then this threshold T should be used. As a result, 3 documents will be retrieved from D1 and 2 documents from D3.
In general, the appropriate threshold can be determined by estimating the NoDoc of each search engine in
decreasing thresholds.
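A metasearch engine could translate the number of documents a user asks for into a threshold roughly as follows; est_no_doc stands for whatever NoDoc estimator is available (such as the one developed in Section 3), and the fixed step size is an arbitrary choice of this sketch.

def threshold_for(desired, databases, query, est_no_doc, step=0.05):
    # Lower T until the estimated number of sufficiently similar documents,
    # summed over all databases, reaches the number of documents requested.
    T = 1.0
    while T > 0.0:
        if sum(est_no_doc(T, query, db) for db in databases) >= desired:
            return T
        T -= step
    return 0.0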
Note that knowing how useful a search engine is can be very important for a user to determine which
search engines to use and how many documents to retrieve from each selected search engine. For example,
if a user knows that a highly-ranked search engine with a large database has very few useful documents and
searching such a large database is costly, then the user may choose not to use the search engine. Even if
the user decides to use the search engine, the cost of the search can still be reduced by limiting the number
of documents to be returned to the number of useful documents in the search engine. Such an informed
decision is not possible if only ranking information is provided.
This paper has several contributions. First, a new measure is proposed to characterize the usefulness
of (the database of) a search engine with respect to a query. The new measure is easy to understand and
very informative. As a result, it is likely to be more useful in practice. Second, a new statistical method,
a subrange based estimation method, is proposed to identify search engines to use for a given query and
to estimate the usefulness of a search engine for the query. We will show that both NoDoc and AvgSim
can be obtained from the same process. Therefore, little additional effort is required to compute both of
them in comparison to obtaining any one of them only. The method yields very accurate estimates and is
substantially better than existing methods as demonstrated by experimental results. It also guarantees the
following property. Let the largest similarity of a document with a query among all documents in search
engine i be large_sim_i. Suppose large_sim_i > large_sim_j for two search engines i and j, and a threshold of
retrieval T is set such that large_sim_i > T > large_sim_j. Then, based on our method, search engine i will
be invoked while search engine j will not, if the query is a single-term query. This is consistent with the ideal
situation where documents are examined in descending order of similarity. Since a large portion of Internet
queries are single term queries [10, 11], the above property of our approach means that a large percentage of
all Internet queries will be sent to the correct search engines to be processed using our method. In addition,
the new method is quite robust as it can still yield good results even when approximate statistical data are
used by the method. This method is further improved when adjacent terms in a query are combined. Close
to optimal performance is obtained.
The rest of the paper is organized as follows. Section 2 reviews related work. Section 3 presents our
basic method for estimating the usefulness of search engines. Section 4 discusses several issues on how the
proposed method can be applied in practice. Experimental results will be presented in Section 5. Section 6
describes how adjacent query terms can be combined to yield higher performance. Section 7 concludes the
paper.
Related Work
To be able to identify useful search engines to a query, some characteristic information about the database
of each search engine must be stored in the metasearch engine. We call such information the representative
of a search engine. Different methods for identifying useful search engines can be developed based on the
representatives used.
Several metasearch engines have employed various methods to identify potentially useful search engines
[6, 9, 12, 13, 17, 27]. However, the database representatives used in most metasearch engines cannot be used
to estimate the number of globally most similar documents in each search engine [3, 12, 13, 27]. In addition,
the measures that are used by these metasearch engines to rank the search engines are difficult to understand.
As a result, separate methods have to be used to convert these measures to the number of documents to
retrieve from each search engine. Another shortcoming of these measures is that they are independent of
the similarity threshold (or the number of documents desired by the user). As a result, a search engine will
always be ranked the same regardless of how many documents are desired, if the databases of these search
engines are fixed. This is in conflict with the following situation. For a given query, a search engine may
contain many moderately similar documents but very few or zero highly similar documents. In this case,
a good measure should rank the search engine high if a large number of moderately similar documents are
desired and rank the search engine low if only highly similar documents are desired.
A probabilistic model for distributed information retrieval is proposed in [2]. The method is more suitable
in a feedback environment, i.e., documents previously retrieved have been identified to be either relevant or
irrelevant.
In gGlOSS [6], a database of m distinct terms is represented by m pairs (f_i, W_i), where f_i is the number
of documents in the database that contain the ith term and W_i is the sum of the weights of the ith term over
all documents in the database, i = 1, ..., m. The usefulness of a search engine with respect to a given query in
gGlOSS is defined to be the sum of all document similarities with the query that are greater than a threshold.
This usefulness measure is less informative than our measure. For example, from a given sum of similarities of
documents in a database, we cannot tell how many documents are involved. On the other hand, our measure
can derive the measure used in gGlOSS. The representative of gGlOSS can be used to estimate the number
of useful documents in a database [7] and consequently, it can be used to estimate our measure. However,
the estimation methods used in gGlOSS are very different from ours. The estimation methods employed in
[6, 7] are based on two very restrictive assumptions. One is the high-correlation assumption (for any given
database, if query term j appears in at least as many documents as query term k, then every document
containing term k also contains term j) and the other is the disjoint assumption (for a given database, for
all term j and term k, the set of documents containing term j and the set of documents containing term k
are disjoint). Due to the restrictiveness of the above assumptions, the estimates produced by these two
methods are not accurate. Note that when the measure of similarity sum is used, the estimates produced by
the two methods in gGlOSS form lower and upper bounds to the true similarity sum. As a result, the two
methods are more useful when used together than when used separately. Unfortunately, when the measure
is the number of useful documents, the estimates produced by the two methods in gGlOSS no longer form
bounds to the true number of useful documents.
[25] proposed a method to estimate the number of useful documents in a database for the binary and
independent case. In this case, each document d is represented as a binary vector such that a 0 or 1 at the
ith position indicates the absence or presence of the ith term in d; and the occurrences of terms in different
documents are assumed to be independent. This method was later extended to the binary and dependent case
in [16], where dependencies among terms are incorporated. A substantial amount of information will be lost
when documents are represented by binary vectors. As a result, it is seldom used in practice. The estimation
method in [18] permits term weights to be non-binary. However, it utilizes the non-binary information in a
way that is very different from our subrange-based statistical method to be described in Section 3.2 of this
paper.
3 A New Method for Usefulness Estimation
We present our basic method for estimating the usefulness of a search engine in Section 3.1. The basic method
allows the values of term weights to be any non-negative real numbers. Two assumptions are used by the
basic method: (1) the distributions of the occurrences of the terms in the documents are independent. In
other words, the occurrences of term i in the documents have no effect on the occurrences or non-occurrences
of another term, say term j, in the documents; (2) for a given database of a search engine, all documents
having a term have the same weight for the term. Under the two assumptions, the basic method can
accurately estimate the usefulness of a search engine. In Section 3.2, we apply a subrange-based statistical
method to remove the second assumption. The first assumption can also be removed by incorporating term
dependencies (co-variances) into the basic solution [18]. The problem of incorporating term dependencies
will be addressed in Section 6. We will see in Section 5 that very accurate usefulness estimates can be
obtained even with the term independence assumption.
3.1 The Basic Method
Consider a database D of a search engine with m distinct terms. Each document d in this database can
be represented as a vector d = (d_1, ..., d_m), where d_i is the weight (or significance) of the ith term t_i
in representing the document, 1 ≤ i ≤ m. Each query can be similarly represented. Consider query
q = (u_1, ..., u_m), where u_i is the weight of t_i in the query, 1 ≤ i ≤ m. If a term does not appear in the
query, then its corresponding weight will be zero in the query vector. The similarity between q and document
d can be defined as the dot product of their respective vectors, namely sim(q, d) = u_1 d_1 + · · · + u_m d_m.
Similarities are often normalized between 0 and 1. One common normalized similarity function is the Cosine
function [22] although other forms of normalization are possible, see for example [21].
With the basic method, database D is represented as m pairs {(p_1, w_1), ..., (p_m, w_m)}, where p_i is the
probability that term t_i appears in a document in D and w_i is the average of the weights of t_i in the set
of documents containing t_i. For a given query q = (u_1, ..., u_m), the database representative is used to
estimate the usefulness of D. Without loss of generality, we assume that only the first r u_i's are non-zero,
r ≤ m. Therefore, q becomes (u_1, ..., u_r, 0, ..., 0). This implies
that only the first r terms in each document in D need to be considered.
Consider the following generating function:
(p_1 X^{w_1 u_1} + (1 − p_1)) (p_2 X^{w_2 u_2} + (1 − p_2)) · · · (p_r X^{w_r u_r} + (1 − p_r))    (3)
where X is a dummy variable. The following proposition relates the coefficients of the terms in the above
function with the probabilities that documents in D have certain similarities with q.
Proposition 1. Let q and D be defined as above. If the terms are independent and the weight of term t_i, whenever present in a document, is w_i, which is given in the database representative (1 ≤ i ≤ r), then the coefficient of X^s in function (3) is the probability that a document in D has similarity s with q.
Proof: Clearly, s must be the sum of zero or more w_i u_i's, with each w_i u_i being used at most once. Different combinations of w_i u_i's may add up to s. Without loss of generality, let us assume that there are two such combinations. Suppose s = w_{i1} u_{i1} + ... + w_{ik} u_{ik} = w_{j1} u_{j1} + ... + w_{jl} u_{jl}. Then the probability that q has similarity s with a document d in D is the probability that d has either exactly the terms in {t_{i1}, ..., t_{ik}} or exactly the terms in {t_{j1}, ..., t_{jl}}. With the independence assumption about terms, the probability that d has exactly the terms in {t_{i1}, ..., t_{ik}} is P = (∏ p_{iv}) · (∏ (1 − p_y)), where the first product is over all v in {1, ..., k} and the second product is over all y in {1, 2, ..., r} − {i1, ..., ik}. Similarly, the probability that d has exactly the terms in {t_{j1}, ..., t_{jl}} is Q = (∏ p_{jv}) · (∏ (1 − p_y)), where the first product is over all v in {1, ..., l} and the second product is over all y in {1, 2, ..., r} − {j1, ..., jl}. Therefore, the probability that d has either exactly the terms in {t_{i1}, ..., t_{ik}} or exactly the terms in {t_{j1}, ..., t_{jl}} is the sum of P and Q, which is the same as the coefficient of X^s in function (3).
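The product in function (3) can be expanded numerically by convolving one factor at a time. The sketch below is our own illustration (names are hypothetical); it assumes the representative pairs (p_i, w_i) and the non-zero query weights u_i are given, and it returns a mapping from each achievable similarity s to the coefficient of X^s, i.e., the estimated probability that a document has similarity s with q.

```python
from collections import defaultdict

def expand_generating_function(pairs, query_weights):
    """pairs: list of (p_i, w_i); query_weights: list of u_i (same length).
    Returns {similarity s: probability}, the coefficients of function (3)."""
    coeffs = {0.0: 1.0}  # start with the constant polynomial 1
    for (p, w), u in zip(pairs, query_weights):
        new_coeffs = defaultdict(float)
        for s, prob in coeffs.items():
            new_coeffs[s + w * u] += prob * p      # term present in the document
            new_coeffs[s] += prob * (1.0 - p)      # term absent
        coeffs = dict(new_coeffs)
    return coeffs
```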
Example 1 Let q be a query with three terms, all with weights equal to 1, i.e., q = (1, 1, 1) (for ease of understanding, the weights of terms in the query and documents are not normalized). Suppose database D has five documents; only the components of their vector representations corresponding to the query terms need to be considered. Namely, the first document has query term 1 and the corresponding weight is 3. Other document vectors can be interpreted similarly. From the five
documents in D, we have (p_1, w_1) = (0.6, 2), since 3 out of 5 documents have term 1 and the average weight of term 1 in such documents is 2. Similarly, (p_2, w_2) = (0.2, 1) and (p_3, w_3) = (0.4, 2). Therefore, the corresponding generating function is:

(0.6 X^2 + 0.4)(0.2 X + 0.8)(0.4 X^2 + 0.6).    (4)
Consider the coefficient of X^2 in the function. Clearly, it is the sum of p_1 (1 − p_2)(1 − p_3) and (1 − p_1)(1 − p_2) p_3. The former is the probability that a document in D has exactly the first query term, and the corresponding similarity with q is w_1 (= 2). The latter is the probability that a document in D has exactly the last query term, and the corresponding similarity is w_3 (= 2). Therefore, the coefficient of X^2, namely 0.416, is the estimated probability that a document in D has similarity 2 with q.
After generating function (3) has been expanded and the terms with the same X^s have been combined, we obtain

a_1 X^{b_1} + a_2 X^{b_2} + ... + a_v X^{b_v}.    (5)

We assume that the terms in (5) are listed in descending order of the exponents, i.e., b_1 > b_2 > ... > b_v. By Proposition 1, a_i is the probability that a document in D has similarity b_i with q. In other words, if database D contains n documents, then n · a_i is the expected number of documents that have similarity b_i with query q. For a given similarity threshold T, let C be the largest integer to satisfy b_C > T. Then, the NoDoc measure of D for query q based on threshold T, namely, the number of documents whose similarities with query q are greater than T, can be estimated as:

est_NoDoc(T, q, D) = n · (a_1 + a_2 + ... + a_C) = n · Σ_{i=1}^{C} a_i.    (6)
Note that n · a_i · b_i is the expected sum of the similarities of those documents whose similarities with the query are b_i. Thus, n · Σ_{i=1}^{C} a_i b_i is the expected sum of the similarities of those documents whose similarities with the query are greater than T. Therefore, the AvgSim measure of D for query q based on threshold T, namely, the average similarity of those documents in database D whose similarities with q are greater than T, can be estimated as:

est_AvgSim(T, q, D) = (Σ_{i=1}^{C} a_i b_i) / (Σ_{i=1}^{C} a_i).    (7)

Since both NoDoc and AvgSim can be estimated from the same expanded expression (5), estimating both of them requires little additional effort in comparison to estimating only one of them.
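Given the expanded coefficients a_i and exponents b_i of expression (5), formulas (6) and (7) follow directly. A small sketch (continuing the hypothetical helper introduced after Proposition 1) could look as follows; applied to the expansion in Example 2 below, it reproduces the estimates reported there.

```python
def estimate_usefulness(coeffs, n_docs, threshold):
    """coeffs: {similarity b_i: probability a_i}; n_docs: n; threshold: T.
    Returns (est_NoDoc, est_AvgSim) according to formulas (6) and (7)."""
    kept = [(b, a) for b, a in coeffs.items() if b > threshold]
    prob_sum = sum(a for _, a in kept)
    est_nodoc = n_docs * prob_sum
    # AvgSim is undefined (left blank) when no similarity exceeds the threshold
    est_avgsim = (sum(a * b for b, a in kept) / prob_sum) if prob_sum > 0 else None
    return est_nodoc, est_avgsim
```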
Example 2 (Continuing Example 1). When the generating function (4) is expanded, we have:

0.048 X^5 + 0.192 X^4 + 0.104 X^3 + 0.416 X^2 + 0.048 X + 0.192.    (8)

From formula (6), we have est_NoDoc(3, q, D) = 5 · (0.048 + 0.192) = 1.2, and from formula (7), est_AvgSim(3, q, D) = (0.048 · 5 + 0.192 · 4)/(0.048 + 0.192) = 4.2. It is interesting to note that the actual NoDoc is NoDoc(3, q, D) = 1, since only the fourth document in D has a similarity (the similarity is 4) higher than 3 with q, and the actual AvgSim is AvgSim(3, q, D) = 4. The second and the third columns of Table 1 list the true usefulness of D with respect to q and different T's. Note that in Table 1, the NoDoc and AvgSim values are obtained when the estimated similarities are strictly greater than the threshold T. When NoDoc(T, q, D) = 0, no value for AvgSim is available. In this case, the corresponding entry under AvgSim will be left blank. The remaining columns
list the estimated usefulness based on different methods. The fourth and the fifth columns are for our basic
method. The sixth and the seventh columns are for the estimation method based on the high-correlation case,
and the eighth and ninth columns are for the estimation method for the disjoint case, which are proposed
in [6, 7]. It can be observed that the estimates produced by the new method approximate the true values
better than those given by the methods based on the high-correlation and the disjoint assumptions.
[Table 1: Estimated Usefulness versus True Usefulness. Columns: T; then NoDoc and AvgSim under each of True, Basic Method, High-Correlation, and Disjoint.]
Furthermore, the distribution of the exact similarities between q and the documents in D can be expressed by the following function (analogous to expression (8)):

0 · X^5 + 0.2 X^4 + ...    (9)

That is, the probability of a document having similarity 5 with q is zero and the probability of a document having similarity 4 with q is 0.2, since no document in D has similarity 5 with q and exactly one document in D has similarity 4 with q. The other terms can be interpreted similarly. Notice the good match between the corresponding coefficients in (8) and (9).
3.2 Subrange-based Estimation for Non-uniform Term Weights
One assumption used in the above basic solution is that all documents having a term have the same weight
for the term. This is not realistic. In this subsection, we present a subrange-based statistical method to
overcome the problem.
Consider a term t. Let w and σ be the average and the standard deviation of the weights of t in the set of documents containing t, respectively. Let p be the probability that term t appears in a document in the database. Based on the basic solution in Section 3.1, if term t is specified in a query, then the following polynomial is included in the probability generating function (see Expression (3)):

p X^{u·w} + (1 − p),    (10)

where u is the weight of the term in the user query. This expression essentially assumes that the term t has a uniform weight of w for all documents containing the term. In reality, the term weights may have a non-uniform distribution among the documents having the term. Let these weights in non-ascending order of magnitude be w_1 ≥ w_2 ≥ ... ≥ w_k, where k is the number of documents having the term and n is the total number of documents in the database. Suppose we partition the weight range of t into 4 subranges, each containing 25% of the term weights, as follows. The first subrange contains the weights from w_1 to w_s, where s = 25% · k; the second subrange contains the weights from w_{s+1} to w_t, where t = 50% · k; the third subrange contains the weights from w_{t+1} to w_v, where v = 75% · k; and the last subrange contains the weights from w_{v+1} to w_k. In the first subrange, the median is the (25% · k/2)-th weight of the term weights in the subrange and is wm1, where m1 = 12.5% · k; similarly, the second, the third and the fourth subranges have median weights wm2, wm3 and wm4, respectively, where m2 = 37.5% · k, m3 = 62.5% · k and m4 = 87.5% · k.
Then, the distribution of the term weights of t may be approximated by the following distribution: the term has a uniform weight of wm1 for the first 25% of the k documents having the term, another uniform weight of wm2 for the next 25% of the k documents, another uniform weight of wm3 for the next 25% of the documents and another uniform weight of wm4 for the last 25% of the documents.
With the above weight approximation, for a query containing term t, polynomial (10) in the generating function can be replaced by the following polynomial:

p_1 X^{u·wm1} + p_2 X^{u·wm2} + p_3 X^{u·wm3} + p_4 X^{u·wm4} + (1 − p),    (11)

where p_j is the probability that term t occurs in a document and has a weight of wmj, 1 ≤ j ≤ 4. Since 25% of those documents having term t are assumed to have a weight of wmj for t, p_j = p/4 for each j.
Essentially, polynomial (11) is obtained from polynomial (10) by decomposing the probability p that a document has the term into the 4 probabilities p_1, ..., p_4, corresponding to the 4 subranges. A weight of term t in the first subrange, for instance, is assumed to be wm1 and the corresponding exponent of X in polynomial (11) is the similarity due to this term t, which equals u · wm1, taking into consideration the query term weight u.
Since it is expensive to find and to store wm1, wm2, wm3 and wm4, they are approximated by assuming that the weights of the term are normally distributed with mean w and standard deviation σ. Then wmj = w + c_j · σ, where c_j is a constant that can be looked up from a table for the standard normal distribution. It should be noted that these constants are independent of individual terms and therefore one set of such constants is sufficient for all terms.
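Under the normal approximation the four medians are simply w + c_j · σ with fixed constants c_j (the z-values of the 87.5, 62.5, 37.5 and 12.5 percentiles, as used in Example 3). A minimal sketch with those constants hard-coded:

```python
# z-values of the 87.5, 62.5, 37.5 and 12.5 percentiles of the standard normal
SUBRANGE_CONSTANTS = [1.15, 0.32, -0.32, -1.15]

def subrange_medians(avg_weight, std_dev):
    """Median weights wm1..wm4 of the four equal subranges of a term's weights."""
    return [avg_weight + c * std_dev for c in SUBRANGE_CONSTANTS]
```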
Example 3 Suppose the average weight of a term t is w (for ease of presentation, assume that term weights are not normalized) and the standard deviation of the weights of the term is 1.3. From a table of the standard normal distribution, c_1 = 1.15, c_2 = 0.32, c_3 = −0.32 and c_4 = −1.15. Note that these constants are independent of the term. Thus, wm1 = w + 1.15 × 1.3, wm2 = w + 0.32 × 1.3, wm3 = w − 0.32 × 1.3 and wm4 = w − 1.15 × 1.3. Suppose the probability that a document in the database has the term t is 0.32. Then p_j = 0.32/4 = 0.08, j = 1, ..., 4. Suppose the weight of the term t in the query is 2. Then, the polynomial for the term t in the generating function is

0.08 X^{2·wm1} + 0.08 X^{2·wm2} + 0.08 X^{2·wm3} + 0.08 X^{2·wm4} + 0.68.
In general, it is not necessary to divide the weights of the term into 4 equal subranges. For example, we can divide the weights into 5 subranges of different sizes, yielding a polynomial of the form

p_1 X^{u·wm1} + p_2 X^{u·wm2} + ... + p_5 X^{u·wm5} + (1 − p),

where p_i represents the probability that the term has a weight in the ith subrange, and wmi is the median weight of the term in the ith subrange.
In the experiments we report in Section 5, a specific six-subrange partition is used, with a special subrange (the highest subrange) containing the maximum normalized weight only (see Section 5). The normalized weight
of a term t in a document is the weight of the term in the document divided by the norm of the document.
The maximum normalized weight of t in the database is the largest normalized weight among all documents
containing t in the database. The probability for the highest subrange is set to be 1 divided by the number of
documents in the database. This probability may be an underestimate. However, since different documents
usually have different norms and therefore there is usually only one document having the largest normalized
weight, the estimated probability is reasonable.
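Putting the pieces together, the construction of the per-term polynomial with a dedicated highest subrange for the maximum normalized weight (probability 1/n) can be sketched as follows. This is only an illustration with hypothetical names: for simplicity it splits the remaining probability evenly over the lower subranges, whereas the paper assigns each lower subrange the probability mass of its own percentile range (cf. Example 4 below).

```python
def term_polynomial(p, avg_w, std_w, max_w, n_docs, query_weight,
                    constants=(0.32, -0.32, -1.15)):
    """Return [(exponent, probability), ...] for one query term.
    The highest subrange holds only the maximum normalized weight max_w with
    probability 1/n_docs.  NOTE: the remaining probability p - 1/n_docs is
    split evenly here for brevity; the paper uses per-subrange percentile
    masses instead.  Assumes p >= 1/n_docs."""
    terms = [(query_weight * max_w, 1.0 / n_docs)]
    share = (p - 1.0 / n_docs) / len(constants)
    for c in constants:
        terms.append((query_weight * (avg_w + c * std_w), share))
    terms.append((0.0, 1.0 - p))  # term absent from the document
    return terms
```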
Example 4 (Continuing Example 3; term weights are not normalized to facilitate ease of reading.) Suppose that the number of documents in the database is 100 and the maximum weight of the term t is mw. Since the probability that the term occurs in the documents is 0.32, the number of documents having the term is 32. We use 5 subranges, which are obtained by splitting the first subrange in Example 3 into two subranges, and the first new subrange is covered by the maximum weight. Since we assume that there is only one document having the largest term weight, the probability that a document has the largest term weight is 1/100 = 0.01. Among the documents having the term, the percentage of documents having the largest term weight is 1/32 × 100% = 3.125%. This document occupies the first new subrange. The second new subrange contains term weights from the 75 percentile to the (100 − 3.125) = 96.875 percentile. The probability associated with the second new subrange is 0.08 − 0.01 = 0.07. The median for the second new subrange is the (75 + 96.875)/2 ≈ 85.94 percentile. By looking up a standard normal distribution table, the constant for this percentile is about 1.08, so the median weight of the second new subrange is w + 1.08 × 1.3.
The next 3 subranges are identical to the last 3 subranges given in Example 3, that is, c_2 = 0.32, c_3 = −0.32 and c_4 = −1.15. Assume that the query term weight is 2. The polynomial for the term t in the generating function is

0.01 X^{2·mw} + 0.07 X^{2·(w + 1.08×1.3)} + 0.08 X^{2·(w + 0.32×1.3)} + 0.08 X^{2·(w − 0.32×1.3)} + 0.08 X^{2·(w − 1.15×1.3)} + 0.68.
Note that the subrange-based method needs to know the standard deviation of the weights for each term. As a result, a database with m terms is now represented as m triplets {(p_i, w_i, σ_i)}, where p_i is the probability that term t_i appears in a document in the database, w_i is the average weight of term t_i in all documents containing the term and σ_i is the standard deviation of the weights of t_i in all documents containing t_i. Furthermore, if the maximum normalized weight of each term is used by the highest subrange, then the database representative will contain m quadruplets {(p_i, w_i, σ_i, mw_i)}, with mw_i being the maximum normalized weight for term t_i. Our experimental results indicate that the maximum normalized weight is a critical parameter that can drastically improve the estimation accuracy of search engine usefulness. In the following subsection, we elaborate why the maximum normalized weight is a critically important piece of information for correctly identifying useful search engines.
3.2.1 Single-term Query
Consider a query q that contains a single term t. Suppose the similarity function is the widely used Cosine function. Then the normalized query has a weight of 1 for the term t, and the similarity of a document d with the query q using the Cosine function is w' = (1 · w)/|d| = w/|d|, where w' is the normalized weight of the term t in the document d, |d| is the norm of d, and w is the weight of the term in the document before normalization. Consider a database D_1 that contains documents having term t. The component of the database representative concerning term t will contain the maximum normalized weight mw_1, if mw_1 is the largest normalized weight of term t among all documents in database D_1. By our discussion just before this subsection, the highest subrange contains the maximum normalized weight only and its probability is set to be 1 divided by the number of documents in the database. The generating function for the query q for the database D_1 therefore contains the term (1/n_1) X^{mw_1}, where n_1 is the number of documents in this database, and every other term in the function has an exponent smaller than mw_1. For a different database D_i, i ≠ 1, having maximum normalized term weight mw_i for term t, the generating function for the same query for database D_i is obtained by replacing mw_1 by mw_i in the above expression (with n_1 being modified accordingly). Suppose
mw_1 is the largest maximum normalized term weight of term t among all databases and mw_2 is the second largest, with mw_1 > mw_2. Suppose the threshold of retrieval T is set such that mw_1 > T ≥ mw_2. Then the estimated number of documents with similarities greater than T in database D_1 is at least n_1 · (1/n_1) = 1. Because mw_2 ≤ T and mw_j ≤ mw_2 (j ≠ 1, 2), the estimated numbers of documents with similarities greater than T in database D_2 and other databases are zero. Thus, database D_1 is the only database which can be identified by our estimation method as having documents with similarities greater than T for the single term query. This identification is correct, because documents with normalized term weight mw_1 only appear in database D_1 and documents in other databases have similarities less than or equal to mw_2. In general, if the maximum normalized weights of term t in the databases are arranged in descending order mw_1 > mw_2 > ... > mw_M, where M is the number of databases, and the threshold T is set such that mw_s > T ≥ mw_{s+1}, then databases D_1, ..., D_s will be identified by our estimation method to be searched. This identification is consistent with the ideal situation, where these selected databases contain documents with similarities greater than T and other databases do not have the desired documents (with similarities greater than T). Thus, our method guarantees the correct identification of useful search
engines for single term queries. The same argument applies to other similarity functions such as [21]. Several
recent studies indicate that 30% or higher percentages of all Internet queries are single-term queries [10, 11].
Thus, for a large percentage of all Internet queries, our method guarantees optimal identification when the
maximum normalized weight of each term is utilized.
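For single-term queries, the guarantee amounts to a very simple selection rule once each database representative stores the maximum normalized weight of the query term: search exactly those databases whose stored maximum exceeds the threshold. A hypothetical sketch:

```python
def select_databases_single_term(max_weights, threshold):
    """max_weights: {database_name: maximum normalized weight of the query term}.
    Returns the databases that can contain documents with similarity > threshold."""
    return [db for db, mw in max_weights.items() if mw > threshold]

# e.g., with T chosen so that mw_1 > T >= mw_2, only the database holding mw_1 is selected
print(select_databases_single_term({"D1": 0.9, "D2": 0.7, "D3": 0.5}, 0.8))
```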
4 Discussion on Applicability
We now discuss several issues concerning the applicability of the new method.
4.1 Scalability
If the representative of a database used by an estimation method has a large size relative to that of the
database, then this estimation method will have a poor scalability as such a method is difficult to scale to
thousands of text databases. Suppose each term occupies four bytes. Suppose each number (probability,
average weight, standard deviation and maximum normalized weight) also occupies 4 bytes. Consider a
database with m distinct terms. For the subrange-based method, m probabilities, m average weights, m
standard derivations and m maximum normalized weights are stored in the database representative, resulting
in a total storage overhead of 20 m bytes. The following table shows, for several document collections,
the percentage of the sizes of the database representatives based on our approach relative to the sizes of the
original document collections.
collection name collection size #distinct terms representative size percentage
FR 33315 126258 1263 3.79
In the above table, all sizes are in pages of 2 KB. The statistics of the second and third columns of the three
document collections, namely, WSJ (Wall Street Journal), FR (Federal Register) and DOE (Department of
Energy), were collected by ARPA/NIST [8]. The above table shows that for the three databases, the sizes of
the representatives range from 3.85% to 7.40% of the sizes of the actual databases. Therefore, our approach
is fairly scalable. Also, typically, the percentage of space needed for a database representative relative to
the database size will decrease as the database grows. This is because when new documents are added to a
large database, the number of distinct terms either remains unchanged or grows slowly.
In comparison to the database representative used in gGlOSS, the size of the database representative for
our approach is 67% larger (due to storing the standard deviation and the maximum normalized weight for
each term). The following methods can be used to substantially reduce the size of the database representative.
There are several ways to reduce the size of a database representative. Instead of using 4 bytes for
each number (probability, average weight, standard deviation, maximum normalized weight), a one-byte
number can be used to approximate it as follows. Consider probability first. Clearly, all probabilities are
in interval [0, 1]. Using one byte, 256 different values can be represented. Based on this, interval [0, 1]
is partitioned into 256 equal-length intervals. Next, the average of the probabilities falling into each small
interval can be computed. Finally, we map each original probability to the average of its corresponding
interval. The probability 0.15, for example, lies in the 39-th interval ([0:1484; 0:1523]). In the database
representative, this probability will be represented by the number 38 (using one byte). Suppose the average
of all probabilities in the 39-th interval is 0.1511. Then, 0.1511 will be used to approximate the probability
0.15. Similar approximation can also be applied to average weights, maximum normalized weights and
standard deviations. Our experimental results show (see Section 5) that the approximation has negligible
impact on the estimation accuracy of database usefulness. When the above scheme is used, the size of the
representative of a database with m distinct terms drops to 8 m bytes from 20 m bytes. As a result, the
sizes of the database representatives for the above databases will be about 1.5% to 3% of the database sizes.
Further size reduction is possible by using 4 bits for each weight, maximum normalized weight and standard
deviation. Our experimental results show (also in Section 5) that good accuracy can still be obtained with
the reduction. When 4 bits are used for each weight, maximum normalized weight and standard deviation
while each probability still uses one byte, the size of the representative of a database with m distinct terms
drops to 6.5 m bytes, reducing the above percentages further to 1.23% to 2.4%. As mentioned above, for
larger databases, the database representatives are likely to occupy even lower percentages of space.
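The one-byte approximation described above can be sketched as follows (the helper names are ours). Each value in [0, 1] is mapped to one of 256 equal-length intervals; the stored byte is the interval index, and the decoded value is the average of the original values that fell into that interval.

```python
def quantize(values, num_levels=256, lo=0.0, hi=1.0):
    """Return (codes, codebook): one code per value plus per-interval averages."""
    width = (hi - lo) / num_levels
    buckets = [[] for _ in range(num_levels)]
    codes = []
    for v in values:
        idx = min(int((v - lo) / width), num_levels - 1)
        codes.append(idx)
        buckets[idx].append(v)
    # decoded value: interval average if populated, otherwise interval midpoint
    codebook = [sum(b) / len(b) if b else lo + (i + 0.5) * width
                for i, b in enumerate(buckets)]
    return codes, codebook

codes, codebook = quantize([0.15, 0.151, 0.9])
print(codes[0], round(codebook[codes[0]], 4))   # 0.15 falls in interval index 38
```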
4.2 Hierarchical Organization of Representatives
If the number of search engines is very large, the representatives can be clustered to form a hierarchy of
representatives. Each query is first compared against the highest level representatives. Only representatives
whose ancestor representatives have been estimated to have a large number of very similar documents will
be examined further. As a result, most database representatives will not be compared against the query.
A similar idea has also been suggested by others [6].
Suppose P_1, ..., P_v are the representatives of v local databases D_1, ..., D_v. A higher level representative P above these representatives in the hierarchy can be considered as a representative of a database D, where D is combined from D_1, ..., D_v by a union. We now discuss how to obtain P from P_1, ..., P_v. We assume that databases D_1, ..., D_v are pair-wise disjoint. Let T_i be the set of terms in D_i, 1 ≤ i ≤ v. For a given term t, let p_i(t) be the probability that a document in D_i contains t; w_i(t) be the average weight of t in all documents in D_i that contain t; mw_i(t) be the maximum normalized weight of t in all documents in D_i; and σ_i(t) be the standard deviation of all positive weights of t in D_i. Let p(t), w(t), mw(t) and σ(t) be the probability, average weight, maximum normalized weight, and standard deviation of term t in the new representative P, respectively. We now discuss how to obtain p(t), w(t), mw(t) and σ(t). To simplify the notation, assume that p_i(t) = 0, w_i(t) = 0 and σ_i(t) = 0 if t is not a term in T_i, 1 ≤ i ≤ v.
The first three quantities, namely p(t), w(t) and mw(t), can be obtained easily:

p(t) = (Σ_{i=1}^{v} p_i(t) · n_i) / (Σ_{i=1}^{v} n_i),  w(t) = (Σ_{i=1}^{v} p_i(t) · n_i · w_i(t)) / (Σ_{i=1}^{v} p_i(t) · n_i),  mw(t) = max_i { mw_i(t) },

where n_i is the number of documents in D_i.
We now compute σ(t). Let k_i(t) be the number of documents containing t in D_i. Note that k_i(t) = p_i(t) · n_i. Let d_ij denote the weight of t in the jth document in D_i. From the definition of σ_i(t), we have

σ_i^2(t) = (1/k_i(t)) Σ_j d_ij^2 − w_i^2(t),    (12)

which gives Σ_j d_ij^2 = k_i(t) · (σ_i^2(t) + w_i^2(t)). Based on the definition of standard deviation, we have

σ(t) = sqrt( (Σ_i Σ_j d_ij^2) / (Σ_i k_i(t)) − w^2(t) ).

From Equation (12), we have

σ(t) = sqrt( (Σ_i k_i(t) · (σ_i^2(t) + w_i^2(t))) / (Σ_i k_i(t)) − w^2(t) ).

The above derivations show that each quantity in the representative P can be computed from the quantities in the representatives at the next lower level.
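The combination formulas can be sketched directly. The function and field names below are our own; the code merges the per-term statistics (p_i, w_i, mw_i, σ_i) of disjoint child databases into the parent statistics (p, w, mw, σ).

```python
import math

def merge_term_stats(children):
    """children: list of (n_i, p_i, w_i, mw_i, sigma_i) for one term t,
    with p_i = w_i = sigma_i = 0 when t does not occur in D_i.
    Assumes the term occurs in at least one child database.
    Returns (p, w, mw, sigma) for the union database."""
    total_docs = sum(n for n, _, _, _, _ in children)
    k = sum(n * p for n, p, _, _, _ in children)          # documents containing t
    p = k / total_docs
    w = sum(n * pi * wi for n, pi, wi, _, _ in children) / k
    mw = max(mwi for _, _, _, mwi, _ in children)
    second_moment = sum(n * pi * (si ** 2 + wi ** 2)
                        for n, pi, wi, _, si in children) / k
    sigma = math.sqrt(max(second_moment - w ** 2, 0.0))
    return p, w, mw, sigma
```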
4.3 Obtaining Database Representatives
To obtain the accurate representative of a database used by our method, we need to know the following
information: (1) the number of documents in the database; (2) the document frequency of each term in
the database (i.e., the number of documents in the database that contain the term); and (3) the weight of
each term in each document in the database. (1) and (2) are needed to compute the probabilities and (3)
is needed to compute average weights, maximum normalized weights and standard deviations. (1) and (2)
can usually be obtained with ease. For example, when a query containing a single term is submitted to a
search engine, the number of hits returned is the document frequency of the term. Many other proposed
approaches for ranking text databases also use the document frequency information [3, 6, 27]. Most recently,
the STARTS proposal for Internet metasearching [5] suggests that each database (source) should provide
document frequency for each term.
We now discuss how to obtain information (3). In an Internet environment, it may not be practical to
expect a search engine to provide the weight of each term in each document in the search engine. We propose
the following techniques for obtaining the average term weights, their standard deviations and the maximum
normalized term weights.
1. Use sampling techniques in statistics to estimate the average weight and the standard deviation for
each term. When a query is submitted to a search engine, a set S of documents will be returned as
the result of the search. For each term t in S and each document d in S, the term frequency of t in d
(i.e., the number of times t appears in d) can be computed (the STARTS proposal even suggests that
each search engine provides the term frequency and weight information for each term in each returned
document [5]). As a result, the weight of t in d can be computed. If the weights of t in a reasonably
large number of documents can be computed (note that more than one query may be needed), then an
approximate average weight and an approximate standard deviation for term t can be obtained. Since
the returned documents for each query may contain many different terms, the above estimation can
be carried out for many terms at the same time.
2. Obtain the maximum normalized weight (with respect to the global similarity function used in the
metasearch engine) for each term t directly as follows. Submit t as a single term query to the local search
engine which retrieves documents according to a local similarity function. Two cases are considered:
Case 1: The global similarity function is known to be the same as the similarity function in the search
engine. In this case, if the search engine returns a similarity for each retrieved document, then the
similarity returned for the first retrieved document is the maximum normalized weight for the term; if
the search engine does not return similarity explicitly, then the first retrieved document is downloaded
to compute its similarity with the one-term query and this similarity will be the maximum normalized
weight for the term.
Case 2: The global similarity function and the local similarity function are different or the local
similarity function is unknown. In this case, the first few retrieved documents are downloaded to
compute the global similarities of these documents with the one-term query. The largest similarity
is then tentatively used as the maximum normalized weight, which may need to be adjusted when
another document from the same search engine is found to have a higher weight for the term (with
respect to the global similarity function).
Although using sampling techniques can introduce inaccuracy to the statistical data (e.g., average weight
and standard deviation), our usefulness estimation method is quite robust with respect to the inaccuracy as a
4-bit approximation of each value can still produce reasonably accurate usefulness estimation. Furthermore,
a recent study indicates that using sampling queries is capable of generating decent statistical information
for terms [4].
5 Experimental Results
Three databases, D1, D2 and D3, and a collection of 6,234 queries are used in the experiment. D1, containing
761 documents, is the largest among the 53 databases that are collected at Stanford University for testing
the gGlOSS system. The 53 databases are snapshots of 53 newsgroups at the Stanford CS Department
news host. D2, containing 1,466 documents, is obtained by merging the two largest databases among the 53
databases. D3, containing 1,014 documents, is obtained by merging the 26 smallest databases among the 53
databases. As a result, the documents in D3 are more diverse than those in D2 and the documents in D2
are more diverse than those in D1. The queries are real queries submitted by users to the SIFT Netnews
server [23, 6]. Since most user queries in the Internet environment are short [1, 14], only queries with no
more than 6 terms are used in our experiments. Approximately 30% of the 6,234 queries in our experiments
are single-term queries.
For all documents and queries, non-content words such as "the", "of", etc. are removed. The similarity
function is the Cosine function. This function guarantees that the similarity between any query and document
with non-negative term weights will be between 0 and 1. As a result, no threshold larger than 1 is
needed.
We first present the experimental results when the database representative is represented by a set of quadruplets (w_i, p_i, σ_i, mw_i) (average normalized weight, probability, standard deviation, maximum normalized weight) and each number is the original number (i.e., no approximation is used). The results will be
compared against the estimates generated by the method for the high-correlation case and our previous
method proposed in [18]. (The method in [18] is similar to the basic method described in Section 3.1 of
this paper except that it utilizes the standard deviation of the weights of each term in all documents to
dynamically adjust the average weight and probability of each query term according to the threshold used
for the query. Please see [18] for details. No experimental results for the method for the disjoint case [6] will
be reported here as we have shown that the method for the high-correlation case performs better than that
for the disjoint case [18].) We then present the results when the database representative is still represented
by a set of quadruplets but each original number is approximated either by a one-byte number or a 4-bit
number. This is to investigate whether our estimation method can tolerate certain degree of inaccuracy on
the numbers used in the database representative. These experiments use six subranges for our subrange-
based method. The first subrange contains only the maximum normalized term weight; the other subranges
have medians at 98 percentile, 93.1 percentile, 70 percentile, 37.5 percentile and 12.5 percentile, respectively.
Note that narrower subranges are used for weights that are large because those weights are often more important
for estimating database usefulness, especially when the threshold is large. Finally, we present the
results when the database representative is represented by a set of triplets (w_i, p_i, σ_i) and each number is the original number. In other words, the maximum normalized weight is not directly obtained but is estimated
to be the 99.9 percentile from the average weight and the standard deviation. The experimental results show
the importance of maximum normalized weights in the estimation process. All other medians are the same.
Using Quadruplets and Original Numbers
Consider database D1. For each query and each threshold, four usefulnesses are obtained. The first is the
true usefulness obtained by comparing the query with each document in the database. The other three are
estimated based on the database representatives and estimation formulas of the following methods: (1) the
method for the high-correlation case; (2) our previous method [18]; and (3) our subrange-based method with
the database representative represented by a set of quadruplets and each number being the original number.
All estimated usefulnesses are rounded to integers. The experimental results for D1 are summarized in Table
2.
[Table 2: Comparison of Different Estimation Methods Using D1. Columns: T, U; then match/mismatch, d-N, d-S for each of the high-correlation method, our previous method, and the subrange-based method.]
In Table 2, T is the threshold and U is the number of queries that identify D1 as useful (D1 is useful to a query if there is at least one document in D1 which has similarity greater than T with the query, i.e., the actual NoDoc is at least 1). For each threshold, U of the 6,234 queries identify D1 as useful. The comparison of the different approaches is based on the following three criteria.
For a given threshold, "match" reports among the queries that identify D1 as useful
based on the true NoDoc, the number of queries that also identify D1 as useful based on the estimated
"mismatch" reports the number of queries that identify D1 as useful based on the estimated
NoDoc but in reality D1 is not useful to these queries based on the true NoDoc. For example, consider the "match/mismatch" column using the method for the high-correlation case. An entry of 296/35 means that out of the 1,474 queries that identify D1 as useful based on the true NoDoc, 296 queries also identify D1 as useful based on the estimated NoDoc by the high-correlation approach; and there are also 35 queries that identify D1 as useful based on the high-correlation approach but in reality, D1 is
not useful to these 35 queries. Clearly, a good estimation method should have its "match" close to "U"
and its "mismatch" close to zero for any threshold. Note that in practice, correctly identifying a useful
database is more significant than incorrectly identifying a useless database as a useful database. This
is because missing a useful database does more harm than searching a useless database. Therefore, if
estimation method A has a much larger "match" component than method B while A's "mismatch" component is not significantly larger than B's "mismatch" component, then A should be considered to
be better than B.
Table
2 shows that the subrange-based approach is substantially more accurate than our previous
method [18] which in turn is substantially more accurate than the high-correlation approach under the
"match/mismatch" criteria. In fact, for thresholds between 0.1 and 0.4, the accuracy of the subrange-
based method is 91% or higher for the "match" category.
d-N: For each threshold T, the "d-N" (for "difference in NoDoc") column for a given estimation method
indicates the average difference between the true NoDoc and the estimated NoDoc over the queries
that identify D1 as useful based on the true NoDoc. For example, for the threshold discussed above, the average difference is over the 1,474 queries. The smaller the number in "d-N" is, the better the corresponding estimation
method is. Again, Table 2 shows that the subrange-based approach is better than our previous method
for most thresholds which in turn is much better than the high-correlation approach under the "d-N"
criteria.
d-S: For each threshold T, the "d-S" (for "difference in AvgSim") column for a given estimation method
indicates the average difference between the true AvgSim and the estimated AvgSim over the queries
that identify D1 as useful based on the true NoDoc. Again, the smaller the number in "d-S" is, the
better the corresponding estimation method is. Table 2 shows that the subrange-based approach is
substantially more accurate than the other two approaches for all thresholds.
The experimental results for databases D2 and D3 are summarized in Table 3 and Table 4, respectively.
From
Tables
2 to 4, the following observations can be made. First, the subrange-based estimation method
significantly outperformed the other two methods for each database under each criterion. Second, the "match"
components are best for database D1, and not as good for database D2 and for database D3. This is probably
due to the inhomogeneity of data in databases D2 and D3.
[Table 3: Comparison of Different Estimation Methods Using D2. Columns: T, U; then match/mismatch, d-N, d-S for each of the high-correlation method, our previous method, and the subrange-based method.]
[Table 4: Comparison of Different Estimation Methods Using D3. Columns: T, U; then match/mismatch, d-N, d-S for each of the high-correlation method, our previous method, and the subrange-based method.]
Using Quadruplets and Approximate Numbers
In Section 4.1, we proposed a simple method to reduce the size of a database representative by approximating
each needed number (such as average weight) using one byte or 4 bits. When the accuracies of the
parameter values are reduced from 4 bytes to 1 byte each, there is essentially no difference in performance
(see
Table
5 relative to Table 2). Table 6 lists the experimental results when all numbers are represented
by 4 bits except that each probability continues to use 1 byte, again for database D1. The results (compare
Tables
5 and 6 with Table 2) show that the drop in accuracy of estimation due to approximation is small
for each criteria.
[Table 5: Using One Byte for Each Number for D1. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 6: Using 4 Bits for Each Number (Each Probability Uses One Byte) for D1. Columns: match/mismatch, d-N, d-S per threshold T.]
Similar but slightly weaker results can be obtained for databases D2 and D3 (see Tables 7 and 8 relative to
Table
3 for D2 and Tables 9 and 10 relative to Table 4 for D3).
Using Triplets and Original Numbers
In Section 3.2.1, we discussed the importance of the maximum normalized weights for correctly identifying
useful databases, especially when single term queries are used. Since single term queries represent a large
fraction of all queries in the Internet environment, it is expected that the use of maximum normalized weights
[Table 7: Using One Byte for Each Number for D2. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 8: Using 4 Bits for Each Number (Each Probability Uses One Byte) for D2. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 9: Using One Byte for Each Number for D3. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 10: Using 4 Bits for Each Number (Each Probability Uses One Byte) for D3. Columns: match/mismatch, d-N, d-S per threshold T.]
will significantly improve the overall estimation accuracy for all queries. Among 6,234 queries used in our
experiments, 1,941 are single term queries. Table 11 shows the experimental results for database D1 when
the maximum normalized weights are not explicitly obtained. (Instead, it is assumed that for each term, the
normalized weights of the term in the set of documents containing the term satisfy a normal distribution and
therefore the maximum normalized weight is estimated to be the 99.9 percentile based on its average weight
and its standard deviation.) Comparing the results in Table 2 and those in Table 11, it is clear that the use of
maximum normalized weights can indeed improve the estimation accuracy substantially. Nevertheless, even
when estimated maximum normalized weights are used, the results based on the subrange-based approach
are still much better than those based on the high-correlation assumption and those obtained by our previous
method [18] . Similar conclusion can be reached when the results in Table 3 and Table 12 are compared and
when the results in Table 4 and Table 13 are compared.
[Table 11: Result for D1 When Maximum Weights Are Estimated. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 12: Result for D2 When Maximum Weights Are Estimated. Columns: match/mismatch, d-N, d-S per threshold T.]
[Table 13: Result for D3 When Maximum Weights Are Estimated. Columns: match/mismatch, d-N, d-S per threshold T.]
6 Combining Terms
In the subrange-based estimation method presented earlier, terms are assumed to be independently distributed
in the documents of the database. Although the overall experimental results reported in Section
5 are very good, there is some room for improvement at high retrieval thresholds (see Tables 2, 3 and 4 with threshold values 0.5 and 0.6). Thus, we propose the following scheme to incorporate the dependencies
of terms in the estimation process.
There are quite a few term dependency models in the information retrieval literature ( see, for example,
the tree-dependency model, the Bahadur-Lazarsfeld model and the generalized dependency model in [26].) In
[18], we employ the Bahadur-Lazarsfeld model to incorporate the dependencies into the estimation process.
That model is somewhat complicated. In addition, it does not make use of the maximum normalized term
weight. As the experimental results in the last section indicate, the maximum normalized term weight is a
critical parameter. Thus, the following approach is used instead.
Consider the distributions of terms t i and t j in a database of documents. Within the set of documents
having both terms, there is a document having the largest sum of the normalized term weight of t i and the
normalized term weight of t j . Let the largest sum be called the maximum normalized weight of the combined
term and be denoted by mnw ij . If terms t i and t j are combined into a single term, then the probability
that a document in the database has the maximum normalized weight of the combined term, mnw ij , can be
assumed to be 1/n, where n is the number of documents in the database. As pointed out earlier, it is unlikely
that another document in the database has the same maximum normalized weight under the combined term.
If the two terms were independently distributed in the documents of the database, then the probability that
a document in the database has the normalized sum of term weights mnw ij under the two terms t i and t j
can be estimated using the subrange-based estimation method. Specifically, a polynomial representing the
probability that a document has the maximum normalized term weight for t i , followed by the probabilities
that a document has certain percentiles of weights for term t i can be written (see Example 4). Similarly,
another such polynomial can be written for term t j . By multiplying these two polynomials together, the
desired probability can be estimated. The criterion for combining the two terms t_i and t_j into a single term is that the estimated probability under the term independence assumption is very different from 1/n and the maximum normalized weight of the combined term is higher than the maximum normalized
weight of each of the two individual terms. Since our aim is to estimate the similarities of the most similar
documents, the latter condition is to ensure that if the combined term is used, it will not lead to smaller
similarities. The former condition is implemented by computing the difference in absolute value between
1/n and the estimated probability and then comparing it to a pre-set threshold. If the difference exceeds the threshold, then the two terms should be combined. The difference for the term pair t_i and t_j is denoted by d_ij and is stored together with the combined term t_ij.
If the two terms are combined, then we obtain from the documents containing both terms the distribution
of the sum of the normalized weights of the two terms. From the distribution, we apply the subrange-based
estimation for the combined term. For a combined term t ij , we store the maximum normalized sum mnw ij ,
the average normalized sum, its standard deviation, its probability of occurrence and its difference d ij . The
last quantity is utilized to determine which term should be combined with a given term in a query and will
be explained later.
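The decision of whether to combine two terms can be sketched as follows (all names are hypothetical), reusing the kind of per-term polynomials built in Section 3.2: multiply the two polynomials under the independence assumption, read off the probability mass at or above the observed maximum combined weight mnw_ij, and compare it with the actual probability 1/n.

```python
def should_combine(poly_i, poly_j, mnw_ij, mw_i, mw_j, n_docs, diff_threshold):
    """poly_i, poly_j: [(exponent, probability), ...] for the two terms.
    Returns (combine?, d_ij), where d_ij = |1/n - estimated probability|."""
    if mnw_ij <= max(mw_i, mw_j):
        return False, 0.0
    # probability of reaching mnw_ij or more if the two terms were independent
    est = sum(pi * pj
              for ei, pi in poly_i
              for ej, pj in poly_j
              if ei + ej >= mnw_ij)
    d_ij = abs(1.0 / n_docs - est)
    return d_ij > diff_threshold, d_ij
```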
Example 5 Suppose that the user's query q is "computer algorithm", and that normalized term weights are used in this example.
Let the maximum normalized weights for the terms "computer" and "algorithm" be mw_1 and mw_2, respectively, and let the subrange-based polynomials for the two terms be constructed as described in Section 3.2.
Suppose that the maximum normalized weight of the combined term "computer algorithm" is mnw_12 = 0.825, which is greater than mw_1 and mw_2. By multiplying the above polynomials, the probability that a document has a total normalized weight (associated with these two terms) of mnw_12 or higher is 3.878 × 10^{-5}. This probability is based on the assumption that the two terms were independent. The actual probability is 1/761 ≈ 1.3 × 10^{-3} in this example. Since the estimated probability and the actual probability differ substantially, the two terms should be combined. The combined term occurs in 53 documents out of a total of 761 documents. Its average normalized weight is 0.352, and the standard deviation of the normalized weights is 0.203. Using the subrange-based method on the combined term, the first subrange contains the maximum normalized sum of 0.825 and the probability associated with this subrange is 0.0013. The second subrange has its median at the 98-th percentile. From the standard normal distribution table, the constant c for the 98-th percentile is 2.055. Thus, the median weight is 0.352 + 2.055 × 0.203 = 0.769. Note that the larger end point of the second subrange corresponds to the (100 − 1/53 × 100) ≈ 98.11 percentile. Since the median is at the 98th percentile, the width of the second subrange is 2 × (98.11 − 98) ≈ 0.23% (of the documents containing the combined term). Thus, the probability that the normalized weight of the combined term in a document lies in the second subrange (having median at the 98-th percentile) is 0.0023 × 53/761 ≈ 0.000158. The median weights and probabilities for the other subranges can be determined similarly. Thus, the polynomial for the combined term is

0.0013 X^{u·0.825} + 0.000158 X^{u·0.769} + ...,

where u is the weight of the combined term in the query and the terms for the remaining subranges are obtained in the same way.
In general, O(m^2) term pairs need to be tested for possible combination, where m is the number of terms.
When m is large, the testing process may become too time consuming. In order that the process can be
easily carried out, we restrict the terms to be query terms (i.e. terms appearing in previously submitted
queries) and each pair of terms to be in adjacent locations in a query. The latter condition is to simulate
phrases since the components of a phrase are usually in adjacent locations.
Given a query, we need to estimate the distribution of the similarities of the query with the documents
in the database, while taking into consideration that certain terms in the query may be combined. We shall
restrict a combined term to contain two individual terms only. It is essential to decide for a given term of
the query whether it is to be combined, and if the term is to be combined, which term should be combined
with it. Specifically, consider three adjacent terms t i , followed by t j and then followed by t k in the query.
If term t i has been combined with its preceding term, then it will not be combined with term t j (because
a phrase usually consists of two words and it is simpler to recognize phrases containing two words than
phrases containing three or more words); otherwise, check if the combined term t_ij exists. If the combined term t_ij exists, check if the combined term t_jk also exists. If both combined terms exist, then compare the differences d_ij and d_jk. The larger difference indicates which term should be combined with term t_j for this query. For example, if d_jk is larger than d_ij, then term t_j is combined with t_k and the distribution of the combined term should be used to estimate the distribution of the similarities of the documents with this query. If only one of the combined terms exists, then that combined term will be used. If neither of the two combined terms exists, then term t_j is not combined with any term.
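At query time, this pairing decision can be sketched as a single left-to-right pass over the query terms using the stored differences d_ij (the function and variable names here are hypothetical, and the pass is a greedy simplification of the rule described above):

```python
def pair_query_terms(terms, combined):
    """terms: query terms in order; combined: {(ti, tj): d_ij} for stored pairs.
    Returns a list of single terms and 2-term tuples covering the query."""
    result, i = [], 0
    while i < len(terms):
        cur = terms[i]
        nxt = terms[i + 1] if i + 1 < len(terms) else None
        nxt2 = terms[i + 2] if i + 2 < len(terms) else None
        d_cur = combined.get((cur, nxt)) if nxt else None
        d_next = combined.get((nxt, nxt2)) if nxt and nxt2 else None
        # combine cur with nxt unless the following pair has a larger stored difference
        if d_cur is not None and (d_next is None or d_cur >= d_next):
            result.append((cur, nxt))
            i += 2
        else:
            result.append(cur)
            i += 1
    return result
```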
Using this strategy to combine terms, we perform experiments on the same set of queries and the same
three databases D1, D2 and D3. The results are reported in Tables 14, 15 and 16. It is clear that the
combined-term method is better than the subrange-based method in the match/mismatch measure, especially
when the thresholds (0.5 and 0.6) are large. Close to optimal results are obtained.
Theoretically, it is possible to get better results by (1) combining three or more terms together and
(2) modifying the polynomial representing two terms when they are combined. It should be noted that the
polynomial for the combined term does not take into consideration the situations that exactly one of the
two terms occurs. It is possible to include those situations. However, that would require storing more
information. Similarly, the process of combining 3 or more terms into one is feasible but would introduce
complications. Since the simple combined-term method yields close to optimal results, it is not clear whether
it is worthwhile to complicate the estimation process.
[Table 14: Comparison of Different Estimation Methods Using D1. Columns: T, U; then match/mismatch, d-N, d-S for the subrange-based method and the combined-term method.]
[Table 15: Comparison of Different Estimation Methods Using D2. Columns: T, U; then match/mismatch, d-N, d-S for the subrange-based method and the combined-term method.]
[Table 16: Comparison of Different Estimation Methods Using D3. Columns: T, U; then match/mismatch, d-N, d-S for the subrange-based method and the combined-term method.]
7 Conclusions
In this paper, we introduced a search engine usefulness measure which is intuitive and easily understood by
users. We proposed a statistical method to estimate the usefulness of a given search engine with respect to
each query. Accurate estimation of the usefulness measure allows a metasearch engine to send queries to
only the appropriate local search engines to be processed. This will save both the communication cost and
the local processing cost substantially. Our estimation method has the following properties:
1. The estimation makes use of the number of documents desired by the user (or the threshold of retrieval),
unlike some other estimation methods which rank search engines without using the above information.
2. It guarantees that those search engines containing the most similar documents are correctly identified,
when the submitted queries are single-term queries. Since Internet users submit a high percentage of
such short queries, they can be sent to the correct search engines to be processed.
3. Experimental results indicate that our estimation methods are much more accurate than existing
methods in identifying the correct search engines to use, in estimating the number of potentially useful
documents in each database, and in estimating the average similarity of the most similar documents.
We intend to fine-tune our algorithm to yield even better results and to perform experiments involving
much larger and many more databases. Our current experiments are based on the assumption that term
weights satisfy the normal distribution. However, the Zipfian distribution [20] may model the weights more
accurately. We will also examine ways to further reduce the storage requirement for database representatives.
Acknowledgment
This research is supported by the following grants: NSF (IRI-9509253, CDA-9711582,
HRD-9707076), NASA (NAGW-4080, NAG5-5095) and ARO (NAAH04-96-1-0049, DAAH04-96-1-0278).
We are grateful to Luis Gravano and Hector Garcia-Molina of Stanford University for providing us with
the database and query collections used in [6].
References
Characterizing World Wide Web Queries.
A Probabilistic Model for Distributed Information Retrieval.
Searching Distributed Collections with Inference Networks.
Automatic Discovery of Language Models for Text Databases.
STARTS: Stanford Proposal for Internet Meta-Searching
Generalizing GlOSS to Vector-Space databases and Broker Hierar- chies
Generalizing GlOSS to Vector-Space databases and Broker Hier- archies
Overview of the First Text Retrieval Conference.
Information and Data Management: Research Agenda for the 21st Century
Real Life Information Retrieval: A Study of User Queries on the Web.
An Information System for Corporate Users: Wide Area information Servers.
Information Retrieval Systems
Searching the World Wide Web.
A Clustered Search Algorithm Incorporating Arbitrary Term Dependencies.
The Search Broker.
Determining Text Databases to Search in the Internet.
Estimating the Usefulness of Search Engines.
Information Retrieval.
Introduction to Modern Information Retrieval.
Finding the Most Similar Documents across Multiple Text Databases.
On the Estimation of the Number of Desired Records with respect to a Given Query.
Principles of Database Query Processing for Advanced Applications.
Server Ranking for Distributed Text Resource Systems on the Internet.
metasearch;information resource discovery;information retrieval
628383 | Rigid Body Segmentation and Shape Description from Dense Optical Flow Under Weak Perspective. | AbstractWe present an algorithm for identifying and tracking independently moving rigid objects from optical flow. Some previous attempts at segmentation via optical flow have focused on finding discontinuities in the flow field. While discontinuities do indicate a change in scene depth, they do not in general signal a boundary between two separate objects. The proposed method uses the fact that each independently moving object has a unique epipolar constraint associated with its motion. Thus motion discontinuities based on self-occlusion can be distinguished from those due to separate objects. The use of epipolar geometry allows for the determination of individual motion parameters for each object as well as the recovery of relative depth for each point on the object. The algorithm assumes an affine camera where perspective effects are limited to changes in overall scale. No camera calibration parameters are required. A Kalman filter based approach is used for tracking motion parameters with time. | Introduction
Visual motion can provide us with two vital pieces of information: the segmentation of the
visual scene into distinct moving objects and shape information about those objects. In this
paper we will examine how the use of epipolar geometry under the assumption of rigidly moving
objects can be used to provide both the segmentation of the visual scene and the recovery of
the structure of the objects within it.
Epipolar geometry tells us that a constraint exists between corresponding points from
different views of a rigidly moving object (or camera). This epipolar constraint is unique to that
object. Optical flow provides a dense set of correspondences between frames. Therefore the
unique epipolar constraint can be used to find separate rigidly moving objects in the scene given
the optical flow. An algorithm will be developed for segmenting the scene while simultaneously
recovering the motion of each object in the scene. This algorithm makes the assumption that
the scene consists of connected piecewise-rigid objects. The image then consists of connected
regions, each associated with a single rigid object.
Once the motion of rigidly moving objects has been determined, scene structure can be
obtained via the same epipolar constraint. The scene structure problem becomes analogous to
stereopsis in that object depth is a function of distance along the epipolar constraint. Dense
correspondences such as those in optical flow can lead to rich descriptions of the scene geometry.
Recent work [TM94, SP93] has used the uniqueness of the epipolar constraint to attempt
a partition of sparse correspondences. These approaches can only deal with small sets of
correspondences and thus will result in sparse recovery of shape information. In this paper we
examine the use of the dense set of correspondences found from optical flow to partition the
scene into rigidly moving objects.
The epipolar geometry will be examined in the context of an affine camera where perspective
effects are limited to uniform changes in scale. Under weak perspective, the epipolar constraint
equation becomes linear in the image coordinates, thus allowing a least-squares solution for
the parameters of the constraint. Different regions of the image representing independently
moving rigid objects can then be segmented by the fact that they possess different linear
epipolar constraints on their motion in the image plane. Once the parameters of the constraint
equation have been recovered, they can be used to describe the three dimensional rigid motion
that each object in the scene has undergone.
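For instance, under an affine camera the epipolar constraint between corresponding image points (x, y) and (x', y') takes the linear form a x + b y + c x' + d y' + e = 0, so its coefficients can be fit to a set of flow-based correspondences by least squares. The sketch below is only an illustration of that idea with hypothetical names (it is not the paper's algorithm, which is developed in the later sections); it solves for the coefficient vector as the smallest right singular vector of the data matrix and requires at least five correspondences.

```python
import numpy as np

def fit_affine_epipolar(points, flow):
    """points: (N, 2) array of (x, y); flow: (N, 2) array of (dx, dy), N >= 5.
    Fits [a, b, c, d, e] with a*x + b*y + c*x' + d*y' + e ~= 0, where
    (x', y') = (x + dx, y + dy). Returns (coefficients, residuals)."""
    p2 = points + flow
    A = np.hstack([points, p2, np.ones((len(points), 1))])
    _, _, vt = np.linalg.svd(A, full_matrices=False)
    coeffs = vt[-1]                      # unit-norm minimizer of ||A c||
    residuals = A @ coeffs               # per-point algebraic error
    return coeffs, residuals
```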
In the next section we look at previous work using dense flow for scene segmentation. A
review of the affine camera, and how under a rigid transformation it leads to a special form
of the Fundamental Matrix [LF94] (which expresses the epipolar constraint in matrix form),
can be found in Section 3. Under the affine camera model the epipolar constraint is linear in
image coordinates, allowing for a standard least squares solution which is outlined in Section 4.
It is then shown how the least squares solution for the combination of two separate regions
of the image can be found easily from the combined measurements. Section 6 outlines the
statistic-based region-growing algorithm which estimates both the motion parameters and the
segmentation of the scene into distinct motion regions. Section 7 examines how once the
epipolar geometry has been estimated, the relative depth of each point in the scene can be
calculated. Section 8 describes how the motion parameters for each object can be dynamically
updated with time by using a modified Kalman Filter approach. In Section 9 the results of
the algorithm applied to a set of both real and synthetic motion sequences are examined and
finally a look at future improvements to the algorithm is discussed in Section 10.
2 Review of Past Work
Previous work on using optical flow to segment a visual scene can be divided into two general
classes. The first class looks for clues in the two dimensional optical flow field to find potential
boundaries between different three dimensional objects. The second class uses the unique
epipolar constraint relating points on rigidly moving objects in different views. Since only a
subset of the discontinuities in the two dimensional flow field actually corresponds to boundaries
between rigid objects, the first class cannot distinguish between independently moving objects
and depth boundaries. Also, the optical flow near discontinuities is often difficult to recover.
As a result, the first class of algorithms operates in the regions of the image where the optical
flow is least accurate.
2.1 Motion Field Segmentation
Early work on segmentation via motion looked for discontinuities in one or both components
of the displacement field [SU87, TMB85, BA90, Bla92]. Since under general perspective projections
the motion field is continuous as long as the depth of the viewed surface is continuous,
discontinuities in the flow field signal depth discontinuities. Unfortunately, the flow field at
discontinuities is difficult to recover. For example, optical flow techniques based on derivatives
of the image function assume continuous or affine flow and thus fail at these regions. At locations
of depth edges, motion will introduce regions of occlusion and disocclusion which are
often not explicitly modeled in optical flow routines.
Some algorithms attempt to locate regions near a flow boundary before computing the
flow [BR87, Sch89, BJ94]. The first two papers look at the gradient constraint within a local
region. Schunck [Sch89] performs a cluster analysis on the constraint lines while Bouthemy
and Rivero [BR87] use a statistical test to find separate motions. Black and Jepson [BJ94]
first find regions of uniform brightness and calculate flow within these regions.
A number of segmentation algorithms assume that the optical flow field is locally affine in
image coordinates [Adi85, MW86, NSKO94, RCV92, WA94], which amounts to assuming that
the scene is piecewise-continuous in depth.
- Adiv [Adi85] groups optical flow vectors that have similar affine coordinates using a Hough transform. Assuming each cluster represents the motion of a planar object, the object's motion is recovered. Clusters with similar motions are then merged. Wang and Adelson [WA94] perform a split-and-merge algorithm on the same parameterization.
- Under the assumption of planar objects, the motion and shape can be computed in a least-squares fashion from a collection of measurements. Murray and Williams [MW86] begin by computing these parameters for small patches of the image. Patches are merged if they have similar parameters. A boundary is detected when residuals from the fit to the parameters are high.
- Nagel et al. [NSKO94] fit the derivatives of the intensity function directly to affine flow parameters. Assuming that the deviations from affine flow are normally distributed random variables, they detect boundaries via a statistical test.
- Rognone et al. [RCV92] fit local patches of flow to five different flow templates. Each of the templates consisted of first order flow (subsets of full affine). The results of these fits are clustered into possible labels. A relaxation labeling is then performed to assign these labels to the image patches.
All of these methods make the assumption of piecewise first order flow which implies piecewise
planar scene structure. However for objects of general shape, many depth variations can occur,
even self occlusion of a rigid object. Thus the piecewise planar assumption about the scene is
often invalid.
2.2 Segmentation via the Epipolar Constraint
Epipolar geometry tells us that a linear constraint exists between the projected points of a
rigid body as it undergoes an arbitrary rigid transformation. This constraint is unique to each
rigid transformation and can be used to identify independently moving objects. The epipolar
constraint has been used in a number of structure from motion algorithms [LH81, TH84, TK92].
The epipolar constraint can be used even in the case of uncalibrated cameras [LF94, LDFP93].
This allows for segmentation without any priors on shape or scene structure. In addition, the
constraint holds for each point on an object, not just at the boundaries. The optical flow can
therefore be sparse at the object boundaries.
As pointed out by Koenderink and van Doorn [KvD91], and implemented by Shapiro et
al. and Cernuschi-Frias et al. [SZB94, SZB93, CFCHB89], under weak perspective projection
motion parameters and shape descriptions can be obtained (modulo a relief transformation such
as depth scaling) from just two views. Thus even under the restrictions of scaled orthography,
important motion information can be obtained.
The use of the epipolar constraint to associate correspondences to distinct rigid objects has
been used by Torr [Tor93, TM94], Nishimura et al. [NXT93] and Soatto and Perona [SP93].
Each of these papers forms many subsets of the measurements and uses a statistical test to
determine which subsets are possibly correct partitions. This would lead to a combinatorial
explosion of hypothesis tests for dense correspondences. Our work uses the assumption that
the independent objects in the scene form continuous regions on the image plane. Thus the initial
sets can be constrained to nearest neighbors, avoiding the combinatorial explosion.
In stereopsis, the epipolar constraint is used in conjunction with priors on the scene structure
to help constrain the problem. The violation of the prior can then also be used to detect
boundaries [Bel93]. These priors usually take the form of a penalty for high depth gradients,
biasing the solution toward a piecewise continuous depth map [MS85]. One work making use
of a prior on the structure in the context of motion is [CFCHB89]. We have not used a prior
on the scene structure for two reasons. First, most common priors try to limit some derivative
of the depth as a function of image coordinates. This biases the result toward fronto-parallel
solutions. One would like to incorporate the concept of piecewise smoothness into a prior
instead of a low depth gradient. Secondly, a prior would destroy the simple direct solution to
the problem as found in Section 4 due to the coupling of motion and structure estimation. We
believe strongly that a coupling between the two problems can lead to more robust estimation
of both structure and motion, but that the proper prior has not been investigated.
2.3 Scene Partitioning Problem
In Section 6 we will see that we can formulate the segmentation of the optical flow field
into a scene partitioning problem [Lec89]. In such problems, a partition of the image into
distinct regions according to an assumed model or descriptive language is desired. The problem
is formulated in terms of a cost functional which attempts to balance a number of model
constraints. These constraints include terms for fitting a smooth model to the data while
simultaneously minimizing the number of distinct regions. The cost functional is often modeled
as the probability of a given solution assuming Gaussian departures from the model. Leclerc
[Lec89] showed that the minimal solution could also be interpreted as the minimum-length
encoding describing the scene in terms of a given descriptive language.
There are stochastic [GG84], region-growing [BF70, HP74], and continuation [Lec89, BZ87,
GY91] methods for finding solutions to the scene partitioning problem when it is described in
terms of a cost functional. Stochastic methods use simulated annealing programs in which a
gradient descent method is perturbed by a stochastic process which decreases in magnitude
as the global minimum is approached. Continuation methods start from some variation of
the cost functional where a solution can be easily found. The modified cost functional is
then continuously deformed back to its original form and the solution is tracked during the
deformation. If the deformation is slow enough, the solution will track to the global minimum
of the original cost functional. This is the basis of the Graduated Non-Convexity Algorithm of
Blake and Zisserman [BZ87] and the mean field theory approach of Geiger and Girosi [GG91].
Our solution will use the region-growing method described in [Web94] to solve for the
partition. This method uses a statistic-based region growing algorithm which assumes the
solution is piecewise continuous in image coordinates.
3 Projections and Rigid Motions
This section will describe the weak perspective camera. The linear constraint introduced by
a rigid motion leads to a special form of the Fundamental Matrix. We will also introduce
the representation for rotations used by Koenderink and van Doorn [KvD91] and show how
the components of the Fundamental Matrix can be used to obtain the rotation and scale
parameters.
3.1 The Weak Perspective Camera
The weak perspective camera is a projection from 3D world coordinates to 2D scene coordinates. The projection is scaled orthographic and preserves parallelism. The projection can be written as

    x = M X + t                                                          (1)

where X is the 3D world coordinate point and x its 2D image projection. The 2×3 matrix M
rotates the 3D world point into the camera's reference frame, scales the axes and projects onto
the image plane. The vector t is the image plane projection of the translation aligning the two
frames. The simplest form of the matrix M occurs when the world and camera coordinates
are aligned and the camera's aspect ratio is unity. In this case M can be written

    M = (1 / Z_ave) [ 1  0  0 ;  0  1  0 ]                               (2)

where Z_ave is the average depth of the scene. This transformation is a valid approximation to
a real camera only if the variance of the depth in the viewed scene is small compared to Z ave .
We will assume that in the first frame the camera and world coordinates are aligned such that
M takes the form above.
A rigid transformation of the world points that takes the point X to X' can be written as

    X' = R X + T                                                         (3)

where R is a rotation matrix with unit determinant. The projection of the point x after
undergoing this transformation is

    x' = M' X' + t'                                                      (4)

which is another weak perspective projection. Under the assumption above, M' is simply

    M' = (1 / Z'_ave) [ 1  0  0 ;  0  1  0 ]                             (5)

All perspective effects have been incorporated into the change in average depth Z'_ave - Z_ave.
Introducing the scale factor s = Z_ave / Z'_ave we can relate the two frames by

    x' = s M (R X + T) = s M R X + t'_0                                  (6)

where the translation component t'_0 is s M T. We can write the rotation matrix R as being
composed of a 2×2 sub-matrix, B, and two vectors, d and f:

    R = [  B    d  ]
        [    f^T   ]                                                     (7)

where the matrix B and vector d are

    B = [ R_11  R_12 ]         d = [ R_13 ]
        [ R_21  R_22 ]             [ R_23 ]                              (8)

R_{i,j} are the elements of the rotation matrix R. Because the matrix M removes the third
component of its right-multiplying vector, the values of the vector f do not enter into the
elements of x'. From equation (1) we can write x' in terms of the first projected point x,

    x' = s ( B x + \tilde{Z}_i d ) + t'_0                                (9)

where \tilde{Z}_i is the scaled depth [SZB93] at point x, \tilde{Z}_i = Z_i / Z_ave, with Z_i
being the true depth.
We can eliminate the depth component \tilde{Z}_i to get a linear constraint relating x and x'. By
multiplying equation (9) by the vector d^⊥ orthogonal to d we obtain the linear constraint

    d^⊥ · x' = s d^⊥ · (B x) + d^⊥ · t'_0                                (10)

In terms of image coordinates this linear constraint is

    a x' + b y' + c x + d y + e = 0                                      (11)

where

    (a, b) = d^⊥,       (c, d) = -s B^T d^⊥                              (12)
    e = -d^⊥ · t'_0                                                      (13)

This constraint can be written in terms of a special form of the Fundamental Matrix,

    (x', y', 1) F (x, y, 1)^T = 0                                        (14)

Figure 1: Koenderink and van Doorn representation for rotations: a rotation about the viewing direction followed by a rotation about an axis in the image plane, ω.

This form has 5 non-zero terms which can be determined only up to a scale factor since any
scalar multiplying equation (14) does not change the result. The form of F is

    F = [ 0  0  a ]
        [ 0  0  b ]
        [ c  d  e ]                                                      (15)
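As a small concrete illustration (a sketch with made-up numbers, assuming the reconstructed forms of equations (11), (14) and (15) above; the helper names are ours, not the authors'), the affine Fundamental Matrix can be built from its five non-zero elements and the epipolar residual evaluated for a correspondence:

```python
import numpy as np

def affine_fundamental(a, b, c, d, e):
    # Special affine form of the Fundamental Matrix, eq. (15).
    return np.array([[0.0, 0.0, a],
                     [0.0, 0.0, b],
                     [c,   d,   e]])

def epipolar_residual(F, x, x_prime):
    # Residual of x'^T F x (eq. (14)); zero for a noise-free correspondence.
    xh  = np.array([x[0], x[1], 1.0])
    xph = np.array([x_prime[0], x_prime[1], 1.0])
    return xph @ F @ xh

F = affine_fundamental(a=0.6, b=0.8, c=-0.3, d=0.1, e=2.0)
x = (10.0, 5.0)
xp = 4.0
yp = -(0.6 * xp - 0.3 * x[0] + 0.1 * x[1] + 2.0) / 0.8   # point chosen on the epipolar line
print(epipolar_residual(F, x, (xp, yp)))                  # ~0 up to round-off
```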
3.2 Koenderink and van Doorn Rotation Representation
A rotation in space can be expressed in a number of representations: Euler angles, axis/angle,
quaternions, etc. A particularly useful representation for vision was introduced by Koenderink
and van Doorn [KvD91]. In this representation, the rotation matrix is the composition
of two specific rotations: the first about the viewing direction (cyclorotation) and the second
about an axis perpendicular to the viewing direction at a given angle from the horizontal (see
Figure 1). Assuming that the viewing direction is along the z axis, the rotation matrix for a
rotation of θ about z, and of ρ about ω is

    R = R_ω(ρ) R_z(θ),        ω = (cos φ, sin φ, 0)^T                    (16)

where ω is an axis in the image plane at an angle of φ from the horizontal. The rotation about
the viewing direction provides no depth information. All information about structure must
come from the rotation about an axis in the image plane. This representation separates the
rotation into an information-containing component and a superfluous additive component.
Using the notation above, the total rotation matrix can be written out explicitly in terms of
θ, φ and ρ (equation (17)). Using these values in the formation of the Fundamental Matrix as
in equations (11, 15) we find that

    (a, b, c, d) ∝ ( cos φ,  sin φ,  -s cos(φ - θ),  -s sin(φ - θ) )     (18)

The motion parameters of interest can be obtained from the matrix elements:

    φ = arctan(b / a),   θ = φ - arctan(d / c),   s = sqrt(c² + d²) / sqrt(a² + b²)   (19)

Equations (18) and (19) are identical to the ones used in Shapiro et al. [SZB93]. In their
application, they require a test to see if the angle θ from the arctan function is correct or
is too large by a factor of π. Since we are dealing with small motions which occur between
consecutive frames, we can assume that the angle θ is less than π and thus subtract π if the
arctan function returns a θ with magnitude larger than π.
We have shown that, given the elements of the Fundamental Matrix in (15), one can obtain
the motion parameters s, φ and θ. The angle ρ in (17) cannot be obtained under weak
perspective from just two views because of an unknown scaling factor, as proved by Huang
and Lee [HL89]. Koenderink and van Doorn [KvD91] solve for ρ from three views using a
non-linear algorithm.
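The decomposition can be sketched in code as follows (a minimal sketch, assuming the composition order described above: a cyclorotation by θ about the optical axis followed by a rotation by ρ about the in-plane axis ω at angle φ; function names are ours):

```python
import numpy as np

def rot_z(theta):
    c, s = np.cos(theta), np.sin(theta)
    return np.array([[c, -s, 0.0], [s, c, 0.0], [0.0, 0.0, 1.0]])

def rot_axis(axis, angle):
    # Rodrigues formula: rotation by `angle` about the unit vector `axis`.
    k = np.asarray(axis, dtype=float)
    k /= np.linalg.norm(k)
    K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
    return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

def kvd_rotation(theta, phi, rho):
    # Cyclorotation about z, then rotation about omega = (cos phi, sin phi, 0).
    omega = np.array([np.cos(phi), np.sin(phi), 0.0])
    return rot_axis(omega, rho) @ rot_z(theta)

R = kvd_rotation(theta=0.05, phi=0.3, rho=0.02)
print(np.allclose(R @ R.T, np.eye(3)))   # True: R is orthonormal
```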
4 Solving for the Fundamental Matrix
In the previous section we saw how under the assumptions of an affine camera model, the
displacement field for a rigidly moving object satisfies an affine epipolar constraint (11). This
[Diagram: a measured displacement, its uncertainty ellipsoid, and the constraint line for point x in velocity space, with the epipolar direction d.]
Figure 2: Epipolar geometry constrains the displacement vector to lie on a line in velocity space. The perpendicular distance between this line and the measured displacement is minimized to find the parameters of the Fundamental Matrix.
constraint says that a point (x, y) in the first frame will be projected to a point somewhere on
a line in the second frame. The location of this line is dictated by the Fundamental Matrix.
In our scenario, the location of the point in the second frame is given by the optical flow,
(u, v). The constraint can be written in terms of the optical flow,

    a u + b v + c' x + d' y + e = 0                                      (20)

where the Fundamental Matrix elements are related to the primed values by c' = c + a and
d' = d + b. The affine epipolar constraint equation forces the optical flow to lie on a line in velocity
space. Since optical flow is a measured quantity sensitive to noise, the flow may not lie on the
lines dictated by the Fundamental Matrix constraint. We can use weighted least squares to
solve for the parameters (a, b, c', d', e) by minimizing the weighted distance in velocity space
between the measured optical flow and the constraint line (see Figure 2),

    min  Σ_i w_i ( a u_i + b v_i + c' x_i + d' y_i + e )² / ( a² + b² )  (21)

The weighting factor, w_i, comes from the error covariance of the measured optical flow,
Ω_~v. Instead of minimizing the distances in velocity space between the epipolar
constraint line and the measured optical flow (21), one could minimize the distance in
image coordinates between the point in both frames and the respective epipolar constraint
lines. (The rigid transformation from frame 1 to frame 2 implies an epipolar constraint on
points in frame 2. By symmetry, the reverse motion implies an epipolar constraint on points
in frame 1.) This is the standard minimization when instead of optical flow, the measurements
consist of feature points tracked from frame to frame. When using feature correspondences the
point positions in both frames are subject to error. By using optical flow, we are assuming the
flow is associated with a particular point in the first frame. The flow itself is subject to error,
but not the image location. Therefore it is more appropriate to minimize in velocity space.
Equation (21) is similar to the cost function E_2 in Shapiro et al. [SZB94]. Adopting their
terminology, we write the coefficient vector as ~n = (a, b, c', d')^T and the measurements as
4D points ~x_i = (u_i, v_i, x_i, y_i)^T. The minimization is performed
by using a Lagrange multiplier, λ, on the constraint a² + b² = 1. We introduce the diagonal
matrix Q which is zero except for ones at entries Q_{1,1} and Q_{2,2}. The constraint then becomes
||Q~n|| = 1, which is equivalent to setting a² + b² = 1:

    min_{(~n, e)}  Σ_i w_i ( ~n · ~x_i + e )²  -  λ ( ||Q~n||² - 1 )     (22)

The minimization over e can be done immediately by setting e = -~n · x̄, where x̄
is the weighted centroid of the 4D points ~x_i. Substituting e into equation (22)
we have
    min_{~n}  ~n^T W ~n  -  λ ( ||Q~n||² - 1 )                           (23)

where the measurement matrix

    W = Σ_i w_i ( ~x_i - x̄ )( ~x_i - x̄ )^T

Differentiating with respect to ~n and using the fact that Q^T = Q we obtain the matrix
equation

    ( W - λQ ) ~n = 0                                                    (24)

Since Q has only two non-zero entries, finding the value of λ which causes (W - λQ) to drop
rank involves only a quadratic equation in λ. In addition, W is real-symmetric so we also
know that λ will be real. As will be shown in the next paragraph, the smaller of the quadratic
equation's two solutions is the one desired. The solution ~n is now the vector which spans the
null space of (W - λQ). We normalize the solution by setting ||Q~n|| = 1, which results from
differentiating (23) with respect to λ.
The sum in equation (21) can be found by substituting the solution for ~n into (21). Using
the fact that W~n = λQ~n, the summation is simply λ. Thus the sum of squared distances in
velocity space for the minimal fit can be found by solving the quadratic equation implied by (24).
The solution ~n minimizes the weighted sum of squared distances in velocity space between the
epipolar constraint line and the measured displacements.
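A minimal NumPy sketch of this constrained fit is given below. It follows the derivation above (centered 4-vectors ~x_i = (u_i, v_i, x_i, y_i), scatter matrix W, the smaller admissible λ); the elimination of (c', d') through a 2×2 eigenproblem and the variable names are our own implementation choices, not taken from the original.

```python
import numpy as np

def fit_affine_epipolar(x, y, u, v, w):
    """Weighted LS fit of a*u + b*v + c'*x + d'*y + e = 0 with a^2 + b^2 = 1.
    Returns n = (a, b, c', d'), e, and lam = the minimum of (21)/(23)."""
    X = np.column_stack([u, v, x, y])              # 4D measurement vectors ~x_i
    xbar = (w[:, None] * X).sum(axis=0) / w.sum()  # weighted centroid
    Xc = X - xbar
    W = (w[:, None] * Xc).T @ Xc                   # weighted scatter (measurement) matrix
    W11, W12 = W[:2, :2], W[:2, 2:]
    W21, W22 = W[2:, :2], W[2:, 2:]
    # Eliminate (c', d'): n2 = -W22^{-1} W21 n1, leaving a 2x2 eigenproblem in (a, b).
    S = W11 - W12 @ np.linalg.solve(W22, W21)
    lam, vecs = np.linalg.eigh(S)                  # eigenvalues in ascending order
    n1 = vecs[:, 0]                                # smallest lambda -> minimal residual
    n2 = -np.linalg.solve(W22, W21 @ n1)
    n = np.concatenate([n1, n2])
    e = -n @ xbar                                  # from eliminating e at the centroid
    return n, e, lam[0]
```

The returned λ is exactly the weighted sum of squared velocity-space distances for the minimal fit, as noted above.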
The correct weighting factor w_i to use in the weighted least squares solution would be the
reciprocal of the variance in the direction perpendicular to the epipolar line, F~x. Using the
fact that ~n is normalized this value is

    w_i = ( (a, b) Ω_~v,i (a, b)^T )^{-1}                                (25)

Since we do not know ~n before minimizing the cost functional (22), we cannot know the values
of the w_i. We therefore use a value proportional to 1/trace(Ω_~v) as an initial value of w_i and update once
we find ~n. Weng et al. [WAH93] also calculated weighting factors in this fashion for their
non-linear minimization. At each iteration of their algorithm they recalculated the weights.
We will see in Section 7 that the variance of the optical flow measurement in the direction
parallel to the epipolar constraint line determines the uncertainty in the value of the depth
recovered. This is in contrast to the uncertainty perpendicular to the epipolar line which is
used here.
The fact that the total contribution to the cost functional for a particular choice of Fundamental
Matrix can be found from the elements of the matrix W and the centroids of the measurements
makes it easy to calculate the least-squares solution for any combination of subsets
of the measurement data. Suppose we calculate the measurement matrices W_1 and W_2 for two
separate regions of the image, R_1 and R_2. If we wish to know the change to the cost functional if
we assign a single ~n to the combination of the image regions, we need to compute the combined
measurement matrix W, which is a simple function of the elements of the separate matrices
and of the centroids x̄_1 and x̄_2. The result is that only a simple calculation needs to be
performed to determine whether it would be advantageous in terms of cost for two regions to
combine their measurements into a single region. This fact is the basis for the simultaneous
segmentation and epipolar geometry calculation algorithm outlined in Section 6.
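The bookkeeping behind this observation can be sketched as follows: if each region stores its total weight, weighted centroid and scatter matrix, the combined scatter is the sum of the two scatters plus a rank-one correction between the centroids (the standard pooled-scatter identity). This is our own illustration of the idea; the exact expression used in the original is not reproduced here.

```python
import numpy as np

def combine_regions(w1, xbar1, W1, w2, xbar2, W2):
    """Pooled weighted scatter of two regions from their summaries:
    w = total weight, xbar = weighted centroid, W = weighted scatter matrix."""
    w = w1 + w2
    xbar = (w1 * xbar1 + w2 * xbar2) / w
    diff = (xbar1 - xbar2)[:, None]
    W = W1 + W2 + (w1 * w2 / w) * (diff @ diff.T)
    return w, xbar, W
```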
5 The Case of Affine Flow
The solution for the Fundamental Matrix elements in equation (24) requires that the matrix
(W - λQ) have rank three, i.e. that the null space has dimension one. Otherwise, more than one
solution can exist. One case where this occurs is when the optical flow is affine in image
coordinates. That is, when

    ( u )     ( a  b ) ( x )     ( u_0 )
    ( v )  =  ( c  d ) ( y )  +  ( v_0 )                                 (26)

In this case, a linear relationship exists between (u, v) and (x, y) and thus W drops rank. The
Fundamental Matrix cannot be uniquely determined in these cases. This is a consequence of
the fact that a family of motions and scene structures can give rise to the same affine flow.
This observation was also made by Ullman [Ull79], but without reference to the Fundamental
Matrix. The non-trivial causes of affine flow are either a special arrangement of the points
observed (co-planarity) or special motions. The causes are enumerated below.
- Case 1: Co-planar points. When the observed points are co-planar, a constraint exists on their three dimensional coordinates of the form AX + BY + CZ = D. Under weak perspective, the depth of the projected point at image coordinates x is then a linear function of x, i.e. Z is linear in (x, y). From equation (9) we see that x', the coordinates of that point in the second view, is a linear function of x and thus the optical flow is affine in image coordinates.
- Case 2: No rotation in depth. If there is no rotation in depth (the rotation ρ in the Koenderink and van Doorn representation, Figure 1) then the vector d in equation (8) is zero. As above, this causes x' to be a linear function of x and thus the optical flow is affine.
If there is rotation in depth, multiple observations can be used to resolve Case 1. Under
the assumption that the rigid motion of each independently moving object does not vary
significantly from frame to frame, the individual Fundamental Matrices can be viewed as
being constant. We can therefore use more than one optical flow field in order to determine
them. The measurement matrix W formed from multiple frames will regain full rank. This
comes from the fact that we are viewing the plane from different views, and as a result, it
gains depth, as illustrated in Figure 3. In fact under orthographic projection, 4 non-coplanar
point correspondences through 3 frames are sufficient to determine both motion and structure
in this case [Ull79].
Motions consisting of pure translational motion and/or rotation about an axis parallel to the
optical axis (Case 2) will always result in affine flow under weak-perspective and orthographic
projections. The measurement matrix will not gain rank with multiple frames. When a region
Figure 3: A planar object rotating about an axis perpendicular to the optical axis gains depth over
time. The use of multiple frames allows for the recovery of the motion which is ambiguous from
only two frames.
has been detected as containing affine flow, the motion can be recovered directly from the affine
parameters and objects can be segmented based on these parameters. Thus objects undergoing
pure translation can be segmented, even though the Fundamental Matrix for the motions is not
uniquely defined. Other motion segmentation algorithms based on the Fundamental Matrix
do not handle this case.
Since the optical flow is corrupted by noise, a criterion must be developed for deciding if a
region contains affine flow. A region is designated as containing affine flow via a ratio of the
singular values of the measurement matrix. The symmetric matrix W - λQ should have rank
three and therefore have three positive, non-zero singular values. If the singular values, σ_i, are
numbered in increasing order, the following ratio is used to test for rank less than three:

    σ_2 / σ_4                                                            (27)

If this ratio is less than a given threshold, the matrix is assumed to have rank less than three.
For example, if a rank 2 matrix is perturbed with a small amount of noise, the singular values
σ_1 and σ_2 will be of order ε, where ε is small, while σ_3 and σ_4 will be of order >> ε. The
ratio (27) will then be very small. For the experiments, the threshold for the ratio was set to
a small fixed value and all calculations were done in double precision.
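A sketch of this test is shown below; it assumes the ratio in (27) compares the second-smallest singular value to the largest (the exact form of the ratio and the threshold value are treated here as assumptions).

```python
import numpy as np

def is_affine_flow(M, threshold=1e-6):
    """Rank test on the symmetric matrix M = W - lambda*Q: if its numerical
    rank is below three, the region is treated as containing affine flow and
    the Fundamental Matrix is not uniquely defined."""
    s = np.sort(np.linalg.svd(M, compute_uv=False))  # ascending: s[0] <= ... <= s[3]
    return s[1] / s[3] < threshold
```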
6 Segmenting via a Region-Growing Method
We wish to partition the scene into distinct regions, each region being labeled by a unique
Fundamental Matrix. We define a cost functional which balances the cost of labeling each pixel
with a penalty for having too many different labelings. We define the total cost functional as

    E(~n, α) = D(~n) + P(~n, α) = Σ_i D_i(~n_i) + α Σ_i Σ_{j ∈ N_i} ( 1 - δ(~n_i - ~n_j) )   (28)

where the summation over i is over all pixels in the image and N_i is the set of nearest neighbor pixels
to i. The delta function δ(·) is equal to one when its argument is zero, and zero otherwise.
The vector ~n_i is the estimate of the Fundamental Matrix at pixel i. In terms of the standard
form of a cost functional [BZ87], D(~n) represents a goodness-of-fit term which attempts to keep
the estimate close to the data, and P(~n, α) is a discontinuity penalty term which tries to limit
the frequency of discontinuities. The D(~n) term is the weighted sum of squared distances in
velocity space with a Lagrange multiplier as defined in Section 4. The penalty term attaches
a fixed cost α for each pixel bordering a discontinuity since the value of 1 - δ(~n_i - ~n_j) is
unity unless ~n_i = ~n_j.
Our model assumes that the scene can be segmented in a discrete way: there are a finite
number of regions with particular motion parameters attached to each. Thus the field ~ n is
piecewise constant, being constant within each region and changing abruptly between regions.
This problem is an example of the scene partitioning problem [Lec89] in which the scene is to
be partitioned into distinct regions according to an assumed model or descriptive language.
The minimum of the cost functional E(~n, α) can be seen as a maximum a posteriori probability
(MAP) estimate if we model the deviations from the fit as coming from a stochastic process
with certain priors.
To solve this partitioning problem we will use the region-growing method described in
[Web94]. We outline the method below.
From the previous section we saw that computing the cost (in terms of deviation of fit from
data) of combining information from different regions involves only simple operations. The fit
cost for combining two regions, D_{R1+R2}, can be compared to the fit cost of the two regions
separately, D_{R1} + D_{R2}. The algorithm decides whether these two regions should be merged
via a statistic F_0, built from D_{R1+R2}, D_{R1}, D_{R2} and a tolerance Δ (equation (29)),
which can be shown to be similar to an F-test statistic. The factor Δ is the tolerated difference
between regions that can be considered similar.
The algorithm begins by forming small initial patches of size 4 × 4 pixels. Each of these
patches then computes its solution, ~n, and error, D_R. For a small value of Δ, all pairs of regions
for which the statistic F_0 is below a fixed confidence level are merged. Newly
formed regions are tested for affine flow solutions. The value of Δ is increased and all possible
mergings are checked again. This continues until we reach the final value of Δ. See [Web94]
for details and application of the algorithm on a range of scene segmentation problems.
The use of a statistical test for flow-based segmentation can also be found in [NSKO94,
BR87]. In these cases however, the deviation of the optical flow from affine parameters is
tested instead.
The statistical test can not be used to compare an affine region with a non-affine region
because there exists a subspace of solutions for the Fundamental Matrix in the affine case.
The same test however can be used between two affine regions. In this case, the cost term D
in equation (29) is the sum of squared errors between the measurements and the least-squares
affine motion fit. The same statistical test is used for both affine and non-affine regions in the
region growing algorithm, but with different data terms D in the statistic F 0 .
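The merge decision can be sketched as follows. The exact form of the statistic in (29) follows [Web94] and is not reproduced here; the function below is a hypothetical stand-in with the same ingredients (the merged cost, the separate costs and the tolerance Δ), meant only to show how the test is used.

```python
def merge_statistic(D12, D1, D2, delta):
    # Hypothetical stand-in for the F-test-like statistic of eq. (29):
    # the extra fit cost incurred by merging, normalized by the separate
    # costs and the tolerated difference delta.
    return (D12 - (D1 + D2)) / (D1 + D2 + delta)

def should_merge(D12, D1, D2, delta, confidence=1.0):
    # Accept the merge when the statistic stays below the confidence level.
    return merge_statistic(D12, D1, D2, delta) < confidence
```

In the full algorithm this test is applied to all pairs of adjacent regions, first with a small Δ and then with progressively larger values, with every newly merged region re-fitted (or re-combined, as in the earlier sketch) and re-tested for affine flow.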
7 Recovering Depth
Once we have recovered the elements of the Fundamental Matrix for a region of the image
plane, we can attempt to recover the depth of each image point. From Section 3.1 we saw that
the projected point x in the second frame is

    x' = s ( B x + \tilde{Z}_i d ) + t'_0                                (31)

From the elements of F we know the direction of the vector d. Taking the dot product of
equation (31) with d and rearranging we obtain

    d · x' = s d · (B x) + s \tilde{Z}_i |d|² + d · t'_0                 (32)

We cannot solve directly for the d · t'_0 term since we do not have enough information to recover
the full translation. However the d · t'_0 term is a constant throughout the object's projection.
The remaining terms are known (up to a scale factor). Solving for \tilde{Z}_i we find

    \tilde{Z}_i = ( d · x' - s d · (B x) ) / ( s |d|² ) + Z_c            (33)

where Z_c = -d · t'_0 / ( s |d|² ) is a constant for each object. Therefore, up to an additive constant
Z_c, the scaled depth of each imaged point can be computed given the elements of the matrix
F.
The recovered depth map from (33) is a direct function of the optical flow measurements
and therefore noisy. The first term of equation (33) is the distance in velocity space along the
line Fx. If we know the error covariance of our measured optical flow, Ω_~v,
then the uncertainty of the measurement in the direction parallel to the epipolar line Fx is
equal to the uncertainty in the depth estimate. Thus, from our solution to the fundamental
matrix, the relative weighting factor, q_i, which represents the inverse of this uncertainty, should
be

    q_i = ( ~n'^T Ω_~v,i ~n' )^{-1}                                      (34)

where ~n' is the unit vector perpendicular to ~n. This is in contrast to
the weighting factor w_i used in Section 4 for the deviation of the estimate in the direction
perpendicular to the epipolar constraint. This separation of depth and motion parameter
estimation was also pointed out by Weng et al. [WAH93].
The final depth estimate \bar{Z}_i is found using a weighted sum of local estimates,

    \bar{Z}_i = Σ_{j ∈ Ω} q_j \tilde{Z}_j  /  Σ_{j ∈ Ω} q_j              (35)

where the region Ω is the intersection of a local region about the point x_i and the rigid-motion
region associated with that point. We are not including depth values from separate regions,
but we are making the assumption that local points on the object have similar depths.
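A sketch of this local averaging follows (array names are ours; it assumes per-pixel raw estimates \tilde{Z}, confidences q as in (34), and the region labels produced by the segmentation):

```python
import numpy as np

def smooth_depth(Z_raw, q, labels, half_window=2):
    """Weighted local average of per-pixel scaled-depth estimates (eq. (35)),
    restricted to pixels carrying the same rigid-motion label."""
    H, W = Z_raw.shape
    Z_out = np.zeros_like(Z_raw)
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - half_window), min(H, i + half_window + 1)
            j0, j1 = max(0, j - half_window), min(W, j + half_window + 1)
            same = labels[i0:i1, j0:j1] == labels[i, j]   # stay inside the region
            wts = q[i0:i1, j0:j1] * same
            Z_out[i, j] = (wts * Z_raw[i0:i1, j0:j1]).sum() / max(wts.sum(), 1e-12)
    return Z_out
```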
In the case of affine flow, we know that the object is either undergoing pure translation or
is rotating about an axis parallel to the optical axis. In either case, no depth information can
be obtained under orthographic or weak-perspective projection. Consequently depth recovery
would have to rely on other cues.
In the formalism presented here, the determination of structure is separate from that of
motion. Once the motion is found, the depth comes directly from equations (33) and (34).
One of the strengths of the epipolar constraint is that it places no limit on the scene structure.
The scene could consist of a cloud of randomly placed points. One could however introduce
priors on the scene structure to regularize the estimate, as is done in stereopsis. This work
has not examined the use of structure priors beyond the local smoothness implied by equation (35).
Figure 4: Illustration of the calculation of the predicted segmentation based on the current segmentation
and optical flow. The unassigned regions which have associated flow vectors will be filled in
by the segmentation algorithm which begins with the prediction image.
8 Object Tracking
Once the separate objects have been segmented, we would like to track them with time.
Beginning with the initial segmentation formed from the first three flow fields, the algorithm
proceeds by taking the present segmentation and forming a prediction of the segmentation for
the next flow field.
The prediction is formed by taking the current pixel assignment and translating it along
that pixel's optical flow vector. The assignment is rounded to the nearest integral pixel value.
Disoccluded regions are left unassigned. The segmentation algorithm is run on this prediction
image to fill in the unassigned regions. This is repeated for each new optical flow field. New
objects can be introduced if, after filling in the unassigned regions, a new region is formed
which does not merge with any of the existing regions. Figure 4 illustrates the prediction
method.
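A sketch of the prediction step is given below (variable names are ours): each pixel's label is pushed along its rounded flow vector into the next frame, and pixels that receive no label are left unassigned for the segmentation algorithm to fill in.

```python
import numpy as np

UNASSIGNED = -1

def predict_labels(labels, u, v):
    """Warp the current segmentation forward along the optical flow.
    labels : HxW integer region labels;  u, v : HxW flow components."""
    H, W = labels.shape
    pred = np.full((H, W), UNASSIGNED, dtype=labels.dtype)
    for i in range(H):
        for j in range(W):
            jj = j + int(round(u[i, j]))
            ii = i + int(round(v[i, j]))
            if 0 <= ii < H and 0 <= jj < W:
                pred[ii, jj] = labels[i, j]   # disoccluded pixels stay unassigned
    return pred
```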
The proposed scheme avoids having to run the entire segmentation algorithm from scratch
at each new frame since it uses the previous segmentation as a prediction. Also, it avoids the
problem of matching regions in the new frame with those of the previous frame. However, this
method requires a correct initial segmentation. If two objects are labeled as a single object in
the initial segmentation they may remain so in subsequent frames.
Once we have correctly labeled the optical flow vectors in a frame according to which
rigid object they correspond to, we can use the information in each new frame to increase the
accuracy of both the shape and motion of each independently moving object. At this point
we can relax the assumption that the rigid motion parameters are constant from frame to
frame which was used to form the initial segmentation. We adopt a Kalman filter approach in
which the motion parameters are modeled as a process with a small amount of noise. In this
way the motion parameters can change with time. Each new optical flow field provides a new
measurement for estimating this varying state.
The work by Soatto et al. [SFP94] addresses the case of estimating the elements of the
Fundamental Matrix in a Kalman Filter framework. Although their work was for the full
Fundamental Matrix, it is easily adapted to the simpler affine form. The difficult part of using
a Kalman Filter approach to tracking the motion parameters is that the relation between
measurements, the optical flow, and state variables is not linear. Instead there exists the
epipolar constraint, equation (11), relating the product of the two. One can linearize this
implicit constraint about the predicted state to produce a linear update equation. Details can
be found in [SFP94]. We summarize the structure of the filter for the affine case below.
In what Soatto et al. call the Essential Estimator, the state consists of the 5 non-zero
elements of the affine Fundamental Matrix, ~n(t). The vector ~n(t) lies on the subspace of IR^5
corresponding to ||Q~n|| = 1. They model the dynamics of this state as simply being a random
walk in IR^5 which has been projected onto the ||Q~n|| = 1 subspace. As new measurements come
in, the value of ~n(t) as well as an estimate of the error covariance, P(t), are updated. Borrowing
their notation, where (t + 1|t) represents the prediction at time t + 1 given measurements up
to time t, and (t + 1|t + 1) represents the estimate given measurements up to time t + 1, the
filter updates ~n(t + 1|t + 1) by applying a gain matrix to the residual of the implicit constraint,
X ~n(t + 1|t), and updates P(t + 1|t + 1) and the gain matrices accordingly (the explicit
expressions are given in [SFP94]).
The matrix X is an n × 5 matrix whose rows are made up of the n measurement vectors
~x_i(t + 1). The measurement error covariance matrix R_x(t) is a diagonal matrix consisting
of the weighting terms w_i = ( (a, b) Ω_~v,i (a, b)^T )^{-1} defined in Section 4. The operation ⊕ is addition followed by
a projection back onto the ||Q~n|| = 1 subspace. As a result, the gain matrix applies a change
to ~n which is then normalized back to ||Q~n|| = 1.
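For illustration only, a generic linearized update of this kind can be sketched as below; it is not the exact Essential Estimator of [SFP94], but it shows the structure: the implicit constraint X~n = 0 is linearized about the prediction, a standard Kalman gain is applied, and the state is renormalized onto ||Q~n|| = 1. All names and the random-walk prediction are assumptions.

```python
import numpy as np

def essential_update(n_pred, P_pred, X, R_x, R_n):
    """n_pred : predicted 5-vector state (a, b, c', d', e) with ||Qn|| = 1
    P_pred : 5x5 predicted state covariance
    X      : m x 5 matrix of measurement row vectors from frame t+1
    R_x    : m x m (diagonal) measurement noise
    R_n    : 5x5 process noise of the random-walk state model"""
    P = P_pred + R_n                   # random-walk prediction of the covariance
    r = -X @ n_pred                    # residual of the implicit constraint X n = 0
    S = X @ P @ X.T + R_x              # innovation covariance
    K = P @ X.T @ np.linalg.inv(S)     # gain matrix
    n = n_pred + K @ r
    n /= np.linalg.norm(n[:2])         # project back onto a^2 + b^2 = 1
    P = (np.eye(5) - K @ X) @ P
    return n, P
```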
9 Experimental Results
The algorithm was tested on a number of synthetic and real image sequences. The optical
flow was computed using the multi-scale differential method of Weber and Malik [WM95].
The algorithm also produced the expected error covariance, Ω_~v, associated with each flow estimate.
For each sequence, the segmentation of the scene into distinct objects, the tracked motion
of these objects, and the scaled scene structure are recovered. We begin with a synthetic
sequence in order to compare ground truth data with the tracked motion parameters.
9.1 Sequence I
Two texture-mapped cubes rotating in space were imaged on a Silicon Graphics computer
using its texture-mapping hardware. The still frames were used as input to the optical flow
algorithm. The magnitude of the optical flow ranged from zero to about 5 pixels/frame. For
the first 10 frames of the sequence, the cubes were rotating about fixed but different rotation
axes. For the second 10 frames these axes were switched. In this way we could examine the
recovery of the motion parameters when they are not constant with time. The rotation axes
used, as well as a sample image and optical flow field are shown in Figure 5. At the edges of
the foreground cube, the assumptions of a differential method for optical flow are invalid. As
a result the optical flow is noisy and confidence is low. If we were to attempt to find shape
boundaries by differentiating the flow field components, we would have difficulty in the very
regions which we are trying to find since the flow is not defined there.
The segmentation algorithm found two separate moving objects for each frame. The initial
segmentation along with the initial depth recovered for the smaller cube is shown in Figure 6.
The estimated angle OE as a function of frame number for each cube is shown in Figure 7.
The original estimate is good because of the density of the optical flow. Subsequent frames do
not show much improvement. The Kalman Filter successfully tracks the change in rotation
axis which occurs at frame 10. Within a few frames the estimate is locked onto the new
directions. The response time of the tracker can be changed by increasing or decreasing R n ,
the expected variance in motion parameters in equation (36).
9.2 Sequence II
The algorithm was run on a real sequence consisting of a cube placed on a rotating platen.
(This sequence was produced by Richard Szeliski at DEC and obtained from John Barron). The
background was stationary. The displacements between frames are very small in this sequence,
with the largest displacement on the cube itself being only 0.5 pixel. The background had zero
flow and was labeled as affine. An image from the sequence, the computed optical flow and
recovered depth map are shown in Figure 8.
In this case, the rotation axis of the cube makes an angle of 90 degrees in the image plane.
The recovered value of this angle as a function of frame number is shown in Figure 9. Again the
large number of measurements from the initial segmentation produced an accurate estimate.
9.3 Sequence III
The next image sequence consists of textured patterns translating in the background while a
toy train moves in the foreground. The planar background produces uniform flow. A frame
from the sequence, an example optical flow recovered and the segmentation are shown in
Figure 10. The planar background regions were correctly recognized as consisting of affine
flow.
This sequence demonstrates the algorithm's ability to identify regions of affine flow. The
boundaries appear irregular because no priors on the segmentation are used. Priors favoring
Figure 5: Two independently rotating texture-mapped cubes were created on a Silicon Graphics
workstation. A single frame from the sequence and a sample optical flow field is shown on the
top row. For the first 10 frames, the cubes rotated with rotation axes indicated in the bottom left
figure. For the second 10 frames, the rotation axes were as indicated in the bottom right figure.
Figure 6: The boundary between the two independently moving objects found by the segmentation
algorithm and the pixel depths of the smaller cube.
[Plots: rotation axis angle (degrees) vs. frame number, showing the true value and the estimate for the foreground cube and for the background cube.]
Figure 7: The recovered value of the angle the rotation axis of each cube makes in the image plane
as a function of frame number. After 10 frames, the rotation directions were switched.
Figure 8: A single frame of a Rubik's Cube on a rotating platen. The optical flow and recovered
depth map as seen from a side view are shown below.
[Plot: rotation axis angle (degrees) vs. frame number for the Rubik's Cube.]
Figure 9: The recovered value of the angle that the rotation axis of the Rubik's Cube makes in the
image plane, as a function of frame number.
straight over jagged boundaries could be introduced as well as combining information from
intensity and texture boundaries [BJ94].
10 Discussion and Future Work
We have shown that even with just the optical flow of an image sequence, it is possible to
segment the image into regions with a consistent rigid motion and determine the motion
parameters for that rigid motion. Furthermore, the relative depth of points within the separate
regions can be recovered for each point displacement between the images.
The recovery requires no camera calibration but does make the assumptions of an affine
camera, i.e. that perspective effects are small. The special form of epipolar geometry for the case
considered here has its epipoles at infinity. Motions with dominant perspective effects cannot be fit by the
motion parameters. The region-growing algorithm used for the simultaneous region formation
and motion parameter estimation was not dependent on this particular form of the geometry.
If a recovery of the full perspective case was required, the same algorithm could be used.
However, the calculation of the Fundamental Matrix from small displacements such as found
in optical flow is not stable [WAH93, LDFP93].
Figure 10: A single frame of the "mobile" sequence from RPI. The background consists of translating
patterns while a toy train traverses the foreground. An example optical flow recovered is also
shown. The labeled image is shown below. The background parts (colored grey) were identified as
undergoing pure translational motion by the singular value ratio test. The black and white colored
regions (corresponding to the train, rotating ball and transition regions) were not labeled as affine.
Another method of motion segmentation based on the Fundamental Matrix can be found
in [Tor93, TM94]. In that work, the displacement vectors are segregated by finding outliers in
a robust estimation of the Fundamental Matrix. Clusters of displacements are found through
an iterative method. The combinatorial explosion of dense displacement fields would make
this method difficult to implement. We take advantage of the gross number of estimates to
gain robustness as opposed to finding specific outliers. However, the eigenvector perturbation
method discussed in [Tor93] would make an easy test for outliers and could be implemented
in our framework to detect and remove outliers.
Acknowledgements
This research was partially supported by the PATH project MOU 83. The authors wish to
thank Paul Debevec for creating the synthetic image sequence.
--R
Determining three-dimensional motion and structure from optical flow generated by several moving objects
Constraints for the early detection of discontinuity from motion.
A Bayesian Approach to the Stereo Correspondence Problem.
Scene analysis using regions.
Estimating optical flow in segmented images using variable-order parametric models with local deformations
Combining intensity and motion for incremental segmentation and tracking over long image sequences.
A hierarchical likelihood approach for region segmentation according to motion-based criteria
Visual reconstruction.
Toward a model-based bayesian theory for estimating and recognizing parameterized 3-d objects using two or more images taken from different positions
Stochastic relaxation
Parallel and deterministic algorithms from mrf's: Surface reconstruction.
A common framework for image segmentation.
Motion and structure from orthographic projections.
Picture segmentation by a directed split-and-merge procedure.
Affine structure from motion.
On determining the Fundamental matrix: analysis of different methods and experimental results.
Constructing simple stable descriptions for image partitioning.
The Fundamental matrix: theory
A Computer Algorithm for Reconstructing a Scene from Two Projections.
Boundary detection by minimizing functionals.
Detecting the image boundaries between optical flow fields from several moving planar facets.
Motion boundary detection in image sequences by local stochastic tests.
Motion segmentation and correspondence using epipolar constraint.
multiple motions from optical flow.
Image flow segmentation and estimation by constraint line clustering.
Recursive motion estimation on the essential manifold.
Three dimensional transparent structure segmentation and multiple 3d motion estimation from monocular perspective image sequences.
The early detection of motion boundaries.
Motion from point matches using affine epipolar geometry.
Motion from point matches using affine epipolar geometry.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces.
Shape and motion from image streams under orthography: a factorization method.
Stochastic motion clustering.
Dynamic occlusion analysis in optical flow fields.
Outlier detection and motion segmentation.
The Interpretation of Visual Motion.
Representing moving images with layers.
Optimal motion and structure estimation.
Scene partitioning via statistic-based region growing
Robust computation of optical flow in a multi-scale differential framework
--TR
Niloofar Gheissari , Alireza Bab-Hadiashar , David Suter, Parametric model-based motion segmentation using surface selection criterion, Computer Vision and Image Understanding, v.102 n.2, p.214-226, May 2006 | shape from motion;fundamental matrix;epipolar constraint;scene partitioning problem;motion segmentation;optical flow |
628464 | On the Advantages of Polar and Log-Polar Mapping for Direct Estimation of Time-To-Impact from Optical Flow. | The application of an anthropomorphic retina-like visual sensor and the advantages of polar and log-polar mapping for visual navigation are investigated. It is demonstrated that the motion equations that relate the egomotion and/or the motion of the objects in the scene to the optical flow are considerably simplified if the velocity is represented in a polar or log-polar coordinate system, as opposed to a Cartesian representation. The analysis is conducted for tracking egomotion but is then generalized to arbitrary sensor and object motion. The main result stems from the abundance of equations that can be written directly that relate the polar or log-polar optical flow with the time to impact. Experiments performed on images acquired from real scenes are presented. | Introduction
Autonomous systems should be able to control their movements in the environment
and adapt to unexpected events. This is possible only if the robot
has the capability to "sense" the environment. Among the sensors used for
robots, visual sensors are the ones that require the greatest computational
power to process the acquired data, but also generate the greatest amount
of information. Despite the fact that many researchers have addressed the
problem of "how to process" visual data [?, ?, ?, ?], less attention has been
given to the definition of devices and strategies for image acquisition which
could, possibly, reduce the amount of data to be processed and the algorithmic
complexity for the extraction of useful information [?, ?, ?, ?, ?, ?].
Ballard et al. [?] and also Sandini and Tistarelli [?, ?] among others, investigated
a tracking motion strategy which greatly simplifies the problem
of visual navigation.
To this extent the major effort done so far to try to reduce the amount of
information flowing from the visual sensors to the processing part of artificial
visual systems, has been concentrated on the image itself. This approach
is certainly justified in those situations in which images are acquired "pas-
sively" and transmitted to a distant station to be analyzed (e.g. satellites).
The data reduction strategy solves, in this case, a communication problem.
Very different strategies can be exploited when the data reduction has to be
achieved in order to solve understanding and behavioural problems. In this
case the main goal of the data reduction strategy is to avoid overloading the
system with "useless" information. The hardest part is to define the term
useless unless its meaning is directly tied to the task to be performed: useful
information is the information necessary to carry out the task. Also in this
case, if we only think in terms of images, it is impossible to extract the "useful"
information unless at least a rough processing of the images has been
performed. For example, if the strategy is related to the extraction of edges,
this processing has to be applied to the entire image even if, for the task
at hand, only a small portion of the image would be sufficient. Solutions
to this problem have been proposed in the past through the use of multiple-window
systems, and the achieved results are certainly very interesting [?, ?].
Our approach is that of studying and developing smart sensing devices with
built-in data compression capabilities [?, ?]. The space-variant sensor which
produces the log-polar transformation described in this paper is an example
of how it may be possible to reduce the amount of information directly at the
sensor level.
Using a foveated sensor (i.e. a sensor with a small, high resolution part,
surrounded by a lower resolution area) poses the problem of where to look
(i.e. where to position the central, high resolution, part of the visual field).
To solve this problem two approaches are possible: the first is to base the
selection of the focus of attention on the results of some interpretation procedure,
the second is to take advantage of the task to be carried out. The
first approach is mandatory for static systems while the second approach
has the advantage of being less dependent on visual information processing.
In fact, following the second approach the selection of the focus of attention
can be done pre-attentively i.e. before the visual processing part has even
started. In case of space-variant sensors of the kind described in this paper,
the selection of the focus of attention goes along with the selection of the
direction of gaze. In fact, the direction of gaze defines the part of the scene
which is analyzed with the highest detail, or conversely, the region around
the focus of attention has to be sampled with the highest resolution. Stemming
from this consideration the strategies of gaze control can be seen as a
way to reduce the amount of information to be processed. Moreover, if these
strategies can be made dependent upon the task alone this data reduction
can be carried out before the interpretation of visual images. Examples of
what we mean by this are some visuo-motor strategies performed by humans
during the execution of specific behaviours. For example during locomotion
the direction of gaze is either pointed straight ahead toward the direction
of motion (to provide gross orientation information) or down, just ahead of
the actual position (to detect unexpected obstacles) [?, ?]. Of course other
safety level processes are running simultaneously but the major feature is
the possibility of defining the "focus of attention" (and as a consequence the
direction of gaze) as a function of the task alone. Other examples can be
presented in the field of manipulation. During the final stage of grasping,
for example, the focus-of-attention is in the vicinity of the contact area that
is, close to the end-effector. Also in this case the position of the hand in
space is known and the direcition of gaze can be controlled first by knowing
the trajectory of the end-effector and, secondly, by tracking it.
It is obvious that, even in this case, some image processing has to be
performed (e.g. the tracking of a point in space) but the important observation
is that these procedures are independent of the content of the images
or, in other words, they are data driven. One of the major goal of this paper
is to demonstrate that these data driven processes are performed even
"better" by using a space-variant sensor than using a conventional device.
1 It is worth noting, however, that in reality this is a problem that need to be solved
also for conventional sensors. In fact the location of the "fixation point" for stereo systems
with convergent optical axis has to be defined as well as for focussing procedures (it is not
feasible to "focus" everywhere).
As a result the proposed approach is not only useful in limiting the amount
of information to be processed through a "hardware focus-of-attention" (the
fovea), but its geometrical structure is also advantageous for controlling the position
of the focus-of-attention.
Active tracking motion is the basis of the analysis conducted in this paper.
It is related to the properties of a retina-like sensor, defining the
depth-from-motion equations after the log-polar mapping. Due to its particular
topology, a retina-like sensor incorporates many advantages for dynamic image
processing and shape recognition. This potential can be considerably
augmented, making the sensor "smart", e.g. defining a set of visual tasks
that can be performed directly on the sensor, without the need for any other
external input, and dramatically reducing the time required for processing.
The computation of the optical flow from an image sequence and the tracking
of moving targets are examples of the visual tasks that can be defined.
In this paper we derive the optical flow equations for the retinal CCD
sensor. The flow field, computed on the log-polar plane, is then used to
estimate the time-to-impact. The analysis is performed considering the optical
flow equations due to the tracking egomotion on the retinal, Cartesian
plane and relating them to the computed velocity field on the log-polar
plane. Even though quantitative results can be obtained, this is not a natural
way of approaching the problem. Jain [?] pointed out the advantages of
processing the optical flow, due to camera translation, by using a log-polar
complex mapping of the images and choosing the position of the FOE as the
center for the representation. If the observer rotates during translation, to
fixate a static target in space, or conversely, just rotates without translating,
but tracking a moving target, then it is possible to intuitively understand
that the time remaining before reaching the obstacle(s) is inversely proportional to the
rate of expansion of the projection of the object's shape on the retinal plane.
We demonstrate, in fact, that only the radial component of the optical flow,
represented on the polar plane, depends on the time-to-impact. The polar
flow representation is analysed and a number of simple relations are derived,
which allow us to directly compute the time-to-impact with arbitrary ego-
and/or eco-motion.
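As a foretaste of the result, consider the simplest case of pure translation along the optical axis toward the fixated surface: the time-to-impact is then just the ratio between the radial coordinate of a retinal point and its radial velocity, so only the radial flow component is needed. This is a standard relation stated here only for illustration; the general expressions are derived in the following sections.

```python
def time_to_impact(rho, rho_dot):
    # Approach along the optical axis: T = rho / (d rho / dt).
    return rho / rho_dot
```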
Conformal mapping
In the human visual system the receptors of the retina are distributed in
space with increasing density toward the center of the visual field (the fovea)
and decreasing density from the fovea toward the periphery. This topology
can be simulated, as proposed by Sandini and Tagliasco [?], by means of
a discrete distribution of elements whose sampling distance (the distance
between neighbouring sampling points) increases linearly with eccentricity
from the fovea. An interesting feature of the space-variant sampling is the
topological transformation of the retinal image into the cortical projection 2
[?, ?].
This transformation is described as a conformal mapping of the points
on the polar (retinal) plane (ρ, η) onto the cortical plane (log ρ, η),
where the values of (ρ, η) can be obtained by mapping the Cartesian
coordinates (x, y) of each pixel into the corresponding polar coordinates.
The resulting cortical projection, under certain conditions, is invariant to
linear scalings and rotations on the retinal image. These complex transformations
are reduced to simple translations along the coordinate axes of the
cortical image. This property is valid if, and only if, the scene and/or the
sensor move along (scaling) or around (rotation) the optical axis.
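A minimal sketch of the mapping and of the property just described (function names are ours): scaling the retinal image by k shifts the cortical image by log k along the radial axis, while rotating it about the optical axis shifts the angular axis by the same angle.

```python
import numpy as np

def to_log_polar(x, y):
    """Cartesian retinal coordinates -> cortical coordinates (log rho, eta).
    The fovea (rho = 0) is excluded, as it is treated separately by the sensor."""
    rho = np.hypot(x, y)
    eta = np.arctan2(y, x)
    return np.log(rho), eta

x, y = 3.0, 4.0
xi, eta = to_log_polar(x, y)
xi_s, eta_s = to_log_polar(2.0 * x, 2.0 * y)     # retinal image scaled by 2
print(np.isclose(xi_s - xi, np.log(2.0)), np.isclose(eta_s, eta))   # True True
```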
The same properties hold in the case of a simple polar mapping of the image,
but a linear dilation around the fovea is transformed into a linear shift
along the radial coordinate in the (ρ, η) plane. Meanwhile, the log-polar
transformation produces a constant shift along the radial coordinate of the
cortical projection.
Beyond the geometric properties of the polar and log-polar mapping, which
will be referred to in the following sections, the log-polar transformation
performs a significant data reduction (because the image is not equally sampled
throughout the field of view) while preserving a high resolution around
the fovea; thus providing a good compromise between resolution and band-limiting
needs. This property turns out to be very effective if one wishes to
focus attention on a particular object feature, or to track a moving target
(i.e. to stabilize the image of the target in the fovea). Therefore the
properties of the topological log-polar mapping have found interesting applications
for object and, more specifically, shape recognition and object
tracking [?, ?, ?, ?, ?, ?].
The main focus of the paper is, in fact, to stress the peculiarity of this
transformation in the computation of dynamic measures. In particular the
main observation is that during the tracking of a moving object the mapping
2 The terms "retinal" and "cortical" derive from the observation that the conformal
mapping described here is very similar to the mapping of the retinal image onto the visual
cortex of humans [?, ?].
of the stabilized retinal image onto its cortical image deforms in such a way
that it is easier to compute those "behavioral variables" useful to control
the position of the focus-of-attention. In fact, if the object is perfectly
stabilized on the retina, and the shape does not change due to perspective
changes (this is a good approximation for images sampled closely in time)
the component of the velocity field along the log ae axis alone measures the
"rate of dilation" of the retinal image of the object. This measure can be
used to compute the time-to-crash.
In this paper we will illustrate how the log-polar transformation as well as
a simple polar mapping can dramatically simplify the recovery of structural
information from image sequences, allowing direct estimation of the time-
to-impact from the optical flow.
2.1 Structure of the retina-like sensor
A prototype retina-like visual sensor has been designed within a collaborative
project involving several partners 3 . In this paper we will refer to
the physical characteristics of the prototype sensor when dealing with the
log-polar transformation. The results could be easily generalised to any particular
log-polar mapping, by modifying the constant parameters involved
in the transformation.
The retino-cortical mapping is implemented in a circular CCD array, using
a polar scan of a space-variant sampling structure, characterized by the
sampling period and the eccentricity (the distance from the center of the
sensor). The spatial geometry of the receptors is obtained through a square
tessellation and a sampling grid formed by concentric circles [?]. The prototype
CCD sensor, depicted in Fig. 1, is divided into 3 concentric areas each
consisting of 10 circular rows and a central fovea. Each circular row consists
of 64 light sensitive elements [?]. The central fovea is covered by a square
array of 102 light sensitive elements. 4 In the experiments the information
coming from the fovea is not used.
As for the extra-foveal part of the sensor the retino-cortical transformation
3 The institutions involved in the design and fabrication of the retinal CCD sensor
are: DIST - University of Genoa, Italy; University of Pennsylvania - Dept. of Electrical
Engineering, USA. The actual fabrication
of the chip was done at IMEC, Leuven, Belgium.
4 Currently the performance of the CCD sensor is being evaluated (the first "real"
image has recently been acquired) and a prototype camera is being built. The experiments
reported in this paper are carried out by re-sampling standard TV images following the
geometry of the sampling structure of the sensor.
is defined by:

$\xi = \log_a \rho - p, \qquad \gamma = q\,\eta$    (1)

where $(\rho, \eta)$ are the polar coordinates of a point on the retinal plane,
and p, q and a are constants determined by the physical layout of the CCD sensor.
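As an illustration of the re-sampling process used in the experiments (standard TV images re-sampled following the sensor geometry), the following Python sketch maps a Cartesian grey-level image onto a log-polar grid. The grid sizes, the minimum eccentricity and the logarithmic base used here are illustrative values chosen for this sketch, not the calibrated constants of the prototype sensor.

import numpy as np

def cartesian_to_logpolar(image, n_rings=30, n_sectors=64, rho_min=5.0, a=1.09):
    """Re-sample a grey-level image onto a log-polar grid centred on the image centre.
    n_rings, n_sectors, rho_min and the base `a` are illustrative, not calibrated values."""
    h, w = image.shape
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    out = np.zeros((n_rings, n_sectors), dtype=image.dtype)
    for xi in range(n_rings):          # radial (log rho) coordinate
        rho = rho_min * a ** xi        # exponentially growing eccentricity
        for s in range(n_sectors):     # angular coordinate
            eta = 2.0 * np.pi * s / n_sectors
            x = int(round(cx + rho * np.cos(eta)))
            y = int(round(cy + rho * np.sin(eta)))
            if 0 <= x < w and 0 <= y < h:
                out[xi, s] = image[y, x]
    return out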
3 Tracking ego-motion and optical flow
The ability to quickly detect obstacles and evaluate the time-to-impact, in order
to react and avoid them, is of vital importance for animates. Passive vision
techniques can be beneficially adopted if active movements are performed
[?, ?, ?, ?, ?, ?]. A dynamic spatio-temporal representation of the scene,
which is the time-to-impact with the objects, can be computed from the
optical flow which is extracted from monocular image sequences acquired
during "tracking" movements of the sensor. The image velocity can be
described as function of the camera parameters and split into two terms
depending on the rotational and translational components of camera velocity
respectively:
~
F
(2)
~
Z
are the distances of the camera from the fixation point in two
successive instants of time, OE, ' and / are the rotations of the camera referred
to its coordinate axes, shown in Fig. 2 and Z is the distance of the
world point from the image plane. As we are interested in computing the
time-to-impact Wz
Z , then ~ V t must be derived from the total optical flow.
The rotational part of the flow field $\vec{V_r}$ can be computed from proprioceptive
data (e.g. the camera rotational angles) and the focal length. Once
the global optic flow $\vec{V}$ is computed, $\vec{V_t}$ is determined by subtracting
$\vec{V_r}$ from $\vec{V}$. Adopting the constrained tracking egomotion, the rotational angles
are known, as they correspond to the motor control generated by the
fixation/tracking system.
The image velocity is computed by applying an algorithm for the estimation
of the optical flow to a sequence of cortical images [?]. In Fig. 3
the first and last images of a sequence of 16 are shown. The images have
been acquired with a conventional CCD camera and digitized with 256x256
pixels and 8 bits per pixel. The motion of the sensor was a translation plus a
rotation OE around its horizontal axis X. During the movement, the direction
of gaze was fixed on a point on the ground far behind the sensor. In Fig.
4 (a) the result of the retinal sampling applied to the first image in Fig. 3
and 4(b) the simulated output of the retinal sensor obtained following the
characteristics of the chip are shown. The resulting images are 30x64 pixels.
When evaluating the presented experimental results, the extremely low number
of pixels (1920) should be considered.
The optical flow is computed by solving an over-determined system of
linear equations in the unknown terms (u, v). The equations impose
the constancy of the image brightness over time [?] and the stationarity of
the image motion field [?, ?]:

$\frac{dI}{dt} = 0, \qquad \frac{d}{dt}\left(\frac{dI}{dt}\right) = 0$    (3)

where I represents the image intensity of the point (x, y) at time t. The
least squares solution of (3) is computed for each point on the cortical plane
as a function of the first- and second-order spatio-temporal derivatives of the
image intensity $\partial I/\partial \xi$, $\partial I/\partial \gamma$ and $\partial I/\partial t$ (equation (4)),
where $(\xi, \gamma)$ represent the point coordinates on the cortical plane.
In Fig. 5 the optical flow of the sequence of Fig. 4(b) is shown.
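A local least-squares flow estimator in the spirit of the computation described above can be sketched as follows. This sketch keeps only the brightness-constancy constraint accumulated over a small spatial window (a Lucas-Kanade style approximation) and omits the temporal stationarity constraint used in the paper; the function and parameter names are ours.

import numpy as np

def local_flow(I1, I2, window=5):
    """Least-squares flow from two frames using only brightness constancy
    accumulated over a small window; an approximation of the paper's estimator."""
    Iy, Ix = np.gradient(I1.astype(float))     # spatial derivatives
    It = I2.astype(float) - I1.astype(float)   # temporal derivative
    r = window // 2
    u = np.zeros_like(Ix)
    v = np.zeros_like(Ix)
    for y in range(r, I1.shape[0] - r):
        for x in range(r, I1.shape[1] - r):
            ax = Ix[y - r:y + r + 1, x - r:x + r + 1].ravel()
            ay = Iy[y - r:y + r + 1, x - r:x + r + 1].ravel()
            b = -It[y - r:y + r + 1, x - r:x + r + 1].ravel()
            A = np.stack([ax, ay], axis=1)
            AtA = A.T @ A
            if np.linalg.cond(AtA) < 1e6:      # keep only well-conditioned points
                u[y, x], v[y, x] = np.linalg.solve(AtA, A.T @ b)
    return u, v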
3.1 First step: relating the motion equations to the log-polar
mapping
In (2) we defined the relation between camera velocity and optical flow. Now
we develop the same equations for the velocity field transformed onto the
cortical plane. The goal is to find an expression which relates the cortical
velocity $(\dot{\xi}, \dot{\gamma})$ to the rotational flow $\vec{V_r}$, in order to derive the translational flow
$\vec{V_t}$ (and hence compute the time-to-impact).
Firstly we derive the motion equations on the cortical plane:

$\dot{\xi} = \log_a e \; \frac{\dot{\rho}}{\rho}, \qquad \dot{\gamma} = q\,\dot{\eta}$    (5)

where e is the natural logarithmic base and a, q are constants related to
the eccentricity and the density of the receptive fields of the retinal sensor.
The retinal velocity $(\dot{\rho}, \dot{\eta})$ can be expressed as a function of the retinal
coordinates relative to a Cartesian reference system:

$\dot{\rho} = \frac{x\dot{x} + y\dot{y}}{\rho}, \qquad \dot{\eta} = \frac{x\dot{y} - y\dot{x}}{\rho^2}$    (6)

where $(\dot{x}, \dot{y})$ is the retinal velocity of the image point, relative to a
Cartesian reference system centered on the fovea. Substituting (6) in (5)
yields:

$\dot{\xi} = \log_a e \; \frac{x\dot{x} + y\dot{y}}{\rho^2}, \qquad \dot{\gamma} = q\, \frac{x\dot{y} - y\dot{x}}{\rho^2}$    (7)

developing (7) and making explicit the retinal velocity $\vec{V} = (\dot{x}, \dot{y})$:

$\dot{x} = x\,\dot{\xi}\,\log_e a - \frac{y}{q}\,\dot{\gamma}, \qquad \dot{y} = y\,\dot{\xi}\,\log_e a + \frac{x}{q}\,\dot{\gamma}$    (8)

by substituting the expression for $\vec{V_r}$ from (2) in (8) we obtain the translational
flow $\vec{V_t}$ referred to the retinal plane:

$\vec{V_t} = \left( x\,\dot{\xi}\,\log_e a - \frac{y}{q}\,\dot{\gamma}, \;\; y\,\dot{\xi}\,\log_e a + \frac{x}{q}\,\dot{\gamma} \right) - \vec{V_r}$    (9)

where $(\dot{\xi}, \dot{\gamma})$ is the velocity field computed from the sequence of cortical images.
It is worth noting that, expressing the Cartesian coordinates of a retinal
sampling element in microns, the focal length and the retinal velocity $\vec{V}$ are
also expressed in the same units. If the rotational velocity of the camera
during the tracking is available, then $\vec{V_t}$ can be computed.
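A minimal sketch of this de-rotation step is given below, assuming the standard perspective equations for the rotational flow with small rotation rates; the sign convention used here is one of several in use and would need to be matched to the actual camera frame.

import numpy as np

def subtract_rotational_flow(u, v, x, y, F, wx, wy, wz):
    """Remove the rotational flow predicted from known camera rotation rates
    (wx, wy, wz) and focal length F, leaving an estimate of the translational flow.
    x, y are pixel coordinates relative to the principal point."""
    u_rot = (x * y / F) * wx - (F + x * x / F) * wy + y * wz
    v_rot = (F + y * y / F) * wx - (x * y / F) * wy - x * wz
    return u - u_rot, v - v_rot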
The time-to-impact of all the image points on the retinal plane can be
recovered using a well-known relation [?]:

$\frac{Z}{W_Z} = \frac{D_f}{|\vec{V_t}|}$    (10)

where $D_f$ is the displacement of the considered point from the focus of the translational
field on the image plane, and $W_Z$ is the translational component of the
sensor velocity along the optical (Z) axis. The ratio on the left-hand side
represents the time-to-impact with respect to the considered world point.
The location of the FOE is estimated by computing the least squares fitting
of the pseudo intersection of the set of straight lines determined by the
velocity vectors ~ V t .
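The pseudo-intersection can be computed, for instance, as the least-squares solution of the line equations supported by the translational flow vectors; the sketch below is a generic implementation of this idea, not the authors' code.

import numpy as np

def estimate_foe(xs, ys, us, vs):
    """Least-squares pseudo-intersection of the lines supporting the translational
    flow vectors: each vector (u, v) at (x, y) contributes the line v*X - u*Y = v*x - u*y."""
    xs, ys, us, vs = map(np.asarray, (xs, ys, us, vs))
    A = np.stack([vs, -us], axis=1)
    b = vs * xs - us * ys
    foe, *_ = np.linalg.lstsq(A, b, rcond=None)
    return foe  # (FOE_x, FOE_y)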
This formulation is very direct as it does not exploit completely the implications
and advantages of the log-polar mapping, and also many external
parameters related to the camera and the motion are required to compute
the time-to-impact. Both the focal length and the rotational motion of the
camera must be known in order to estimate the translational component
of velocity $\vec{V_t}$, while the FOE must be computed from $\vec{V_t}$ to estimate the
time-to-impact. Moreover, this scheme cannot be used in the presence of
independently moving objects, since the basic assumption is that the environment
is completely static. In summary, the algorithm can be applied if:
- the rotational part of the camera motion is known;
- the intrinsic camera parameters, at least the focal length, are known;
- the objects in the scene do not move independently.
By exploiting further the advantages and peculiarities of the polar and
log-polar mapping, all these requirements and constraints will be "gradu-
ally" relaxed in the analysis performed in the remainder of the paper. Our
final aim being the estimation of the time to impact using only image-derived
parameters, in the case of camera and object motion.
Even though this algorithm requires many constraints, they still do not
limit the generality of its possible applications. It can be successfully ap-
plied, for example, to locate obstacles and detect corridors of free space
during robot navigation. The accuracy of the measurements depends on
the resolution of the input images, which, for the retinal sensor, is very
low. Nevertheless, the hazard map computed with this method can still be
exploited for its qualitative properties in visual navigation.
Fig. 6(a) shows the time-to-impact of the objects (hazard map) of the
scene in Fig. 4(a), and in Fig. 6(b) the associated uncertainty. In appendix
A a quantitative measure of the error in the estimated time-to-impact, is
derived.
3.2 Exploiting further the polar and log-polar mapping for
optical flow
It is possible to generalise the property of the logarithmic-polar complex mapping
of transforming an object's dilation into a translation along the $\xi$ (radial)
coordinate to more general and complex kinds of motion. Generally, any
expansion of the image of an object, due either to the motion of the camera
or of the object itself, will produce a radial component of velocity on the retinal
plane. This intuitive observation can also be stated in the following way:
the time-to-impact of a point on the retinal plane only affects the radial
component of the optical flow;
we will formally prove this assertion in the remainder of the paper.
This observation leads us to adopt an approach different from the one pursued
in the previous section, in order to represent the optical flow on the cortical
plane. It turns out that the most convenient way of representing and
analysing velocity is in terms of its radial and angular components with
respect to the fovea.
Let us consider, for the moment, a general motion of the camera, both
rotational and translational. We will later consider the special case of tracking
egomotion, explaining how it simplifies the analysis, and finally deal
with object motion as well. The velocity on the image plane along the radial
and angular coordinates is:
$\dot{\rho} = \frac{x\dot{x} + y\dot{y}}{\rho}, \qquad \dot{\eta} = \frac{x\dot{y} - y\dot{x}}{\rho^2}$    (11)

where $(\dot{x}, \dot{y})$ is the retinal velocity with respect to a Cartesian coordinate
system centered on the fovea. Plugging in the motion equations for small
angular rotations (as from (2)):
F
F
sin j
ae
F
F
sin j
by substituting the values for
ae
hi Wx
sin
Z
as the retinal sensor performs a logarithmic mapping, we obtain:
ae
ae log a e
Z
ae
ae
OE sin fl
log a e
ae
hi Wx
sin
Z
cos
These equations simply show that, while both components of the optical
flow depend upon the depth Z of the objects in space, only the radial component
depends upon the time-to-impact $Z/W_z$. Moreover, only the angular
component $\dot{\gamma}$ depends upon rotations around the optical axis, while the radial
component is invariant with respect to $\psi$. Notice that up to now we
have not made any hypothesis about the motion of the sensor. Therefore
equations (13) certainly hold for any kind of camera motion. Even though
the analysis has been conducted for a moving camera in a static environment,
the result obtained in (13) holds for any combination of object and
camera motion. All the motion parameters are expressed in terms of the translational
velocities in space $(W_x, W_y, W_z)$ and the rotational velocities $(\phi, \theta, \psi)$ referred
to a camera-centered Cartesian coordinate system. These velocities
need not be absolute velocities but can represent the relative motion of the
camera with respect to the objects, which is the sum of the two velocities.
Equations (13) can be further developed in the case of tracking egomo-
tion. By imposing the tracking constraint (the image velocity of the fixation
point is zero) in the general optical flow equations, this
motion constraint can be expressed as:
D 2 is the distance of the fixation point from the retinal plane measured at
the frame time following the one where the optical flow is computed.
By developing equation (13) using the values for W x and W y given in
(14), we obtain:
ae
Z
OE sin fl
log a e
ae
Z
sin fl
The structure of the two equations is very similar. If the rotational
velocity of the camera is known, then it is possible to substitute the values
for $(\phi, \theta, \psi)$ into (15) to directly obtain $Z/D_2$ from the angular component
$\dot{\gamma}$ and $Z/W_z$ from the radial component $\dot{\xi}$:
Z
F
OE cos fl
F
OE cos fl
Z
log e a
F
OE sin fl
Also the focal length F must be known, while q and a are constant values
related to the physical structure of the CCD sensor [?]. It is worth noting
the importance of $Z/D_2$ and $Z/W_z$. In fact, they both represent relative measurements
of depth, which are of primary importance for humans and animals
in relating to the environment.
It is interesting to compare this last equation with equations (9) and
(10). As can be noticed, equation (16) does not depend on the position
of the FOE. Therefore it is not necessary to subtract the rotational
component $\vec{V_r}$ from the optical flow to estimate the translational
component $\vec{V_t}$.
A further option is to try to compute the time-to-impact from the partial
derivatives of both velocity components:
ae
OE sin fl
by combining this equation with the expression of -
-, as from equation (15),
we obtain:
log e a
Z
F
OE sin fl
Z
W z
log e a
ae
F
OE sin fl
Also in this case the time-to-impact can be computed by directly substituting
the values of the rotational angles. Note that in this case the equation does
not depend on the rotation around the optical axis. This is the first important
result, as it makes the computation of the time-to-impact independent
of one motion parameter. We now develop alternative methods which allow
us to eliminate the other parameters as well, since they cannot be easily computed
from the images.
Let us consider the first derivative of -
-:
@-
F
F
ae
Z
'-'
OE sin
notice that ae = a -+q .
This result suggests another possibility of using the first derivative of -
- to
obtain a relation which is similar to (18):
Z
W z
log e a
F
OE sin fl
By considering neighbouring pixels at the same eccentricity $\xi$, it
is possible to formulate two equations in two unknown terms of
the form:
Z
W z
F
OE sin
Setting now
@-
Z
W z
We have obtained an equation for the time-to-impact which only involves
image-derived parameters, such as the velocity, its derivative and the image
coordinates, without requiring knowledge of the motion parameters. It is
also worth noting that in all the formulations derived in this section the
rotational component does not have to be subtracted from the optical flow to
recover the translational flow (as in other approaches [?, ?]), nor does the FOE
position have to be computed. These are very important features of a method for
the computation of the time-to-impact, since the rotational velocity and the FOE
are otherwise computed indirectly from the optical flow and are therefore subject to errors.
Even though equation (22) can be directly applied, more points along
the same ray should be used to improve the robustness of the algorithm. In
this case an over-constrained system of equations can be used and solved
using standard least squares techniques: the vector of unknowns, whose first
component is $W_z/Z$, is recovered as $(A^t A)^{-1} A^t \vec{b}$ (23), where each row
of A and $\vec{b}$ collects the coefficients of equation (22) at one point along the ray.
The underlying assumption of constant depth requires the use of a small
neighbourhood of $\rho_i$. Depth discontinuities can in fact introduce artifacts
and errors when processing images of cluttered scenes.
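A generic sketch of the least-squares combination is shown below; the construction of the rows of A and b from the measured flow along the ray depends on the exact form of the constraint in equation (22) and is therefore left to the caller, so only the standard normal-equation solve is illustrated.

import numpy as np

def solve_over_constrained(A, b):
    """Standard least-squares solution x = (A^t A)^{-1} A^t b used to combine
    per-pixel constraints gathered along one ray; A is (n x m), b is (n,)."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    return np.linalg.solve(A.T @ A, A.T @ b)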
Another equation (which is simpler) for the time-to-impact can be obtained
by combining equation (17) and (19):
W z
Z
log e a
ae
F
OE sin fl
W z
Z
log e a
F
OE sin fl
by combining these two equations we obtain:
Z
W z
log e a \Gamma
@-
Again, equation (25) allows the direct computation of the time-to-impact
from the images only. Notice that only first order derivatives of the optical
flow are required and the pixel position does not appear. The parameters q
and a are calibrated constants of the CCD sensor. It is interesting to relate
this result to the divergence approach proposed by Thompson [?] and also
recently by Nelson and Aloimonos [?]. Equation (25) can be regarded as a
formulation of the oriented divergence for the tracking motion, modified to
take into account the fact that the sensor is planar and not spherical.
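The first-order derivatives of the flow required by these formulas can be approximated by finite differences on the polar or log-polar grid; a possible sketch, with illustrative grid spacings, is the following.

import numpy as np

def polar_flow_derivatives(rho_dot, theta_dot, d_rho=1.0, d_theta=2 * np.pi / 64):
    """Finite-difference estimates of the partial derivatives of the radial and
    angular flow components on the polar grid (axis 0: radial, axis 1: angular).
    The grid spacings are illustrative values."""
    d_rhodot_d_rho = np.gradient(rho_dot, d_rho, axis=0)
    d_rhodot_d_theta = np.gradient(rho_dot, d_theta, axis=1)
    d_thetadot_d_rho = np.gradient(theta_dot, d_rho, axis=0)
    d_thetadot_d_theta = np.gradient(theta_dot, d_theta, axis=1)
    return d_rhodot_d_rho, d_rhodot_d_theta, d_thetadot_d_rho, d_thetadot_d_theta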
In Fig. 7(a) the first and last images of a sequence of 10 are shown. The
images have been acquired at the resolution of 256x256 pixels and then re-sampled
performing the log-polar and polar mapping. The motion of the
camera was a translation plus a rotation ' around its vertical axis Y . The
direction of gaze was controlled so as to keep the fixation on the apple in the
center of the basket (which is the object nearest to the observer). The inverse
time-to-impact $W_z/Z$, computed by applying equation (25) to the optical flow
in Fig. 8(a), is shown in Fig. 9(a). Despite the low resolution the closest
object is correctly located.
A last equation for the time-to-impact can be obtained by computing
the second order partial derivative of -
-:
F
ae
Z
'-'
OE sin fl
log e a (26)
Z
W z
log e a \Gamma
log a e
This equation clearly states that the time-to-impact can be computed
using only the radial component of velocity with respect to the fovea. More-
over, this formulation does not depend on the motion of the fixated target.
The motion parameters involved can also represent relative motion. As the
equation is applied to each image point the method is still valid for scenes
containing many independently moving objects. In Fig. 9(b) the inverse
time-to-impact of the scene in Fig. 7, computed by applying equation (31)
to the optical flow in Fig. 8(b), is shown.
The analysis performed in this section has been developed for the CCD
retinal sensor but many of the results are still valid for conventional raster
sensors. Some advantages are still obtained by using the retinal sensor. Let
us consider equation (12) and apply the tracking constraint without making
the complex logarithmic mapping:
Z
ae
Z
The partial derivative of $\dot{\eta}$ with respect to $\eta$ only changes by a constant
factor, while the partial derivatives of $\dot{\rho}$ with respect to $\rho$ are:
ae
@ae
Z
F
ae
A first formulation for the time-to-impact involves the second order partial
derivatives of the radial component $\dot{\rho}$ of the optical flow:
Z
W z
ae
@ae
ae
An equation for the time-to-impact containing only first order partial
derivatives of velocity can be obtained by considering also the angular component
$\dot{\eta}$; by combining this equation with (28) and (29) we obtain:
Z
W z
ae
ae
ae
@ae
We can conclude that an abundance of equations exist to compute the
time-to-impact in the case of tracking egomotion if the proper coordinate
system is chosen (polar in this approach). It has been shown that a polar
velocity representation seems to be best suited to recover the time-to-impact
from image sequences. Even though technology has been very conservative
in producing only raster CCD arrays and imaging devices (therefore strongly
linked to a Cartesian coordinate system), a polar transformation can be performed
in real time by using commercially available hardware, reinforcing the
feasibility of the proposed methodology. On the other hand, a retinal CCD
sensor naturally incorporates the polar representation, and also introduces
a logarithmic scaling effect which makes the equations simpler (they do not
depend on the radial coordinate of the point $\rho$) [?].
In Fig. 10 the measurements of the inverse time-to-impact for the sequence
in Fig. 7, computed using 10(a) equation (31) and 10(b) equation
(33), are shown.
In order to demonstrate the applicability of the method in the case of
independently moving objects, equation (33) has been applied to a sequence
of images where the camera is moving along a trajectory parallel to the
optical axis and an object is moving along a collision course toward the
camera (but not along the direction of the camera optical axis, and slightly
rotating as well). The first and last images are shown in Fig. 11(a). The
output of the polar mapping on the Cartesian plane is shown in Fig. 11(b),
and in Fig. 11(c) the polar mapping in the $(\rho, \theta)$ plane. The hazard maps of the
scene relative to 2 successive frames are shown in Fig. 12.
In appendix B a comparative analysis of the accuracy of the formulations
derived for the time-to-impact in the case of polar and log-polar mapping is
performed.
4 Analysing a general motion
At this stage it is possible to relax the tracking constraint and consider a
general motion of the camera and/or of the objects in the scene (we already
pointed out that the equations still hold for relative motions). To compute
the time-to-impact in the case of general motion equation (25) can be still
applied. Computing the partial derivatives of the optical flow, as stated by
equation (13), we obtain:
ae Z
ae
OE sin fl
ae
hi Wx
cos
sin
now combining the expressions for the optical flow and its partial derivatives
we obtain:
log e a
F
OE sin fl
log e a
F
OE sin fl
combining these two equations we obtain:
Z
W z
log e a \Gamma
@-
which is exactly equation (25). Similarly it can also be demonstrated that
the other equations, developed to recover the time-to-impact, hold for general
motion.
This is a very important result, as it reinforces the relevance of a polar
representation for the optical flow and furthermore extends the validity of
the method to arbitrary motion.
As already pointed out, the equations obtained for the time-to-impact
are very similar to the divergence operator: $W_z/Z$, which is the factor
directly estimated from the equations, corresponds to the divergence measurement.
However, while the divergence theory finds its mathematical proof
assuming a spherical geometry for the imaging device, the proposed method
holds exactly for any kind of planar sensor.
The best way to implement this scheme using a conventional raster
sensor is to map each sampled image into a polar representation and then
compute the optical flow directly on the polar images. The estimated optical
flow is then already represented in the polar $(\rho, \theta)$ plane. As a matter of fact,
mapping the optical flow computed from the original images in the Cartesian
plane is more efficient only at the beginning of the visual process. In a steady
state, after the initial time delay has passed, by mapping the original images
only one frame has to be processed at any time (instead of the two components
of the optical flow). Moreover, mapping the original images results in a more
accurate evaluation of the retinal flow. The mapping can be implemented very
efficiently by using two look-up tables to directly address the pixels from the
polar to the Cartesian coordinate system.
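A possible implementation of the look-up-table mapping, with assumed array shapes and sampling parameters, is sketched below; once the tables are built, each new frame is re-sampled with two array look-ups per polar cell.

import numpy as np

def build_polar_luts(height, width, n_rho, n_theta):
    """Pre-compute the two look-up tables mapping each (rho, theta) cell of the
    polar image to a pixel of the Cartesian frame."""
    cy, cx = (height - 1) / 2.0, (width - 1) / 2.0
    rho_max = min(cx, cy)
    rho = np.linspace(1.0, rho_max, n_rho)[:, None]
    theta = np.linspace(0.0, 2.0 * np.pi, n_theta, endpoint=False)[None, :]
    lut_x = np.clip(np.round(cx + rho * np.cos(theta)).astype(int), 0, width - 1)
    lut_y = np.clip(np.round(cy + rho * np.sin(theta)).astype(int), 0, height - 1)
    return lut_y, lut_x

# usage: lut_y, lut_x = build_polar_luts(h, w, 64, 128); polar_frame = frame[lut_y, lut_x]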
5 Conclusion
The application of a retina-like anthropomorphic visual sensor and the implications
of polar and log-polar image mapping for dynamic image analysis
have been investigated. In particular the case of a moving observer undertaking
active movements has been considered as a starting point to directly
estimate the time-to-impact from optical flow. The main advantages obtained
with a log-polar retina-like sensor are related to the space-variant
sampling structure characterized by image scaling and rotation invariance
and a variable resolution. Due to this topology, the amount of incoming
data is considerably reduced but a high resolution is preserved in the part
of the image corresponding to the focus of attention (which is also the part
of the image where a higher resolution in the computation of velocity is
necessary).
Adopting a tracking egomotion strategy, the computation of the optical
flow and of the time-to-impact is simplified. Moreover, as the amplitude of image
displacements increases from the fovea to the periphery of the retinal image,
almost the same computational accuracy is achieved throughout the visual
field, minimizing the number of pixels to be processed.
The polar mapping introduces a considerable simplification in the motion
equations, which allows the direct computation of the time-to-impact. The
polar and log-polar representations for optical flow are certainly best suited
for the computation of scene structure, mainly for the following reasons:
- a number of simple equations can be written which relate the optical
flow and its derivatives (up to the second order only in one case) to
the time-to-impact;
- relative depth is easily computed from image parameters only; it is
either relative to the observer velocity ($Z/W_z$) or to the distance of the
fixation point in space ($Z/D_2$);
- the dependence on depth is decoupled in the radial and angular component
of the optical flow: only the radial component is proportional
to the time-to-impact;
- the derived equations can be easily applied also to images acquired
with a conventional raster CCD sensor. The polar mapping can be
efficiently implemented using general purpose hardware, obtaining real-time
performance (the complete system, implemented on a Sun SPARCstation-1,
computes the optical flow and the time-to-impact in less
than one minute).
The estimation of the time-to-impact seems to be a very important process
in animals to avoid obstacles or to catch prey. The importance of the
time-to-impact as a qualitative feature has already been pointed out [?].
The time-to-impact or its inverse measurement can be effectively used to
detect and avoid obstacles, without requiring an exact recovery of the ob-
ject's surface. Therefore, even though optical flow and its derivatives have
to be computed (unfortunately with low accuracy) they are not critical because
accuracy is not mandatory for qualitative estimates. This fact also
implies that accurate camera calibration is not needed to accomplish visual
navigation.
In human beings most "low-level" visual processes are directly performed
on the retina or in the early stages of the visual system. Simple image processes
like filtering, edge and motion detection must be performed quickly
and with minimal delay from the acquisition stage because they are of vital importance
for survival (for example to detect static and moving obstacles). The
computation of the time-to-impact represents a simple process (a sort of
building block) which could be implemented directly on the sensor as local
(even analog, using the electrical charge output of the sensitive elements)
parallel operations, avoiding the delay for decoding and transmitting data
to external devices. 5
Appendix A: "A quantitative estimation of the error recovering the time-to-impact"
The estimation of the time-to-impact is modeled as a stochastic process
where the parameters involved in the computation are uncorrelated
probabilistic variables. Assuming the process to be Gaussian, with a set of
variables whose mean values are equal to the measured ones, then:

$\sigma_T^2 = J\, S\, J^t$
Acknowledgements
: The authors thank F. Bosero, F. Bottino and A. Ceccherini for
the help in developing the computer simulation environment for the CCD retinal sensor.
We are also grateful to Frank and Susan Donoghue, who carefully proofread the text
and corrected the English.
This research was supported by the Special Project on Robotics of the Italian National
Council of Research.
(x, y) is the position of the point on the retinal plane, with reference to a
Cartesian coordinate system centered on the fovea; $(\dot{x}, \dot{y})$ is the estimated
retinal velocity; $(F_x, F_y)$ are the coordinates of the focus of
expansion on the retinal plane; $(\phi, \theta, \psi)$ are the rotation angles undertaken
by the camera during the tracking motion. J is the Jacobian of the $T_{ti}$
function and S is the diagonal matrix of the variances of the independent
variables $(x, y, \dot{x}, \dot{y}, F_x, F_y, \phi, \theta, \psi)$. Computing the partial derivatives
of (10) we obtain:
oe y+
oe
The variances $(\sigma_x, \sigma_y)$ associated with the position of the image point are
set equal to the radius of the sensitive cell on the CCD array, which depends
on the spatial position of the point within the field of view. The variances
in the rotational angles are set equal to a constant, which is determined by
the positional error of the driving motors. In the experiments performed
these values are set to (0.1, 0.1, 0.1) degrees, which is a reasonable error for
standard DC servo-motors. The variances of the FOE position $(\sigma_{F_x}, \sigma_{F_y})$
are determined from the least squares fit error.
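The propagation itself reduces to the quadratic form $J S J^t$ stated above; a minimal sketch, with the Jacobian entries supplied by the caller, is:

import numpy as np

def propagate_variance(jacobian, variances):
    """First-order (Gaussian) error propagation: variance of the time-to-impact
    estimate is J S J^t, with J the row Jacobian of the estimator with respect to
    the independent variables and S the diagonal matrix of their variances."""
    J = np.asarray(jacobian, dtype=float).reshape(1, -1)
    S = np.diag(np.asarray(variances, dtype=float))
    return float(J @ S @ J.T)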
The variances in the optical flow components $(\dot{\xi}, \dot{\gamma})$ on the
cortical plane are directly determined by differentiating the least squares
solution of (4). The variances of the computed image derivatives, which are
used to compute the variances in the optical flow, are estimated by assuming
a uniform distribution and unitary quantization step of the gray levels [?]:

$\sigma^2 = \frac{1}{12}\sum_i w_i^2$

where $w_i$ are the weights of the derivative operator with a 5 point (frames for the time
derivative) support. A value of $\sigma_I^2$ equal to 0.0753 for the first derivative,
and $\sigma_{II}^2$ equal to 0.8182 for the second derivative, are obtained.
The variances of the velocity vectors on the retinal plane are obtained
by differentiating (8) with respect to $\dot{\xi}$ and $\dot{\gamma}$, where
$(\sigma_{\dot{\xi}}^2, \sigma_{\dot{\gamma}}^2)$ are the variances of the optical flow computed on the cortical plane.
Grouping similar terms in (38), we obtain:
oe
The expression within brackets appearing in the last two rows represents
the most relevant term, as it is multiplied by $D_f$. Consequently, the higher
errors are those due to the computation of image velocity and the estimation
of the rotational angles of the camera (or conversely, the positioning error
of the driving motors). The terms containing the errors in the rotational
angles are also quadratic in the image coordinates, hence the periphery of
the visual field (which is the area where the spatial coordinates of the pixels
reach the greatest values) will be affected more than the fovea. Nevertheless,
all these terms are divided by the modulus of velocity raised to the sixth
power. Therefore, if the amplitude of image displacement is sufficiently large
the errors drop very quickly. As a matter of fact, the amplitude of the optical
flow is of crucial importance in reducing the uncertainty in the estimation
of the time-to-impact.
Appendix B: "A comparative analysis of the accuracy recovering the time-to-impact in the case of polar and log-polar mapping"
It is beyond the aim of this paper to perform an exhaustive error analysis
for all the derived equations, but it is interesting to compare, analytically,
the results obtained for the polar and the log-polar mapping.
In analogy to the analysis conducted for equation (10) it is possible to model
the computation of the time-to-impact as a Gaussian stochastic process
where the parameters involved in the computation are uncorrelated probabilistic
variables. Then the variance of the time-to-impact as from equation (25) is:
oe T
(log e a) 2 oe -
@- and ffi -
@fl .
Similarly the variance of the time-to-impact as from equation (33) is:
oe T
ae
ae
ae
@ae and ffi -
@j .
Let us assume the variances of the optical flow and its partial derivatives
to be the same in both equation (41) and (42). This assumption is justified
by the fact that the same formulation is used to estimate the optical flow.
Considering a unitary sampling step in the polar mapping, then $\sigma_\rho^2 \approx 1$.
It is then possible to compare the two variances through an inequality in which
$\sigma^2$ is the variance of the radial component of the optical flow. We can
first notice that the value of the term on the left hand side (which contains a
factor $\rho^4$) depends on the value of $\rho$, while the term on the right hand side
is constant. Therefore the first term on the left hand side can be neglected
with a good approximation, because it constitutes a higher order infinitesimal term
with respect to 1 and the variance $\sigma^2$ is close to unity. It is then sufficient
to evaluate the remaining terms: given the value of a = 1.0945543, it is trivial to
find that the value of the variance as from equation (41) is lower than that from
equation (42) if the radial coordinate $\rho$ is less than about 22.
The same analysis can be made comparing equation (27) and (31). The
expressions for the variance of the time-to-impact for the two equations are:
oe T
(log e a) 2 oe -
(44a)
oe T
ae
ae
ae3
and
ae
. Let us assume the variances of the independent parameters to be small or,
at least, bounded to some integer value. If the velocity field is smooth then
the values of its derivatives are also bounded. Then the higher terms turn out
to be those relative to the error in the second derivative of the optical flow.
We can then compare equations (44a) and (44b) by analysing the corresponding
inequality. Again, assuming the variances to be almost equal, this inequality
simply represents the intersection of a parabola, which is a function of the
radial coordinate $\rho$, with a horizontal straight line. The measurement of the
time-to-impact performed using equation (27) turns out to be more accurate
than that obtained using equation (31), in the sense of minimum variance, if
the radial coordinate of the considered point is greater than about $\log_a e$.
This result nicely complements the constraint found comparing equations (25)
and (33): considering a point in the image and varying the radial coordinate
$\rho$, whereas equation (31) allows a more accurate estimation than equation (27),
equation (25) has a lower variance than equation (33), thus balancing the
performance of the two formulations involving the polar or log-polar mapping
throughout the entire field of view.
--R
[Figure captions: simulated output of the retinal CCD sensor; maps of the time-to-impact computed by applying equation (31) to the optical flow; polar and log-polar mappings (Cartesian (x, y) and polar planes) applied to frames of the image sequences.]
--TR
On edge detection
Motion stereo using ego-motion complex logarithmic mapping
Active vision: integration of fixed and mobile cameras
Motor and spatial aspects in artificial vision
Obstacle Avoidance Using Flow Field Divergence
Active Tracking Strategy for Monocular Depth Inference over Multiple Frames
Extending the `oriented smoothness constraint'' into the temporal domain and the estimation of derivatives of optical flow
Estimation of depth from motion using an anthropomorphic visual sensor
Active vision based on space-variant sensing
Measurement of Visual Motion
Robot Vision
Computer Vision
--CTR
Konrad Schindler, Geometry and construction of straight lines in log-polar images, Computer Vision and Image Understanding, v.103 n.3, p.196-207, September 2006
Mohammed Yeasin, Optical Flow in Log-Mapped Image Plane-A New Approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.125-131, January 2002
Jos Martnez , Leopoldo Altamirano, A new foveal Cartesian geometry approach used for object tracking, Proceedings of the 24th IASTED international conference on Signal processing, pattern recognition, and applications, p.133-139, February 15-17, 2006, Innsbruck, Austria
V. Javier Traver , Filiberto Pla, Similarity motion estimation and active tracking through spatial-domain projections on log-polar images, Computer Vision and Image Understanding, v.97 n.2, p.209-241, February 2005
Sovira Tan , Jason L. Dale , Alan Johnston, Performance of three recursive algorithms for fast space-variant Gaussian filtering, Real-Time Imaging, v.9 n.3, p.215-228, June
Nattel , Yehezkel Yeshurun, Direct feature extraction in a foveated environment, Pattern Recognition Letters, v.23 n.13, p.1537-1548, November 2002
Didi Sazbon , Hctor Rotstein , Ehud Rivlin, Finding the focus of expansion and estimating range using optical flow images and a matched filter, Machine Vision and Applications, v.15 n.4, p.229-236, October 2004
Pierre Chalimbaud , Franois Berry, Embedded active vision system based on an FPGA architecture, EURASIP Journal on Embedded Systems, v.2007 n.1, p.26-26, January 2007
Swarup Reddi , George Loizou, Analysis of Camera Behavior During Tracking, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.17 n.8, p.765-778, August 1995
Phillipe Burlina , Rama Chellappa, Analyzing Looming Motion Components From Their Spatiotemporal Spectral Signature, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.10, p.1029-1033, October 1996
Philippe Burlina , Rama Chellappa, Temporal Analysis of Motion in Video Sequences through Predictive Operators, International Journal of Computer Vision, v.28 n.2, p.175-192, June 1998
Frank Tong , Ze-Nian Li, Reciprocal-Wedge Transform for Space-Variant Sensing, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.17 n.5, p.500-511, May 1995
Pelegrn Camacho , Fabin Arrebola , Francisco Sandoval, Multiresolution vision in autonomous systems, Autonomous robotic systems: soft computing and hard computing methodologies and applications, Physica-Verlag GmbH, Heidelberg, Germany,
Zoran Duric , Azriel Rosenfeld , James Duncan, The Applicability of Greens Theorem to Computation of Rate of Approach, International Journal of Computer Vision, v.31 n.1, p.83-98, Feb. 1999
Jose Antonio Boluda , Fernando Pardo, A reconfigurable architecture for autonomous visual-navigation, Machine Vision and Applications, v.13 n.5-6, p.322-331, March
F. Wrgtter , A. Cozzi , V. Gerdes, A parallel noise-robust algorithm to recover depth information from radial flow fields, Neural Computation, v.11 n.2, p.381-416, Feb. 15, 1999
C. Capurro , F. Panerai , G. Sandini, Dynamic Vergence Using Log-Polar Images, International Journal of Computer Vision, v.24 n.1, p.79-94, Aug. 1997 | direct estimation;log-polar mapping;motion estimation;egomotion;tracking;visual navigation;anthropomorphic retina-like visual sensor;CCD image sensors;time-to-impact;optical flow |
628513 | Comparing Images Using the Hausdorff Distance. | The Hausdorff distance measures the extent to which each point of a model set lies near some point of an image set and vice versa. Thus, this distance can be used to determine the degree of resemblance between two objects that are superimposed on one another. Efficient algorithms for computing the Hausdorff distance between all possible relative positions of a binary image and a model are presented. The focus is primarily on the case in which the model is only allowed to translate with respect to the image. The techniques are extended to rigid motion. The Hausdorff distance computation differs from many other shape comparison methods in that no correspondence between the model and the image is derived. The method is quite tolerant of small position errors such as those that occur with edge detectors and other feature extraction methods. It is shown that the method extends naturally to the problem of comparing a portion of a model against an image. | Introduction
A central problem in pattern recognition and computer vision is determining the extent
to which one shape differs from another. Pattern recognition operations such as correlation
and template matching (cf. [17]) and model-based vision methods (cf. [4, 8, 11])
can all be viewed as techniques for determining the difference between shapes. We have
recently been investigating functions for determining the degree to which two shapes
differ from one another. The goal of these investigations has been to develop shape
comparison methods that are efficient to compute, produce intuitively reasonable re-
sults, and have a firm underlying theoretical basis. In order to meet these goals, we
argue that it is important for shape comparison functions to obey metric properties (see
[3] for related arguments).
In this paper we present algorithms for efficiently computing the Hausdorff distance
between all possible relative positions of a model and an image. (The Hausdorff distance
is a max-min distance defined below.) We primarily focus on the case in which
the model and image are allowed to translate with respect to one another, and then
briefly consider extensions to handle the more general case of rigid motion. There are
theoretical algorithms for efficiently computing the Hausdorff distance as a function of
translation [12, 14] and rigid motion [13]. Here we provide provably good approximation
algorithms that are highly efficient both in theory and in practice. These methods
operate on binary rasters, making them particularly well suited to image processing and
machine vision applications, where the data are generally in raster form. The three key
advantages of the approach are: (i) relative insensitivity to small perturbations of the
image, (ii) simplicity and speed of computation, and (iii) naturally allowing for portions
of one shape to be compared with another.
We discuss three different methods of computing the Hausdorff distance as a function
of the translation of a model with respect to an image. The first of these methods is
similar in many ways to binary correlation and convolution, except that the Hausdorff
distance is a nonlinear operator. The second method extends the definition of the
distance function to enable the comparison of portions of a model to portions of an
image. The third method improves on the first two, by using certain properties of the
Hausdorff distance to rule out many possible relative positions of the model and the
image without having to explicitly consider them. This speeds up the computation by
several orders of magnitude. All three of these methods can be further sped up using
special-purpose graphics hardware (in particular a z-buffer). We present a number of
examples using real images. These examples illustrate the application of the method
to scenes in which a portion of the object to be identified is hidden from view. Then
finally we show how the methods can be adapted to comparing objects under rigid
motion (translation and rotation).
1.1 The Hausdorff Distance
Given two finite point sets $A = \{a_1, \ldots, a_p\}$ and $B = \{b_1, \ldots, b_q\}$, the Hausdorff distance
is defined as

$H(A,B) = \max(h(A,B),\, h(B,A))$    (1)

where

$h(A,B) = \max_{a \in A} \min_{b \in B} \|a - b\|$    (2)

and $\|\cdot\|$ is some underlying norm on the points of A and B (e.g., the $L_2$ or
Euclidean norm).
The function h(A; B) is called the directed Hausdorff distance from A to B. It
identifies the point a 2 A that is farthest from any point of B, and measures the
distance from a to its nearest neighbor in B (using the given norm $\|\cdot\|$). That is,
h(A; B) in effect ranks each point of A based on its distance to the nearest point of
B, and then uses the largest ranked such point as the distance (the most mismatched
point of A). Intuitively, if h(A; d, then each point of A must be within distance
d of some point of B, and there also is some point of A that is exactly distance d from
the nearest point of B (the most mismatched point).
The Hausdorff distance, H(A;B), is the maximum of h(A; B) and h(B; A). Thus
it measures the degree of mismatch between two sets, by measuring the distance of
the point of A that is farthest from any point of B and vice versa. Intuitively, if the
Hausdorff distance is d, then every point of A must be within a distance d of some point
of B and vice versa. Thus the notion of resemblance encoded by this distance is that
each member of A be near some member of B and vice versa. Unlike most methods
of comparing shapes, there is no explicit pairing of points of A with points of B (for
example many points of A may be close to the same point of B). The function H(A;B)
can be trivially computed in time O(pq) for two point sets of size p and q respectively,
and this can be improved to $O((p+q)\log(p+q))$.
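For illustration, a brute-force O(pq) computation of h and H can be written as follows (the point sets are stored as rows of NumPy arrays, and the Euclidean norm is assumed).

import numpy as np

def directed_hausdorff(A, B):
    """h(A, B): for each point of A find the distance to its nearest point of B,
    then take the largest such distance (brute force, O(pq))."""
    d = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=2)  # p x q pairwise distances
    return d.min(axis=1).max()

def hausdorff(A, B):
    """H(A, B) = max(h(A, B), h(B, A))."""
    return max(directed_hausdorff(A, B), directed_hausdorff(B, A))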
It is well known that the Hausdorff distance, H(A;B), is a metric over the set of all
closed, bounded sets (cf., [9]). Here we restrict ourselves to finite point sets, because
that is all that is necessary for raster sensing devices. It should be noted that the
Hausdorff distance does not allow for comparing portions of the sets A and B, because
every point of one set is required to be near some point of the other set. There is,
however, a natural extension to the problem of measuring the distance between some
subset of the points in A and some subset of the points in B, which we present in
Section 3.
The Hausdorff distance measures the mismatch between two sets that are at fixed
positions with respect to one another. In this paper we are primarily interested in
measuring the mismatch between all possible relative positions of two sets, as given by
the value of the Hausdorff distance as a function of relative position. That is, for any
group G, we define the minimum Hausdorff distance to be
If the group G is such that, for any g 2 G and for any points x 1
then we need only consider transforming one of the sets:
This property holds when G is the group of translations and k \Delta k is any norm, and also
when G is the group of rigid motions and k \Delta k is the Euclidean norm.
In the cases we consider here (translations and rigid motions), the minimum Hausdorff
distance obeys metric properties (as was shown in [12] and [13]). That is, the function
is everywhere positive, and has the properties of identity, symmetry and triangle
inequality. These properties correspond to our intuitive notions of shape resemblance,
namely that a shape is identical only to itself, the order of comparison of two shapes
does not matter 1 and two shapes that are highly dissimilar cannot both be similar to
some third shape. This final property, the triangle inequality, is particularly important
in pattern matching applications where several stored model shapes are compared
to an unknown shape. Most shape comparison functions used in such applications do
not obey the triangle inequality, and thus can report that two highly dissimilar model
shapes are both similar to the unknown shape. This behavior is highly counter-intuitive
(for example, reporting that some unknown shape closely resembles both an 'elephant'
and a 'hatrack' is not desirable, because these two shapes are highly dissimilar).
1 Actually the order of comparison does matter in some psychophysical studies. One interesting
property of the Hausdorff distance in this regard is the fact that the directed distance h(A; B) is not
symmetric.
Figure 1: Two sets of points illustrating the distances H(A,B) and $M_T(A,B)$.
We focus primarily on the case where the relative positions of the model with respect
to the image is the group of translations. Without loss of generality we fix the set A
and allow only B to translate. The minimum value of the Hausdorff distance under
translation is then defined as

$M_T(A,B) = \min_{t} H(A, B \oplus t)$

where H is the Hausdorff distance as defined in equation (1), and $\oplus$ denotes the standard
Minkowski sum notation (i.e., $B \oplus t = \{b + t \mid b \in B\}$). For example, Figure 1 shows two
sets of points, where the set A is illustrated by dots and the set B by crosses. H(A;B)
is large because there are points of A that are not near any points of B and vice versa.
however, because there is a translation of B that makes each point
of A nearly coincident with some point of B and vice versa. We generally refer to the
set A as the 'image' and the set B as the `model', because it is most natural to view
the model as translating with respect to the image.
We now turn to the problem of efficiently computing H(A;B) and M T (A; B). The
organization of the remainder of the paper is as follows. We first discuss how to compute
H(A;B) for finite sets of points in the plane. The basic idea is to define a set of functions
measuring the distance from each point of A to the closest point of B (and vice versa),
as a function of the translation t of the set B. In section 3 we show how to extend
the method to the problem of comparing portions of the sets A and B (e.g., as occurs
when instances are partly occluded). Then in section 4 we discuss an implementation
of the Hausdorff distance computation for raster data, where the sets A and B are
represented in terms of binary rasters. This implementation is in many ways similar
to binary correlation. In section 5 we show how to improve the basic implementation,
by ruling out many possible translations of B without explicitly considering them. We
then present some examples, and contrast the method with binary correlation. Finally,
we consider the case of rigid motion (translation and rotation), and present an example
for this problem.
2 Computing H(A,B) and $M_T(A,B)$
From the definition of the Hausdorff distance in equations (1) and (2), we have

$H(A,B) = \max\left( \max_{a \in A} \min_{b \in B} \|a - b\|, \;\; \max_{b \in B} \min_{a \in A} \|b - a\| \right).$

If we define

$d(x) = \min_{b \in B} \|x - b\| \quad \mathrm{and} \quad d'(x) = \min_{a \in A} \|x - a\|,$

then

$H(A,B) = \max\left( \max_{a \in A} d(a), \;\; \max_{b \in B} d'(b) \right).$

That is, H(A,B) can be obtained by computing d(a) and d'(b) for all $a \in A$ and $b \in B$
respectively. The graph of d(x), $\{(x, d(x))\}$, is a surface that has been
called the Voronoi surface of B [14]. This surface gives for each location x the distance
from x to the nearest point b 2 B. For points in the plane one can visualize this surface
as a sort of 'egg carton', with a local minimum of height zero corresponding to each
and with a 'cone-shape' rising up from each such minimum. The locations at
which these cone-shapes intersect define the local maxima of the surface. Thus note
that the local maxima are equidistant from two or more local minima (hence the name
Voronoi surface, by analogy to Voronoi diagrams that specify the locations equidistant
from two or more points of a given set [16]). The graph of d 0 (x) has a similar shape,
with a 'cone-shape' rising up from each point of A.
A Voronoi surface, d(x), of a set B has also been referred to as a distance transform
(e.g., [10]), because it gives the distance from any point x to the nearest point in a
set of source points, B. Figure 2 illustrates a set of points and a top-down view of a
corresponding Voronoi surface, where brighter (whiter) portions of the image correspond
to higher portions of the surface. The norm used in the figure is L 2 .
We now turn to calculating the Hausdorff distance as a function of translation,

$H(A, B \oplus t) = \max\left( \max_{a \in A} \min_{b \in B} \|a - (b + t)\|, \;\; \max_{b \in B} \min_{a \in A} \|(b + t) - a\| \right)$
$= \max\left( \max_{a \in A} d(a - t), \;\; \max_{b \in B} d'(b + t) \right).$
Figure 2: A set of points and a corresponding Voronoi surface.
That is, $H(A, B \oplus t)$ is simply the maximum of translated copies of the Voronoi surfaces
d(x) and d'(x) (of the sets B and A respectively). Now define

$f_A(t) = \max_{a \in A} d(a - t),$

the upper envelope (pointwise maximum) of p copies of the function $d(-t)$, which have
been translated relative to each other by each $a \in A$. This gives for each translation
t the distance of the point of A that is farthest from any point of $B \oplus t$. That is,
$f_A(t) = h(A, B \oplus t)$ is the directed Hausdorff distance given in equation (2).
The directed Hausdorff distance $f_B(t) = \max_{b \in B} d'(b + t) = h(B \oplus t, A)$ is defined analogously. Now
$H(A, B \oplus t)$ is simply the maximum of the two directed distance functions. Thus, we
define

$f(t) = \max(f_A(t), f_B(t)) = H(A, B \oplus t),$

where $H(\cdot,\cdot)$ is the Hausdorff distance as defined in (1). That is, the function f(t) specifies
the Hausdorff distance between two sets A and B as a function of the translation
t of the set B. Efficient theoretical algorithms for computing f(t) were developed in
[14]. There it was shown that if A and B contain respectively p and q points in the
plane, then the function f(t) can be computed in time $O(pq(p+q)\alpha(pq))$ for
the $L_1$, $L_2$ or $L_\infty$ norm, where $\alpha(\cdot)$ denotes the inverse Ackermann function.
This running time can be improved to $O(pq \log pq)$ when using
the $L_1$ or $L_\infty$ norms [7]. These methods are quite complicated to implement, and are
substantially less efficient in practice than the rasterized approximation methods that
we investigate in sections 4 and 5.
3 Comparing Portions of Shapes
In many machine vision and pattern recognition applications it is important to be able
to identify instances of a model that are only partly visible (either due to occlusion or
to failure of the sensing device to detect the entire object). Thus we wish to extend
the definition of the Hausdorff distance to allow for the comparison of portions of two
shapes. This will allow both for scenes that contain multiple objects, and for objects
that are partially hidden from view.
3.1 Partial distances based on ranking
The Hausdorff distance can naturally be extended to the problem of finding the best
partial distance between a model set B and an image set A. For simplicity we first
consider just the directed Hausdorff distance from B to A, h(B; A). The computation
of h(B; simply determines the distance of the point of the model B that is farthest
from any point of the image A. That is, each point of B is ranked by the distance to
the nearest point of A, and the largest ranked point (the one farthest from any point
of determines the distance.
Thus a natural definition of 'distance' for K of the q model points (1 - K - q) is
given by taking the K-th ranked point of B (rather than the largest ranked one),

$h_K(B,A) = K^{\mathrm{th}}_{b \in B} \min_{a \in A} \|b - a\|$    (4)

where $K^{\mathrm{th}}_{b \in B}$ denotes the K-th ranked value in the set of distances (one corresponding
to each element of B). That is, for each point of B the distance to the closest point of
A is computed, and then the points of B are ranked by their respective values of this
distance. The K-th ranked such value, d, tells us that K of the model points B are each
within a distance d of some image point (and when $K = q$ all the points are considered,
so the value is simply the directed Hausdorff distance h(B,A)). This definition of
the distance has the nice property that it automatically selects the K 'best matching'
points of B, because it identifies the subset of the model of size K that minimizes the
directed Hausdorff distance.
In general, in order to compute the partial directed 'distance' $h_K(B,A)$, we specify
some fraction $0 \le f_1 \le 1$ of the points of B that are to be considered. Each of the q
points of B is ranked by the distance to the nearest point of A. The K-th ranked such
value, given by equation (4), then gives the partial 'distance', where $K = \lfloor f_1 q \rfloor$. The
K-th ranked value can be computed in O(q) time using standard methods such as those
in [1]. In practice, it takes about twice as long as computing the maximum.
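A possible implementation of the partial directed 'distance', using a linear-time selection rather than a full sort, is sketched below; the fraction f1 and the rounding of K used here are assumptions of this sketch.

import numpy as np

def partial_directed_hausdorff(B, A, f1=0.8):
    """h_K(B, A): rank each model point of B by its distance to the nearest image
    point of A and return the K-th ranked value, with K derived from f1."""
    d = np.linalg.norm(B[:, None, :] - A[None, :, :], axis=2).min(axis=1)
    K = max(1, int(f1 * len(B)))
    return np.partition(d, K - 1)[K - 1]   # K-th smallest via linear-time selection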
This partial distance measures the difference between a portion of the model and the
image: the K points of the model set which are closest to points of the image set. One
key property of this method is that it does not require one to pre-specify which part of
the model is to be compared with the image. This is because the computation of the
directed Hausdorff distance determines how far each model point is from the nearest
image point, and thus automatically selects the K points of the model that are closest
to image points. In Section 6 we illustrate this partial matching capability, and contrast
it with correlation. We find that the directed partial Hausdorff 'distance' works well
for partial matches on images where correlation does not.
The partial bidirectional Hausdorff 'distance' is now naturally defined as

$H_{LK}(A,B) = \max(h_L(A,B),\, h_K(B,A)).$

This function clearly does not obey metric properties, however it does obey weaker
conditions that provide for intuitively reasonable behavior. These conditions are, in
effect, that metric properties are obeyed between given subsets of A and B (of size L
and K respectively). In order to specify these properties more precisely, we need to
understand something about the subsets of A and B that achieve the minimum partial
'distance': if $H_{LK}(A,B) = d$, then there exist sets $A_L \subseteq A$ and $B_K \subseteq B$ such that
$H(A_L, B_K) = d$. Each of $A_L$ and $B_K$ will have exactly $\min(K, L)$ elements.
There are also "large" subsets of A and B which achieve the distance d:
there exist sets $A'_L \subseteq A$ and $B'_K \subseteq B$, containing at least L and K points
respectively, such that $H(A'_L, B'_K) = d$.
For proofs of these claims, see Appendices A and B.
It follows immediately that the identity and symmetry properties hold with respect
to A 0
K , because H(\Delta; \Delta) obeys these properties. Intuitively, this means that for
the partial 'distance' with some given K;L, the order of comparison does not matter,
and the distance is zero exactly when the two minimizing subsets A 0
K are the
same.
For the triangle inequality the minimizing subsets may be different when comparing
A with B than when comparing B with C. Thus in general the triangle inequality can
be violated. In the restricted case that the same subset B'_K of B is the minimizer of both
H_{LK}(A, B) and H_{KM}(B, C), then
$H_{LM}(A, C) \le H_{LK}(A, B) + H_{KM}(B, C).$
Intuitively, this means that if two sets are compared with the same portion of a third
set (denoted B'_K above) then the triangle inequality holds. For practical purposes this
is a reasonable definition: if two models both match the same part of a given image
is a reasonable definition: if two models both match the same part of a given image
then we expect the models to be similar to one another. On the other hand if they
match different parts of an image then we have no such expectation.
4 The Minimum Hausdorff Distance for Grid Points
We now turn to the case in which the point sets lie on an integer grid. This is appropriate
for many computer vision and pattern recognition applications, because the data are
derived from a raster device such as a digitized video signal. Assume we are given two
sets of points such that each point a ∈ A and each point b ∈ B has integer
coordinates. We will denote the Cartesian coordinates of a point
a ∈ A by (a_x, a_y), and analogously (b_x, b_y) for b ∈ B. The characteristic function of
the set A can be represented using a binary array A[k; l] where the (k, l)-th entry in the
array is nonzero exactly when the point (k, l) ∈ A (as is standard practice). The set B
has an analogously defined array representation B[k; l].
As in the continuous case in the previous section, we wish to compute the Hausdorff
distance as a function of translation by taking the pointwise maximum of a set of
Voronoi surfaces. In this case, however, the point sets from which these surfaces are
derived are represented as arrays, where the nonzero elements of the arrays correspond
to the elements of the sets.
For the two sets A and B, we compute the rasterized approximations to their respective
Voronoi surfaces d 0 (x) and d(x). These distance arrays, or distance transforms,
specify for each pixel location (x; y) the distance to the nearest nonzero pixel of A or B
respectively. We use the notation D 0 [x; y] to denote the distance transform of A[k; l],
and D[x; y] to denote the distance transform of B[k; l]. That is, the array D 0 [x; y] is
zero wherever A[k; l] is one, and the other locations of D 0 [x; y] specify the distance to
the nearest nonzero point of A[k; l]. There are a number of methods for computing the
rasterized Voronoi surface, or distance transform (e.g., [5, 15]), which we discuss briefly
below.
Proceeding with the analogy to the continuous case, we can compute the pointwise
maximum of all the translated D and D 0 arrays to determine the Hausdorff distance as
a function of translation (only now we are limited by the rasterization accuracy of the
integer grid):
$F[x, y] = \max\Bigl(\max_{b \in B} D'[b_x + x,\, b_y + y],\ \max_{a \in A} D[a_x - x,\, a_y - y]\Bigr).$   (6)
In order for F[x, y] to be small at some translation (x, y), it must be that the distance
transform D[x, y] is small at all the locations A ⊖ (x, y), and D'[x, y] is small at all
the locations B ⊕ (x, y). In other words, every point (nonzero pixel) of the translated
model array B[k + x, l + y] must be near some point (nonzero pixel) of the image array
A[k; l], and vice versa.
When the input points have integer coordinates, it is straightforward to show that
the minimum value of F [x; y] is very close to the minimum value of the exact function
f(t). In other words, the rasterization only introduces a small error compared to the
true distance function. Specifically:
Claim 3. Let (x_0, y_0) be a translation which minimizes F[x, y]. (There may be
more than one translation with the same, minimum, F value.) Let t_1 be the
translation which minimizes f(t), the exact measure. Then F[x_0, y_0] differs from f(t_1)
by at most 1, when the norm used, ‖·‖, is any L_p norm.
For a proof of this claim see Appendix C. Note also that when t = (x, y) is a
translation with integer coordinates x and y, then F[x, y] = f(t): the rasterized function
is simply a sampling of the exact function.
Thus the minimum value of the rasterized approximation, F [x; y], specifies the minimum
Hausdorff distance under translation to an accuracy of one unit of quantization.
However, it should be noted that the translation minimizing F [x; y] is not necessarily
close to the translation t which actually minimizes f(t). To see an example of this, let
k be any odd number, and let A be the set f(0; 0); (k; be the set
0)g. Then the translation t which minimizes f(t) (in the exact case) is
In the rasterized case, however, M T (A;
with three translations which generate this minimum value: ((k +1)=2; 0),
0). Thus there may be minimizing translations in the rasterized
case that are arbitrarily far away from the minimizing translation in the exact case.
This is not a problem, however, since the value of H(A, B ⊕ t) for each of these translations
is the same (and the value at each translation must be within 1.0 of the exact
minimizing value). In other words all three of these matches have the same cost in
both the exact and rasterized cases, and this cost is nearly the exact minimum cost. In
practice we generally enumerate all minimizing translations.
The function f(t) and its rasterized approximation F [x; y] specify the Hausdorff
distance H(A, B ⊕ t) as a function of the translation t. The directed Hausdorff distance
h(B ⊕ t, A) is also useful for comparing two bitmaps. In particular, in order to identify
possible instances of a 'model' B in a cluttered `image' A, it is often desirable to simply
ensure that each portion of the model is near some portion of the image (but not
necessarily vice versa). We denote by FB [x; y] the directed Hausdorff distance from B
to A as a function of the translation (x, y) of B,
$F_B[x, y] = \max_{b \in B} D'[b_x + x,\, b_y + y].$   (7)
This measures the degree to which B[k; l] resembles A[k; l], for each translation (x, y) of
B. When each nonzero pixel of B[k + x, l + y] is near some nonzero pixel of A[k; l], then
the distance will be small. If, on the other hand, some nonzero pixel of B[k + x, l + y]
is far from all nonzero pixels of A[k; l] then the distance will be large. The directed
distance from A to B is analogously given by $F_A[x, y] = \max_{a \in A} D[a_x - x,\, a_y - y]$. Note
that F[x, y] is simply the pointwise maximum of these two directed distance functions.
4.1 Computing the Voronoi surface array D[x; y]
There are many methods of computing a distance transform (or rasterized approximation
to a Voronoi surface). In this section we summarize some of the approaches that
we have used for computing the distance transform D[x; y] of a binary array E[x; y]
(where we denote the nonzero pixels of E[x; y] by the point set E).
One method of computing D[x; y] is to use a local distance transform algorithm
such as that in [5], [10] or [15]. In practice we use a two-pass serial algorithm that
approximates the distance transform using a local mask to propagate distance values
through the array (such as that of [5]). Better distance values can be obtained using a
method such as that of [15] which produces distance transform values that are exact for
the L_1 and L_∞ norms, and are exact up to the machine precision for the L_2 norm. This
algorithm first processes each row independently. For each row of E[x; y], it calculates
the distance to the nearest nonzero pixel in that row, i.e. for each (x, y) it finds Δx such
that E[x + Δx, y] ≠ 0 or E[x − Δx, y] ≠ 0, and that Δx is the minimum non-negative
value for which this is true. It then scans up and down each column independently,
using the Δx values and a look-up table which depends on the norm being used, to
determine D[x; y].
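For reference, the following is a minimal sketch of the two-pass local-mask idea (a chamfer 3-4 approximation of the kind such algorithms use; the function name and integer weights are assumptions of this sketch, not the paper's implementation). The forward pass propagates distances from the top-left, the backward pass from the bottom-right.

```c
#include <limits.h>

/* Chamfer 3-4 distance transform of a binary array E (rows x cols):
 * D[i*cols+j] approximates 3 * (distance to the nearest nonzero pixel of E). */
void chamfer_distance(const unsigned char *E, int *D, int rows, int cols) {
    const int BIG = INT_MAX / 2;
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++)
            D[i * cols + j] = E[i * cols + j] ? 0 : BIG;

    /* forward pass: neighbors above and to the left */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < cols; j++) {
            int *d = &D[i * cols + j];
            if (j > 0 && D[i * cols + j - 1] + 3 < *d)                 *d = D[i * cols + j - 1] + 3;
            if (i > 0 && D[(i - 1) * cols + j] + 3 < *d)               *d = D[(i - 1) * cols + j] + 3;
            if (i > 0 && j > 0 && D[(i - 1) * cols + j - 1] + 4 < *d)  *d = D[(i - 1) * cols + j - 1] + 4;
            if (i > 0 && j < cols - 1 && D[(i - 1) * cols + j + 1] + 4 < *d)
                *d = D[(i - 1) * cols + j + 1] + 4;
        }
    /* backward pass: neighbors below and to the right */
    for (int i = rows - 1; i >= 0; i--)
        for (int j = cols - 1; j >= 0; j--) {
            int *d = &D[i * cols + j];
            if (j < cols - 1 && D[i * cols + j + 1] + 3 < *d)          *d = D[i * cols + j + 1] + 3;
            if (i < rows - 1 && D[(i + 1) * cols + j] + 3 < *d)        *d = D[(i + 1) * cols + j] + 3;
            if (i < rows - 1 && j < cols - 1 && D[(i + 1) * cols + j + 1] + 4 < *d)
                *d = D[(i + 1) * cols + j + 1] + 4;
            if (i < rows - 1 && j > 0 && D[(i + 1) * cols + j - 1] + 4 < *d)
                *d = D[(i + 1) * cols + j - 1] + 4;
        }
}
```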
Another method that we have used to compute distance transforms takes advantage
of specialized graphics hardware for rendering and z-buffering. The form of D[x; y] is,
as noted in Section 2, an 'egg carton': the lower envelope of a collection of cone-shapes,
one cone-shape for each point e (each nonzero pixel of E[x; y]), with its point at e.
The exact form of the cone-shapes depends on the norm being used. For the L_1
norm the shapes are pyramids with sides of slope ±1, oriented at 45° with respect to
the coordinate axes. For the L_∞ norm they are again pyramids, but oriented parallel
to the coordinate axes. For the L_2 norm they are cones of slope 1.
The computation of D[x; y] is simply to take the pointwise minimum, or lower
envelope, of the cone-shapes rendered as described above. Consider the operations
performed by a graphics rendering engine set up to perform orthographic (rather than
perspective) projection, and set up to view this collection of surfaces from 'below'. It can
render the cones and perform visibility calculations quickly using a z-buffer. Suppose
that location (x, y) in the z-buffer contains value d. Then the closest surface to the
viewer which intersects the line through (x, y) is d away. This means that the lower
envelope of the 'egg carton' is at height d at (x, y), and so D[x, y] = d. Thus we simply
render each of the cones described above into a z-buffer doing orthographic projection
(i.e., with a view from z = −∞). The running time of this method is O(p), where p
is the number of points in the set E. This is because each source point results in the
rendering of a single cone, and then the z-buffering operation is constant-time. With
current graphics hardware tens of thousands of polygons per second can be rendered in
a z-buffer, and thus it is possible to compute D[x; y] in a fraction of a second. For the L_1
and L_∞ norms, D[x; y] is computed exactly; for the L_2 norm, there may be some error
in the computed D[x; y], which depends on the resolution of the z-buffer being used.
4.2 Computing the Hausdorff distance array F [x; y]
The Hausdorff distance as a function of translation, F [x; y], defined in equation (6) can
also be computed either using graphics hardware or standard array operations. For
simplicity of discussion, we focus on the computation of the directed distance from the
model to the image, FB [x; y] (the computation of FA [x; y] is analogous). Recall that
F [x; y] is just the maximum of these two directed distances.
Above we saw that FB [x; y] can be defined as the maximum of those values of D 0 [x; y]
(the distance transform of A[k; l]) that are selected by elements of B for each translation
(x; y) of B. Alternately, this can be viewed as the maximization of D 0 [x; y] shifted by
each location where B[k; l] takes on a nonzero value,
$F_B[x, y] = \max_{(k,l)\,:\,B[k,l] \ne 0} D'[x + k,\, y + l].$   (8)
This maximization can be performed very rapidly with special-purpose graphics hardware
for doing pan and z-buffer operations. We simply pan D 0 [x; y] and accumulate
a pointwise maximum (upper envelope) using a z-buffer. In practice, for most current
graphics hardware this operation is not fast because it involves repeatedly loading the
z-buffer with an array from memory.
A second way of computing FB [x; y], using standard array operations, arises from
viewing the computation slightly differently. Note that (8) is simply equivalent to
maximizing the product of B[k; l] and D'[x; y] at a given relative position,
$F_B[x, y] = \max_{k,l}\ B[k, l]\, D'[x + k,\, y + l].$   (9)
In other words, the maximization can be performed by 'positioning' B[k; l] at each
location (x; y), and computing the maximum of the product of B with D 0 .
In order to compute FB [x; y] using the method of equation (9), the array B[k; l] is
simply positioned centered at each pixel of the distance transform D 0 [x; y]. The value
of FB [x; y] is then the maximum value obtained by multiplying each entry of B[k; l] by
the corresponding entry D As B[k; l] is just a binary array, this amounts
to maximizing over those entries of D are selected by the nonzero
pixels of B[k; l]. That is, we can view the nonzero model pixels as probing locations
in the Voronoi surface of the image, and then FB [x; y] is the maximum of these probe
values for each position (x; y) of the model B[k; l].
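This probing view translates directly into code. The sketch below is a naive rendering (ours, not the authors' implementation, and without any of the pruning of Section 5) of equation (7): the nonzero model pixels are stored as a list of offsets, and for each translation the probed values of the image distance transform D' are maximized. Translations are restricted here so that the model stays inside the image, which avoids bounds checks.

```c
typedef struct { int x, y; } Offset;

/* Naive directed Hausdorff distance under translation:
 * FB[y*tcols+x] = max over nonzero model pixels b of D'[(b.y+y)*icols + (b.x+x)].
 * Dprime is irows x icols; model holds the q nonzero pixel offsets of B. */
void directed_hausdorff_map(const float *Dprime, int irows, int icols,
                            const Offset *model, int q,
                            int mrows, int mcols, float *FB) {
    int trows = irows - mrows + 1;   /* translations keeping the model inside the image */
    int tcols = icols - mcols + 1;
    for (int y = 0; y < trows; y++)
        for (int x = 0; x < tcols; x++) {
            float worst = 0.0f;
            for (int j = 0; j < q; j++) {   /* probe D' at each translated model pixel */
                float v = Dprime[(model[j].y + y) * icols + (model[j].x + x)];
                if (v > worst) worst = v;
            }
            FB[y * tcols + x] = worst;
        }
}
```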
This form of computing the directed Hausdorff distance under translation is very
similar to the binary correlation of the two arrays B[k; l] and A[k; l],
$\mathrm{Corr}[x, y] = \sum_{k,l} B[k, l]\, A[x + k,\, y + l].$
The only differences are that the array A[k; l] in the correlation is replaced by the
distance array D'[x; y] in equation (9) (the distance to the nearest pixel of A[k; l]), and
the summation operations in the correlation are replaced by maximization operations.
It should be noted that the directed Hausdorff distance is very insensitive to small
errors in pixel locations, because the Voronoi surface, D 0 [x; y], reports the distance to
the nearest point of A[k; l]. Thus if pixels are slightly perturbed, the probed distance
values (and hence the measure) only change by a small amount. In binary correlation, however, there is no such notion of
spatial proximity. Either pixels are directly superimposed or not. Binary correlation is
one of the most commonly used tools in image processing, and we will further examine
the relation between our method and correlation in Section 6.
4.3 Matching portions of the model and image
We can use the partial distance HLK (A; B \Phi t), given in equation (5), to define a version
of F[x, y] which allows portions of a model and image to be compared. For a given
translation (x, y), and fractions f_1 and f_2 representing the fraction of the nonzero
model and image pixels to be considered, respectively, let K and L be the corresponding
numbers of model and image pixels, and redefine
$F[x, y] = \max\Bigl(\mathop{K^{\mathrm{th}}}_{b \in B} D'[b_x + x,\, b_y + y],\ \mathop{L^{\mathrm{th}}}_{a \in A} D[a_x - x,\, a_y - y]\Bigr).$
When f_1 = f_2 = 1, this is the same as the old version of F[x, y].
When we are considering the partial distance between an image and a model, the
model is often considerably smaller than the image, reflecting the fact that in many
tasks a given instance of the model in the image will occupy only a small portion of the
image. In this case, the above definition of partial distance is not ideal for the directed
distance from the image to the model, h_L(A, B ⊕ t), because we must define L, the
number of image pixels that will be close to model pixels. This number, L, however
will depend on how many objects are in the image.
A natural way to compute a partial distance from the image to the model is to
consider only those image points that are near the current hypothesized position of
the model, since those farther away are probably parts of other imaged objects. In
practice, it is sufficient to consider only the image pixels which lie 'underneath' the
current position of the model: if we are computing F[x, y] and the model is m pixels by
n pixels, then we compute a different version of F_A[x, y] that considers just the points
of A[k; l] that are under the model at its given position, B[k + x, l + y]:
$F_A[x, y] = \max_{\substack{a \in A,\ x \le a_x < m + x,\ y \le a_y < n + y}} D[a_x - x,\, a_y - y].$   (10)
Note that, given this definition of comparing just a portion of the image to the
model, it is possible to further compute the 'partial distance' of this portion of the
image against the model. This can be done by combining this definition with the
ranking-based partial distance. We can do this by adjusting the value of L depending
on how many image pixels lie under the model at its given position (because we are
computing a partial distance with just this portion of the image). In other words, we
let L be determined by the fraction f_2 of r, where r is the number of nonzero image pixels which are 'underneath'
the translated model at the current translation. For this, the definition of FA [x; y] in
equation (10) would be modified to use the L-th ranked value instead of the maximum.
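For the rank-based variants, the only per-translation change is replacing the maximum by a K-th (or L-th) ranked value. A small sketch (ours; kth_ranked is a helper built here on qsort, where a linear-time selection could be substituted) of the per-translation computation for the partial directed distance:

```c
#include <stdlib.h>

static int cmp_float(const void *a, const void *b) {
    float d = *(const float *)a - *(const float *)b;
    return (d > 0) - (d < 0);
}

/* K-th ranked (K-th smallest, 1-based) of q values; sorting keeps the sketch short. */
static float kth_ranked(float *vals, int q, int K) {
    qsort(vals, q, sizeof(float), cmp_float);
    return vals[K - 1];
}

/* Partial directed 'distance' at a single translation (x, y):
 * probe the image distance transform D' (width icols) at each of the q
 * nonzero model pixels (bx[j], by[j]) and return the K-th ranked probe value. */
float partial_FB_at(const float *Dprime, int icols,
                    const int *bx, const int *by, int q, int K,
                    int x, int y, float *scratch /* length q */) {
    for (int j = 0; j < q; j++)
        scratch[j] = Dprime[(by[j] + y) * icols + (bx[j] + x)];
    return kth_ranked(scratch, q, K);
}
```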
5 Efficient Computation of F [x; y]
The naive approach to computing F [x; y], described in equation (6) of Section 4, can
take a significant time to run, because it considers every possible translation of the
model within the given ranges of x and y. We have developed some 'pruning' techniques
that decrease this running time significantly. These techniques take advantage of the
fact that, in typical applications, once F [x; y] has been computed, it will generally be
scanned to find all entries which are below some threshold, τ. We can use this to
generate the (x, y) values where F[x, y] ≤ τ directly, without generating all of F[x, y].
We present here some of the techniques which we have found to be useful.
The effects of these speedup techniques vary depending on the image and the model
being used. They are more effective when the image is sparse, and when the model has
a large number of points. In our work we have seen speedups of a factor of 1000 or
more over the naive approach. Some image/model pairs take only fractions of a second
to compare (some illustrative timings will be presented with the examples below).
5.1 Ruling out circles
We wish to compute the Hausdorff distance as a function of translation, F [x; y], given
a binary 'image' A[k; l] and a `model' B[k; l]. Let the bounds on A be 0 ≤ x < m_a,
0 ≤ y < n_a, and the bounds on B be 0 ≤ x < m_b, 0 ≤ y < n_b. While the array
F[x, y] is in principle of infinite extent, its minimum value must be attained when the
translated model overlaps the image in at least one location, so we only consider the
portion where −m_b < x < m_a and −n_b < y < n_a.
One property of FB [x; y] is that its slope cannot exceed 1. That is, the function
does not decrease more rapidly than linearly. Thus if F_B[x_1, y_1] = v with v > τ,
then F_B[x, y] cannot be less than τ in a circle of radius v − τ about the point (x_1, y_1).
(The actual shape of the "circle" depends on the norm used; it is a true circle for L_2.)
In other words, if the value of F_B[x, y] is large at some location, then it cannot be
small in a large area around that location. This fact can be used to rule out possible
translations near (x_1, y_1):
Claim 4. F_B[x_2, y_2] ≤ F_B[x_1, y_1] + ‖(x_1, y_1) − (x_2, y_2)‖. This is true for all
values of f_1 (the fraction of nonzero model pixels considered).
For a proof of this claim, see Appendix D. We use this fact in the algorithm detailed
below in order to speed up the computation.
This property does not necessarily hold for FA [x; y]. If we are considering only the
portion of the image under the model, as in Subsection 4.3, we might have a location
where moving the model by one pixel 'shifts' some image points into or out of the
window, which can change the value of FA [x; y] by a large amount. This also implies
that the property does not necessarily hold for F [x; y]. In practice, however, this is not
much of an issue because generally it is only for the image array that we wish to skip
over parts of a large array (e.g., by ruling out circles). The model array is usually small
enough that we do not need this technique.
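One possible way to bookkeep this pruning (the array M and its update rule are assumptions of this sketch; the paper only notes that a map of ruled-out translations must be maintained) is to keep, for every translation, a lower bound on F_B and raise it inside the circle whenever an over-threshold value is computed:

```c
#include <math.h>

/* After computing FB(x, y) = v with v > tau, every translation (x', y')
 * within distance v - tau of (x, y) must also have FB > tau (Claim 4), so
 * record a lower bound there and skip it later.  M is trows x tcols,
 * initialized to zero. */
void rule_out_circle(float *M, int trows, int tcols,
                     int x, int y, float v, float tau) {
    int r = (int)floorf(v - tau);            /* L2 "circle" of radius v - tau */
    for (int dy = -r; dy <= r; dy++)
        for (int dx = -r; dx <= r; dx++) {
            int xx = x + dx, yy = y + dy;
            if (xx < 0 || xx >= tcols || yy < 0 || yy >= trows) continue;
            float dist = sqrtf((float)(dx * dx + dy * dy));
            if (dist > v - tau) continue;    /* outside the circle */
            float bound = v - dist;          /* FB cannot be below v - dist here */
            if (bound > M[yy * tcols + xx]) M[yy * tcols + xx] = bound;
        }
}

/* In the main scan: if M[y * tcols + x] > tau, the translation (x, y) is
 * already ruled out and FB need not be computed there. */
```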
5.2 Early scan termination
We may also obtain a speedup by not computing FB [x; y] completely if we can deduce
partway through the computation that it will be greater than τ. Recall that FB is
computed by maximizing over all the locations of D' that are 'selected'
by nonzero pixels of B[k; l]. That is, each nonzero pixel of B[k; l] in effect probes a
location in the Voronoi surface of A[k; l], and we maximize over these probe values.
Thus if a single probe value is over the threshold τ at translation (x, y), then we know
that F[x, y] must be over τ (it is the maximum over all the probe values). Thus we can
stop computing FB for this translation, because it is over threshold.
An analogous result holds for the partial distances. Let K be the number of model
pixels considered (determined by the fraction f_1 of the q nonzero pixels of B). The value
of F_B[x, y] is the K-th ranked value of D'[b_x + x, b_y + y] over b ∈ B (there
are q such locations). We probe D' in q places, and maintain
a count of the number of these values from D' which exceed τ. If this count exceeds
q − K, we know that the K-th ranked value must be greater than τ, and so F_B[x, y] > τ,
so we need probe no more locations for the translation (x; y). In fact, we can determine
the minimum possible value FB [x; y] could have at this location, by assuming that the
unprobed values are all 0 and calculating the K-th ranked value of this set of values.
We can use this to eliminate nearby values of (x; y) from consideration. This method
works best for large values of f 1 .
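In code, the early-termination test for the partial distance reduces to counting over-threshold probes and abandoning the translation once more than q − K of them have been seen, as in this sketch (ours; variable names are illustrative):

```c
/* Probe the q model locations at translation (x, y), counting how many probe
 * values exceed tau.  As soon as more than q - K probes exceed tau, the K-th
 * ranked value must itself exceed tau, so the translation can be rejected
 * without probing the remaining locations. */
int partial_probe_over_threshold(const float *Dprime, int icols,
                                 const int *bx, const int *by, int q, int K,
                                 int x, int y, float tau) {
    int over = 0;
    for (int j = 0; j < q; j++) {
        float v = Dprime[(by[j] + y) * icols + (bx[j] + x)];
        if (v > tau && ++over > q - K)
            return 1;                /* FB(x, y) > tau: reject early */
    }
    return 0;                        /* at least K probes are <= tau */
}
```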
5.3 Skipping forward
A third technique relies on the order in which the space of possible translations is
scanned. We must be scanning the distance transform array in some order; assume
that the order is a row at a time, in the increasing x direction. In other words, for some
y, we consider F_B[x, y] for each x in increasing order, up to F_B[m_a − 1, y]. In this
case, it is possible to quickly rule out large sections of this row by using a variant of
the distance transform.
Let D'_{+x}[x, y] be the distance in the increasing x direction to the nearest location
where D'[x, y] ≤ τ, and ∞ if there is no such location (in practice, D'_{+x} would be
set to a large value if there is no such location; a value greater than the width of the
array is sufficiently large). Formally,
$D'_{+x}[x, y] = \min\{\, \Delta x \ge 0 : D'[x + \Delta x,\, y] \le \tau \,\}.$
Note that D'_{+x}[x, y] = 0 exactly when D'[x, y] ≤ τ. We can use D'_{+x}[x, y] to determine how far we would
have to move in the increasing x direction to find a place where F_B[x, y] might be no
greater than τ. Let
$G_B[x, y] = \mathop{K^{\mathrm{th}}}_{b \in B} D'_{+x}[b_x + x,\, b_y + y].$
If G_B[x, y] is 0, then K of the values of D'_{+x} probed must have been 0, and so K of
the values of D' which would be probed in the computation of F_B[x, y] would be ≤ τ.
Further, if G_B[x, y] = Δx > 0, not only do we know that F_B[x, y] > τ, but also
that F_B[x', y] > τ for every x' with x ≤ x' < x + Δx. (The proof of this is similar to the proof of
Claim 4 and is omitted.) We can therefore immediately increment x by Δx, and skip
a section of this row. Note also that we do not need to compute FB [x; y] at all if we
compute GB [x; y] and find that it is nonzero. Early scan termination can be applied to
this computation.
This method has the advantage over ruling out circles that it does not require any
auxiliary data structures to be maintained; once GB [x; y] has been computed, x can
be immediately incremented. The ruling out circles method must keep track of what
translations have been ruled out, and updating this map can be time-consuming.
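The auxiliary array D'_{+x} can be built with one right-to-left scan per row, as in the following sketch (ours; it stores a value of at least twice the row width where no suitable location exists, which is "sufficiently large" in the sense above):

```c
/* Dplus[y*icols+x] = distance in the +x direction to the nearest x' >= x in
 * row y with Dprime[y*icols+x'] <= tau; values >= icols mean "none". */
void build_skip_array(const float *Dprime, int *Dplus,
                      int irows, int icols, float tau) {
    for (int y = 0; y < irows; y++) {
        int next = 2 * icols;                  /* "infinity": larger than any row width */
        for (int x = icols - 1; x >= 0; x--) { /* scan right to left */
            if (Dprime[y * icols + x] <= tau)
                next = 0;                      /* an acceptable location is right here */
            else if (next < 2 * icols)
                next++;                        /* one pixel farther than at x + 1 */
            Dplus[y * icols + x] = next;
        }
    }
}
```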
5.4 Interactions between speedup methods
These techniques may be used in combination with each other. However, using one
technique may affect the efficiency of others. Interactions to be noted are:
• Using early scan termination greatly degrades the effect of both ruling out circles
and skipping forward. Early scan termination will generally give a value for
FB [x; y] which is only a small amount greater than - , and so very few locations
will be ruled out; continuing the scan could increase the value computed, thereby
saving work later.
A possible solution for this is to terminate the scan when FB [x; y] has been shown
to be greater than τ + R, where R is some value, so that on a terminated scan, a
circle of radius at least R could be ruled out; similarly, terminate the scan when
GB [x; y] has been shown to be at least R. The value of R is arbitrary, and can be
adjusted for best performance.
• The order in which translations are considered can be arbitrary if skipping forward
is not used. The optimal order may well not consider adjacent translations
successively, as this will tend to consider translations which are on the edges of
ruled out circles, and so much of the neighborhood has already been ruled out.
Considering a translation in a "clear" area would provide more opportunity for
ruling other translations out.
5.5 An Efficient Algorithm
These observations give us an algorithm which will produce a list of values where
F [x; y] - quite efficiently.
Algorithm 1. Given two input binary image arrays, A[k; l] and B[k; l], two fractions
f_1 and f_2, and a threshold τ ≥ 0, generate a list T of (translation, value)
pairs ((x, y), v) such that F[x, y] = v ≤ τ. Use the given fractions of B[k; l] and
A[k; l] for each translation (x, y) of B[k; l]. Consider only a fraction of that part of
A[k; l] which is covered by the translated B[k; l].
1. Let the bounds of A[k; l] be 0 ≤ x < m_a, 0 ≤ y < n_a, and the bounds of B[k; l]
be 0 ≤ x < m_b, 0 ≤ y < n_b.
2. Compute the array D[x; y] that specifies the distance to the closest nonzero pixel
of B[k; l], making D[x; y] the same size as B[k; l].
3. Compute the array D'[x, y] that specifies the distance to the closest nonzero pixel
of A[k; l], making D'[x, y] large enough to cover every location probed by the
translated model, beginning at −m_b in x and −n_b in y (see Subsection 4.3).
4. Compute the array D'_{+x}[x, y] that specifies the distance to the closest pixel (in the
increasing x direction) of D'[x, y] which is less than or equal to τ. Make D'_{+x}[x, y]
the same size as D'[x, y].
5. Let the number of model pixels considered be K, determined by the fraction f_1 of
the q nonzero pixels of B[k; l].
6. Create an array, M [x; y], the same size as D 0 [x; y]. This will contain the minimum
possible value that FB [x; y] could have, given the information we have accumu-
lated. Initialize M [x; y] to zero.
7. Create two lists, T and T 0 . Initialize both to empty.
8. For each translation, let the number of image pixels considered be L, determined by the fraction f_2 of r,
where r is the number of nonzero pixels in A[k; l] that are covered by the current
position of B[k; l].
9. For each translation (x; y) of B[k; l] (scanned in reading order: top to bottom,
left to right),
(a) If M[x, y] > τ, then we need not consider this translation at all, and should
proceed to the next one. Otherwise,
(b) Set o to zero.
(c) For each b ∈ B, consider D'_{+x}[b_x + x, b_y + y]. If it is greater than R, increment
o. Also consider D'[b_x + x, b_y + y]. If o exceeds q − K during this process,
then
i. Take the smallest of the D'_{+x} values which we have seen that exceeded
R. Call this Δx.
ii. Take the smallest of the D' values which we have seen that exceeded τ.
Call this v'.
iii. For each (x', y') within a radius of v' − τ of (x, y), set M[x', y'] to the larger
of its current value and v' − ‖(x, y) − (x', y')‖. This only needs
to be done for the M[x', y'] within a radius of v' − τ of (x, y), and need
not be done at all for any (x', y') which has previously been considered
in step 9.
iv. Skip to the next translation, (x + Δx, y) (if x + Δx ≥ m_a, go to the start
of the next row).
(d) If o never exceeds q − K, then let v' be the K-th ranked value of the q values
from D' generated in step 9c. This will be ≤ τ, since no more than q − K of
these values can be greater than τ. Add ((x, y), v') to the list T'.
10. The list T' now contains all the (translation, value) pairs ((x, y), v') for which
F_B[x, y] = v' ≤ τ. For each ((x, y), v') on the list T':
(a) Consider the values of A[a_x, a_y] D[a_x − x, a_y − y], for all points a ∈ A such
that x ≤ a_x < x + m_b and y ≤ a_y < y + n_b, and compute the L-th ranked value.
(Recall that r is the number of points of A that lie under
B[k; l] for the current translation (x, y), as described in Subsection 4.3.) Call
this value v. We know that F[x, y] = max(v', v).
(b) If v is less than or equal to τ, add ((x, y), max(v', v)) to the list T.
This algorithm can also be used to produce a list of translations where the directed
Hausdorff 'distance' from the model to the image is less than - , by halting after step 9
and using the list T 0 .
6 Examples
We now consider some examples in order to illustrate the performance of the Hausdorff
distance methods developed above, using some image data from a camera in our
laboratory. The first test image is shown in Figure 3. This binary image is 360 × 240
and was produced by applying an edge operator (similar to [6]) to a grey-level camera
image. The computation of the Hausdorff distance under translation was done using
an implementation written in C of the algorithm described above.
The model to be compared with the first test image is shown in Figure 4. The outline
around the figure delineates the boundary of the bitmap representing the model. The
model is 115 × 199 pixels. Comparing this model against this image took
approximately twenty seconds on a Sun-4 (SPARCstation
2). This produced two matches, at (87, 35) and (87, 36). Figure 5 shows one of these matches
overlaid on the image. As a comparison, the naive approach from Subsection 4.2
takes approximately 5000 seconds to perform this comparison process.
We also ran the algorithm on the image and model shown in Figures 6 and 7, using
0.35. The image is 256 × 233 and the model is 60 × 50. Four
matches were found, at (99; 128), (100; 128), (99; 129) and (100; 129). Figure 8 shows
the match at (99; 129) overlaid on the image. The computation took approximately 5
seconds.
Our third test case consists of the image and model shown in Figures 9 and 10, using
0.5. The image is 360 × 240 and the model is 38 × 60. The
Figure 3: The first test image.
Figure 4: The first object model.
Figure 5: The first test image overlaid with the best match.
Figure 6: The second test image.
Figure 7: The second object model.
Figure 8: The second test image overlaid with the best match.
model was digitized from a different can, held at approximately the same orientation
and same distance from the camera. Four matches were found, at (199; 95), (199; 98),
(200; 98) and (199; 99).
Figure 11 shows the match at (199, 95) overlaid on the image.
The computation took approximately 1.4 seconds.
If several models are to be compared against the same image, then the distance
transform of the image need only be computed once. Our implementation takes about
one second to compute the distance transform of a 256 × 256 image on a Sun-4 (SPARC-
station 2). Once this has been computed, the comparisons can take as little as one half
second per model (256 × 256 images, 32 × 32 model). The time taken depends on the τ,
f_1 and f_2 values used; larger τ and smaller f_1 values increase the time taken. A more
cluttered image will also increase the time.
In order to compare the directed Hausdorff distance with correlation, we computed
the correlation of the stump model in Figure 7 with the second test image. We defined
a 'match' to be a translation where the correlation function was at a local peak. For
this image, the correlation performed poorly. There were eight incorrect matches which
had a higher correlation value than the correct match, and the correct match had a
peak value of only 77% of the largest peak. None of the incorrect matches was close
(spatially) to the correct match.
Hence we see support from these examples for our theoretical claim that the partial
Hausdorff 'distance' works well on images where the locations of image pixels have been
perturbed. Moreover, these same images cannot be handled well by correlation.
7 The Hausdorff Distance Under Rigid Motion
The methods we have described for computing the Hausdorff distance under translation
can be naturally extended to computing the Hausdorff distance under rigid motion
(translation and rotation). In this case, we require that the norm used be the Euclidean
norm (L 2 ). As before, we fix the set A and allow the set B to move (in this case rotate
and translate). The minimum value of the Hausdorff distance under rigid (Euclidean)
motion, ME (A; B), then gives the best transformation of B with respect to A,
$M_E(A, B) = \min_{t,\,\theta} H(A,\ (R_\theta B) \oplus t)$   (11)
where H is the Hausdorff distance as defined in equation (1), $(R_\theta B) \oplus t = \{R_\theta b + t \mid b \in B\}$,
and R_θ is the standard rotation matrix. This distance is small exactly when there
Figure 9: The third test image.
Figure 10: The third object model.
Figure 11: The third test image overlaid with the best match.
is a Euclidean transformation that brings every point of B near some point of A and
vice versa.
We can compute a rasterized approximation to the minimum Hausdorff distance
under rigid motion. As above, the basic idea is to compute the Hausdorff distance for
all transformations of the model at the appropriate level of rasterization, and then find
the minima for which the distance is below some threshold, - . For translations, the
appropriate level of rasterization is again single pixels. For rotation, we want to ensure
that each consecutive rotation moves each point in the model by at most one pixel.
Each point b_j in the set B is being rotated around the center of rotation, c_r, on a circle
of radius r_j = ‖b_j − c_r‖. This means that the rasterization interval in θ, Δθ, should
be arctan(1/r_k) where r_k is the radius of rotation of the point in B which is furthest
from the center of rotation.
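As a tiny illustration (ours), the rotation step and the number of rasterized rotations follow directly from the largest radius of rotation:

```c
#include <math.h>

/* Rasterization of the rotation: each step should move the farthest model
 * point (radius r_max > 0 from the center of rotation) by at most about one pixel. */
int rotation_steps(double r_max, double *dtheta) {
    const double two_pi = 6.283185307179586;
    *dtheta = atan(1.0 / r_max);              /* Delta-theta = arctan(1 / r_max) */
    return (int)ceil(two_pi / *dtheta);       /* number of rasterized angles */
}
```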
Our current implementation is restricted to computing only the directed Hausdorff
distance from the model, B, to the image, A. This is analogous to FB [x; y] as defined
in equation (7). Furthermore, we consider only complete shapes (no partial distances).
Recall that the Hausdorff distance computation can be thought of as probing locations
in the Voronoi surface of the image corresponding to the transformed model points.
The probe values for each transformation are then maximized in order to compute the
distance for that transformation. For rigid motion, we build a structure which gives for
each model point a list of the locations that the point moves through as it rotates, one
location for each rasterized '. These locations are relative to the center of rotation and,
for each model point, this list describes a circle about the center of rotation.
Consider the problem of determining the minimum (directed) Hausdorff distance
under rotation for a fixed translation t, min_θ h((R_θ B) ⊕ t, A). We first initialize to zero
an array, Q, containing an element for each ' value. During the computation, this array
will contain the minimum possible value of the Hausdorff distance for each rotation, on
the basis of the points that have been probed thus far. For each point we probe the
distance transform of the image at each relative rotated location, and maximize these
probe values with the corresponding values in the array Q. Once all points in B have
been considered, the array Q gives the directed Hausdorff distance for each discrete
rotation at the current translation: Q[θ] = h((R_θ B) ⊕ t, A).
We perform this computation for each rasterized translation t. This algorithm is analogous
to the naive algorithm for the translation-only case.
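A sketch of this inner loop for one fixed translation (ours; it assumes a precomputed rotation structure rot holding, for each model point j and rasterized angle index s, the rotated integer offset relative to the center of rotation, and it assumes all probes fall inside D'):

```c
typedef struct { int dx, dy; } Off;

/* Directed Hausdorff distance under rotation at one fixed translation (tx, ty).
 * rot[j * nrot + s] is the rotated offset of model point j at angle index s.
 * Q[s] accumulates the maximum probed value of the image distance transform D'. */
void rotation_probe(const float *Dprime, int icols,
                    const Off *rot, int q, int nrot,
                    int tx, int ty, float *Q) {
    for (int s = 0; s < nrot; s++)
        Q[s] = 0.0f;
    for (int j = 0; j < q; j++)
        for (int s = 0; s < nrot; s++) {
            Off o = rot[j * nrot + s];
            float v = Dprime[(ty + o.dy) * icols + (tx + o.dx)];
            if (v > Q[s]) Q[s] = v;           /* maximize the probes per rotation */
        }
    /* Afterwards Q[s] is the directed distance for angle s at this translation;
     * angles with Q[s] <= tau are candidate matches. */
}
```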
As in the translation-only case, many possible translations and rotations of B may
be ruled out without explicitly considering them. We have begun to investigate speedup
techniques for Euclidean motion and now describe the methods that have proven successful.
First, we choose as the center of rotation that point of B which is closest to the
centroid of the model. This both reduces the number of rasterized ' values which we
need to consider and also explicitly makes the center of rotation be a point in B. This
is useful since the center of rotation does not move as ' changes. Thus, if at a given
translation we probe the center point first and find that the distance transform at that
point is greater than - , then we know that there are no good rotations of the model at
this translation.
Points of B that are close to the center of rotation will not pass through many
distinct grid points as they rotate about it. In building the model rotation structure,
we can therefore store only the distinct grid points through which each point of B
rotates. In addition, we also store the range of rasterized ' values that each of these
distinct points corresponds to. This compression of the rotation structure eliminates
a large number of extraneous probes. However, a single probe may now be used to
update several consecutive entries in Q.
Techniques analogous to ruling out circles and early scan termination can also be
used in '-space to prune the search for good transformations of the model.
If we find that for some rotation θ and point b ∈ B the distance from R_θ b + t to
the nearest point in A is v and v > τ, then all rotations which bring b into the circle of
radius v − τ centered at R_θ b + t may be eliminated from consideration. These are the
rotations in the range θ ± cos⁻¹(1 − (v − τ)²/(2r²)), where r is the radius of rotation of
point b. If v − τ ≥ 2r, all values of θ may be ruled out. Note that a conservative
approximation to this range is θ ± (v − τ)/r, which may be used if cos⁻¹ is too expensive
to compute. Considering the points in order of increasing radius of rotation makes it
likely that large ' ranges will be eliminated early on, as a single large probe value for
one of the "inner" points will rule out more ' values than it would for one of the "outer"
points.
By maintaining a list of ruled-out regions of '-space, the transformations resulting in
a Hausdorff distance greater than - can be quickly discarded. As noted in subsection 5.4,
combining early scan termination with ruling out circles for the translation-only case
may result in only small areas being ruled out. This same problem holds in '-space,
and again, a possible solution is to delay early termination until we can guarantee that
the terminated scan eliminates at least some minimum radius circle.
It is also possible to use pruning techniques in both rotation and translation simul-
taneously. One possible technique is to use the Q array generated for one translation
to eliminate some ' values for an adjacent translation. Since the slope of the distance
transform cannot exceed 1, in moving to an adjacent translation each probe value could
at worst decrease by 1. Thus, any angle for which the lower bound on the Hausdorff
distance is greater than - + 1 at the current translation can also be ruled out at all
adjacent translations. In practice, however, we found that the extra overhead involved
actually slowed down the computation.
Finally, to illustrate the performance of our current implementation, we use an image
of some children's blocks on a table. This example is taken from a demonstration in
which a robot arm locates blocks on the table using the Hausdorff distance under rigid
motion and then uses the blocks to build an object the user has specified. Figure 12
shows the edge detector output for the 360 × 240 grey-level camera image of the blocks.
The block model is shown in Figure 13 and is simply a square 31 pixels on each side.
Matching this model against the image, local minima are found
below threshold, corresponding to four rotations for each of the 15 blocks which are
completely in the field of view. The matches are shown overlaid on the original image
in Figure 14. Note that because we are only computing the directed distance from
the model to the image, even the blocks that contain letters imprinted in the upward
facing side are recognized. At each translation and rotation of the model, we merely
require that there is an image point within three pixels of each point in the transformed
model. This is exactly what is needed for this application, though large black areas
in the image would present problems because spurious matches would be found. This
matching takes approximately 216 seconds on a SPARCstation 2. Taking advantage of
the symmetry of this particular model would allow a four-fold increase in speed for this
application.
With more time spent optimizing our pruning techniques, the running time should
be greatly improved, especially considering the improvement achieved over the naive
algorithm for the translation-only case.
8 Summary
The Hausdorff distance under translation measures the extent to which each point of a
translated 'model' set lies near some point of an `image' set and vice versa. Thus this
Figure 12: A test image showing some blocks.
Figure 13: The block model.
Figure 14: The test image blocks overlaid with each matched block.
distance reflects the degree of resemblance between two objects (under translation). We
have discussed how to compute the Hausdorff distance under translation efficiently for
binary image data. The method compares a 32 × 32 model bitmap with a 256 × 256
image bitmap in a fraction of a second on a SPARCstation 2.
The computation of the directed Hausdorff distance under translation is in many
ways similar to binary correlation. The method is more tolerant of perturbations in the
locations of points than correlation because it measures proximity rather than exact
superposition. This is supported by empirical evidence as well as by the theoretical
formulation of the problem. The partial Hausdorff 'distance' between a model and
an image has been illustrated to work well on examples where correlation fails. We
have also extended our algorithms to work with rigid motion. It is an open problem
to develop efficient methods, both theoretically and in practice, for computing the
Hausdorff distance under other transformation groups.
A Proof of Claim 1
Proof. Let A' be the set of points in A for which there is some point in B within d.
Similarly, let B' be the set of points in B for which there is some point in A within d. We
must have |A'| ≥ L and |B'| ≥ K, since H_{LK}(A, B) = d means
that there are at least L points in A which are closer than d to some point in B (and
similarly there are K points in B which are closer than d to some point in A). Also,
since ‖·‖ is symmetric (i.e. ‖a − b‖ = ‖b − a‖), all the 'neighbors' of any point in A'
must be in B' and vice versa.
The problem now reduces to finding A_L ⊆ A' and B_K ⊆ B' having min(K, L) points
each such that H(A_L, B_K) ≤ d. We will show that this is possible by building A_L and
B_K one element at a time, while maintaining the invariant that H(A_L, B_K) ≤ d (i.e.,
that for each element of A_L there is some element of B_K within d and vice versa).
Base case: Pick any point a from A'. Put it into A_L. Find any point b in B' such that
‖a − b‖ ≤ d. There must be at least one. Put b into B_K. Then H(A_L, B_K) ≤ d.
Induction step: Suppose that A_L and B_K each have n < min(K, L) elements and
that H(A_L, B_K) ≤ d. There are now two cases:
• Suppose that there exist a ∈ A' − A_L and b ∈ B' − B_K with ‖a − b‖ ≤ d.
Then if we add a to A_L and b to B_K we will have increased the size of each
set to n + 1 while maintaining the invariant.
• Suppose that no such a and b exist. Then pick any point a ∈ A' − A_L and
consider its 'neighbors': points in B' within d. It must have at least one
since it is a member of A'. All the neighbors must be in B_K already, so it
has at least one neighbor in B_K. Similarly, every point of B' − B_K has at
least one neighbor in A_L. Picking any point in A' − A_L and any point in
B' − B_K and adding them to A_L and B_K respectively increases the size of
each set to n + 1 and maintains the invariant, since every point in the new
A_L has a neighbor in the new B_K and vice versa.
Since there are at least L elements in A' and K elements in B', we will not run
out of elements in A' or B' before achieving min(K, L) elements in
A_L and B_K.
This process builds A_L and B_K by ensuring that whenever a pair of points is added,
they each have neighbors in the augmented sets, which maintains the desired invariant.
B Proof of Claim 2
Proof. We will be using A', B', A_L and B_K from the proof of Claim 1 to construct A'_L
and B'_K. Suppose (without loss of generality) that K ≥ L. Now, pick any K − L points
from B' − B_K (there must be at least this many points since |B'| ≥ K and |B_K| = L). Let
B'_K be the union of B_K and these points. For each of the new points, pick
one of its neighbors from A'. Let A'_L be the union of A_L and these neighboring points.
|A'_L| will be at most K and at least L, since |A_L| = L, and all these neighbors might
have been members of A_L. Thus, L ≤ |A'_L| ≤ K and |B'_K| = K. Since
every point in B'_K
has a neighbor (within d) in A'_L and vice versa, we know that
H(A'_L, B'_K) ≤ d; we must have equality since if H(A'_L, B'_K) were
strictly less than d, then H_{LK}(A, B)
would also be strictly less than d, since A'_L and B'_K would then be minimizing subsets
of sizes at least L and K respectively.
C Proof of Claim 3
Proof. We can see from the definitions of D and D' that if x and y are integers, then
F[x, y] = f((x, y)), and so the
minimum value of F[x, y] is no smaller than the minimum value of f(t): F[x_0, y_0] ≥ f(t_1).
Since ‖·‖ is a norm, it satisfies the triangle inequality. Let a be any point. d(a) is
the distance from a to the nearest point of B, and so there is a point b in B such that
‖a − b‖ = d(a). Let a' be any other point. Then
d(a') ≤ ‖a' − b‖ ≤ ‖a' − a‖ + ‖a − b‖ = ‖a' − a‖ + d(a).
Similarly, d'(b') ≤ d'(b) + ‖b − b'‖ for any two points b and b'.
If t and t' are any two translations, then f(t') ≤ f(t) + ‖t − t'‖.
Let (x'_1, y'_1) be the grid point closest to
t_1, so that x'_1 and y'_1
are the integers which are nearest the coordinates of t_1. Since ‖·‖
is an L_p norm, ‖t_1 − (x'_1, y'_1)‖ ≤ 1, and so
F[x_0, y_0] ≤ F[x'_1, y'_1] = f((x'_1, y'_1)) ≤ f(t_1) + 1, which completes the proof.
D Proof of Claim 4
Proof. Let v_1 = F_B[x_1, y_1] and v_2 = F_B[x_2, y_2]. From the proof of Claim 3, we know that
the probed distance values change by at most the distance moved. We know that K
of the nonzero model pixels are within v_1 of some nonzero image pixel: there
exist at least K points b ∈ B with D'[b_x + x_1, b_y + y_1] ≤ v_1.
Now consider the computation of v_2. We will be 'probing' the values of D'[b_x + x_2, b_y + y_2]
(among others). But since the probe locations move by ‖(x_1, y_1) − (x_2, y_2)‖,
there are at least K nonzero model pixels which are at most v_1 + ‖(x_1, y_1) − (x_2, y_2)‖
away from some nonzero image pixel when the model is translated by (x_2, y_2). Since
the computation of v_2 computes the K-th ranked value of these distances, we will have
v_2 ≤ v_1 + ‖(x_1, y_1) − (x_2, y_2)‖, as claimed.
References
Data Structures and Algorithms.
Measuring the resemblance of polygonal shapes.
An efficiently computable metric for comparing polygonal shapes.
Distance transforms in digital images.
A computational approach to edge detection.
General Topology.
Euclidean distance mapping.
Object recognition by computer (W. E. L. Grimson with T. Lozano-Pérez and D. P. Huttenlocher).
Efficiently computing the Hausdorff distance for point sets under translation.
On dynamic Voronoi diagrams and the minimum Hausdorff distance for point sets under Euclidean motion in the plane.
The upper envelope of Voronoi surfaces and its applications.
Distance transforms: properties and machine vision applications.
Computational Geometry.
Digital Picture Processing.
Facundo Mmoli , Guillermo Sapiro, Comparing point clouds, Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, July 08-10, 2004, Nice, France
Claire Kenyon , Yuval Rabani , Alistair Sinclair, Low distortion maps between point sets, Proceedings of the thirty-sixth annual ACM symposium on Theory of computing, June 13-16, 2004, Chicago, IL, USA
Pedro F. Felzenszwalb , Daniel P. Huttenlocher, Pictorial Structures for Object Recognition, International Journal of Computer Vision, v.61 n.1, p.55-79, January 2005
Vassilis Athitsos , Marios Hadjieleftheriou , George Kollios , Stan Sclaroff, Query-sensitive embeddings, ACM Transactions on Database Systems (TODS), v.32 n.2, p.8-es, June 2007
Andrew B. Kahng, Classical floorplanning harmful?, Proceedings of the 2000 international symposium on Physical design, p.207-213, May 2000, San Diego, California, United States
J. Y. Kaminski , Amnon Shashua, Multiple View Geometry of General Algebraic Curves, International Journal of Computer Vision, v.56 n.3, p.195-219, February-March 2004
Daniela Rus , Devika Subramanian, Customizing information capture and access, ACM Transactions on Information Systems (TOIS), v.15 n.1, p.67-101, Jan. 1997
David W. Jacobs , Daphna Weinshall , Yoram Gdalyahu, Classification with Nonmetric Distances: Image Retrieval and Class Representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.6, p.583-600, June 2000
Jing Huang , S. Ravi Kumar , Mandar Mitra , Wei-Jing Zhu , Ramin Zabih, Spatial Color Indexing and Applications, International Journal of Computer Vision, v.35 n.3, p.245-268, Dec. 1999
Gun Park , Kyoung Mu Lee , Sang Uk Lee, Color-based image retrieval using perceptually modified Hausdorff distance, Journal on Image and Video Processing, v.2008 n.1, p.1-10, January 2008
Anarta Ghosh , Nicolai Petkov, A cognitive evaluation procedure for contour based shape descriptors, International Journal of Hybrid Intelligent Systems, v.2 n.4, p.237-252, December 2005
S. Belongie , J. Malik , J. Puzicha, Shape Matching and Object Recognition Using Shape Contexts, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.4, p.509-522, April 2002
David Jacobs , Ronen Basri, 3-D to 2-D Pose Determination with Regions, International Journal of Computer Vision, v.34 n.2-3, p.123-145, Nov. 1999
Yongsheng Gao , S. C. Hui , A. C. M. Fong, A Multi-View Facial Analysis Technique for Pervasive Computing, v.2 n.1, p.38-45, January
Haili Chui , Anand Rangarajan, A new point matching algorithm for non-rigid registration, Computer Vision and Image Understanding, v.89 n.2-3, p.114-141, February
Gilbert Pradel , Philippe Hoppenot, Symbolic environment representation by means of frescoes in mobile robotics, Robotica, v.23 n.4, p.527-537, July 2005
Manuele Bicego , Umberto Castellani , Vittorio Murino, A hidden Markov model approach for appearance-based 3D object recognition, Pattern Recognition Letters, v.26 n.16, p.2588-2599, December 2005
Ronen Basri , David W. Jacobs, Recognition Using Region Correspondences, International Journal of Computer Vision, v.25 n.2, p.145-166, Nov. 1997
Philip Fast and Effective Retrieval of Medical Tumor Shapes, IEEE Transactions on Knowledge and Data Engineering, v.10 n.6, p.889-904, November 1998
Anarta Ghosh , Nicolai Petkov, Robustness of Shape Descriptors to Incomplete Contour Representations, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.11, p.1793-1804, November 2005
Jianping Fan , Xingquan Zhu , Kayvan Najarian , Lide Wu, Accessing Video Contents through Key Objects over IP, Multimedia Tools and Applications, v.21 n.1, p.75-96, September
Schmid , Andrew Zisserman, The Geometry and Matching of Lines and Curves Over Multiple Views, International Journal of Computer Vision, v.40 n.3, p.199-233, Dec. 2000
Seong G. Kong , Jingu Heo , Faysal Boughorbel , Yue Zheng , Besma R. Abidi , Andreas Koschan , Mingzhong Yi , Mongi A. Abidi, Multiscale Fusion of Visible and Thermal IR Images for Illumination-Invariant Face Recognition, International Journal of Computer Vision, v.71 n.2, p.215-233, February 2007
Congxia Dai , Yunfei Zheng , Xin Li, Pedestrian detection and tracking in infrared imagery using shape and appearance, Computer Vision and Image Understanding, v.106 n.2-3, p.288-299, May, 2007
Thomas Funkhouser , Patrick Min , Michael Kazhdan , Joyce Chen , Alex Halderman , David Dobkin , David Jacobs, A search engine for 3D models, ACM Transactions on Graphics (TOG), v.22 n.1, p.83-105, January
Yefeng Zheng , Huiping Li , David Doermann, A Parallel-Line Detection Algorithm Based on HMM Decoding, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.5, p.777-792, May 2005
Mark A. Ruzon , Carlo Tomasi, Edge, Junction, and Corner Detection Using Color Distributions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.11, p.1281-1295, November 2001
Ming-Hsuan Yang , David J. Kriegman , Narendra Ahuja, Detecting Faces in Images: A Survey, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.1, p.34-58, January 2002
A. K. Jain , M. N. Murty , P. J. Flynn, Data clustering: a review, ACM Computing Surveys (CSUR), v.31 n.3, p.264-323, Sept. 1999
shape comparison methods;image comparison;image processing;binary image;translation;position error tolerance;hausdorff distance;rigid motion
628552 | Stability of Phase Information. | This paper concerns the robustness of local phase information for measuring image velocity and binocular disparity. It addresses the dependence of phase behavior on the initial filters as well as the image variations that exist between different views of a 3D scene. We are particularly interested in the stability of phase with respect to geometric deformations, and its linearity as a function of spatial position. These properties are important to the use of phase information, and are shown to depend on the form of the filters as well as their frequency bandwidths. Phase instabilities are also discussed using the model of phase singularities described by Jepson and Fleet. In addition to phase-based methods, these results are directly relevant to differential optical flow methods and zero-crossing tracking. | Introduction
An important class of image matching techniques has emerged based on phase information; that is,
the phase behaviour in band-pass filtered versions of different views of a 3-d scene [3, 7, 8, 11, 13, 15,
17, 18, 25, 27, 30, 32, 33]. These include phase-difference and phase-correlation techniques for discrete
two-view matching, and the use of the phase gradient for the measurement of image orientation and
optical flow.
Numerous desirable properties of these techniques have been reported: For two-view matching, the
disparity estimates are obtained with sub-pixel accuracy, without requiring explicit sub-pixel signal
reconstruction or sub-pixel feature detection and localization. Matching can exploit all phase values
(not just zeros), and therefore extensive use is made of the available signal so that a dense set of
estimates is often extracted. Furthermore, because phase is amplitude invariant, the measurements
are robust with respect to smooth shading and lighting variations. With temporal sequences of images,
the phase difference can be replaced by a temporal phase derivative, thereby producing more accurate
measurements. These computations are straightforward and local in space-time, yielding efficient
implementations on both serial and parallel machines. There is the additional advantage in space-time
that different filters can be used initially to decompose the local image structure according to velocity
and scale. Then, the phase behaviour in each velocity-tuned channel can be used independently to
make multiple measurements of speed and orientation within a single image neighbourhood. This
is useful in the case of a single, densely textured surface where there exist several oriented image
structures with different contrasts, or multiple velocities due to transparency, specular reflection,
shadows, or occlusion [6, 7, 19]. In a recent comparison of several different optical flow techniques,
phase-based approaches often produced the most accurate results [1].
But despite these advantages, we lack a satisfying understanding of phase-based techniques and the
reasons for their success. The usual justification for phase-based approaches consists of the Fourier
shift theorem, an assumption that the phase output of band-pass filters is linear as a function of
spatial position, and a model of image translation between different views. But because of the local
spatiotemporal support of the filters used in practice, the Fourier shift theorem does not strictly apply.
For example, when viewed through a window, a signal and a translated version of it , W (x)s(x) and
are not simply phase-shifted versions of one another with identical amplitude spectra.
For similar reasons, phase is not a linear function of spatial position (and time) for almost all inputs,
even in the case of pure translation. The quasi-linearity of phase often reported in the literature
depends on the form of the input and the filters used, and has not been addressed in detail. Also
unaddressed is the extent to which these techniques produce accurate measurements when there are
deviations from image translation. Fleet et al. [7, 8] suggested that phase has the important property
of being stable with respect to small geometric deformations of the input that occur with perspective
projections of 3-d scenes. They showed that amplitude is sensitive to geometric deformation, but they
provided no concrete justification for the stability of phase.
This paper addresses several issues concerning phase-based matching, the behaviour of phase in-
formation, and its dependence on the band-pass filters. It presents justification for the claims of phase
stability with respect to geometric deformations, and of phase linearity. By the stability of some image
property, we mean that a small deformation of the input signal causes a similar deformation in the
image property, so that the behaviour of that property reflects the structure of the input that we wish
to measure. Here, we concentrate on small affine deformations like those that occur between left and
right stereo views of a slanted surface. For example, scale variations between left and right binocular
views of a smooth surface are often as large as 20% [24]. Although phase deformations do not exactly
match input deformations, they are usually close enough to provide reasonable measurements for many
vision applications.
We address these issues in the restricted case of 1-d signals, where the relevant deformations are
translations and dilations. Using a scale-space framework, we simulate changes in the scale of the
input by changing the tuning of a band-pass filter. In this context our concerns include the extent to
which phase is stable under small scale perturbations of the input, and the extent to which phase is
generally linear through space. These properties are shown to depend on the form of the filters used
and their frequency bandwidths. Situations in which phase is clearly unstable and leads to inaccurate
matching are also discussed, as are methods for their detection. Although we deal here with one
dimension, the stability analysis extends to affine deformations in multiple dimensions.
We begin in Section 2 with a brief review of phase-based matching methods and several comments
on the dependence of phase on the initial filters. Section 3 outlines the scale-space framework used in
the theoretical development that follows in Sections 4 and 5, and Section 6 discusses phase instabilities
in the neighbourhoods of phase singularities. Although the majority of the theoretical results assume
white noise as input, Section 7 briefly discusses the expected differences that arise with natural images.
Finally, Sections 8 and 9 draw conclusions and outline some topics that require further research.
2 Phase Information from Band-Pass Filters
Phase, as a function of space and/or time, is defined here as the complex argument of a complex-valued
band-pass signal. The band-pass signal is typically generated by a linear filter with a complex-valued
impulse response (or kernel), the real and imaginary components of which are usually even and odd
symmetric. Gabor functions (sinusoidally-modulated Gaussian windows [10]) are perhaps the most
commonly used filter [7, 8, 15, 18, 27, 30, 33]. Other choices of filter include: sinusoidally-modulated
square-wave (constant) windows [30]; filters with nonlinear phase that cycles between $-\pi$ and $\pi$
only once within the support width of the kernel [31, 32, 33]; quadrature-pair steerable pyramids
constructed with symmetric kernels and approximations to their Hilbert transforms [9, 28]; log-normal
filters [5, 31]; and derivative-of-Gaussian filters, the real and imaginary parts of which are the first
and second directional derivatives of a Gaussian envelope [31]. Phase can also be defined from a single
real-valued band-pass signal by creating a complex signal, the real and imaginary parts of which are
the original band-pass signal and its Hilbert transform. In all these cases, the expected behaviour of
phase depends on the filter.
Given the initial filters, phase-based optical flow techniques define image velocity in terms of the
instantaneous velocity of level (constant) phase contours (given by the spatiotemporal phase gradient).
Phase-based matching methods, based on two images, define disparity as the shift necessary to align
the phase values of band-pass filtered versions of the two signals [13, 27, 30]. In this case, a disparity
predictor can be constructed using phase differences between the two views: For example, let $R_l(x)$
and $R_r(x)$ denote the output of band-pass filters applied to left and right binocular signals (along
lines). Their respective phase components are then written as $\phi_l(x) = \arg[R_l(x)]$ and
$\phi_r(x) = \arg[R_r(x)]$, and binocular disparity is measured (predicted) as
$$d(x) \;=\; \frac{[\,\phi_l(x) - \phi_r(x)\,]_{2\pi}}{k(x)}, \qquad (1)$$
where $[\phi]_{2\pi}$ denotes the principal part of $\phi$, that is, $[\phi]_{2\pi} \in (-\pi,\,\pi]$, and $k(x)$ denotes some measure
of the underlying frequency of the signal in the neighbourhood of $x$. The frequency $k(x)$ may be
approximated by the frequency to which the filter is tuned, or measured using the average phase
derivative (i.e., $(\phi'_l(x) + \phi'_r(x))/2$) from the left and right filter outputs [8, 18]. The phase derivative
is often referred to as the instantaneous frequency of a band-pass signal [2, 26]. Note that linear phase,
and therefore constant instantaneous frequency, implies a sinusoidal signal.
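As a concrete illustration of the predictor in (1), the following sketch filters two 1-d signals with a complex Gabor kernel and converts the principal part of their phase difference into a disparity estimate, using the filter's tuning frequency in place of $k(x)$. It is a minimal sketch; the kernel parameters, test signal, and shift are illustrative choices rather than values taken from the text.

```python
import numpy as np

def gabor_kernel(wavelength, sigma):
    """Complex Gabor kernel: Gaussian window modulated by exp(i*k0*x)."""
    k0 = 2.0 * np.pi / wavelength
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    g = np.exp(-x**2 / (2.0 * sigma**2))
    return g * np.exp(1j * k0 * x) / np.linalg.norm(g)

def phase_difference_disparity(left, right, wavelength=16.0, sigma=8.0):
    """Disparity predicted from the principal part of the left/right phase
    difference divided by the filter's tuning frequency (cf. eq. (1))."""
    k = gabor_kernel(wavelength, sigma)
    Rl = np.convolve(left, k, mode='same')
    Rr = np.convolve(right, k, mode='same')
    dphi = np.angle(Rl * np.conj(Rr))      # [phi_l - phi_r] in (-pi, pi]
    k0 = 2.0 * np.pi / wavelength          # tuning frequency approximates k(x)
    return dphi / k0

# toy example: the right view is the left view shifted by 3 pixels
rng = np.random.default_rng(0)
left = rng.standard_normal(512)
right = np.roll(left, 3)
d = phase_difference_disparity(left, right)
print(np.median(d[50:-50]))                # close to 3, away from the wrapped borders
```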
One reason for the success of the predictor in (1) is the fact that phase is often nearly linear over
relatively large spatial extents, with instantaneous frequencies close to the filter tuning. In other words,
the filter output is nearly sinusoidal, so that phase and instantaneous frequency at a single location
provide a good model for the local phase structure of the filter response, in terms of a complex signal
winding sinusoidally (with some amplitude variation) about the origin of the complex plane. When the
left and right signals are shifted versions of one another, and phase is precisely linear over a distance
larger than the shift, then the predictor produces an exact measurement. That is, displacements of a
linear function $f(x)$ are given by differences in function value divided by the derivative of the function: if $f$ is linear, then $f(x) - f(x-d) = d\,f'(x)$, and hence $d = [f(x) - f(x-d)]/f'(x)$.
In more general cases the predictor produces estimates of disparity, and may be used iteratively
to converge to accurate matches of the two local phase signals [8]. The size of the neighbourhood
within which phase is monotonic (and therefore unique) determines the range of disparities that may
be handled correctly by the predictor, and therefore its domain of convergence. Of course, for a
sinusoidal signal the predictor can handle disparities up to half the wavelength of the signal. Because
the domain of convergence depends on the wavelength of the band-pass signal, and hence on the
scale of the filter, small kernels tuned to high frequencies have small domains of convergence. This
necessitates some form of control strategy, such as coarse-to-fine propagation of disparity estimates.
A detailed analysis of the domain of convergence, and control strategies are beyond the scope of this
paper.
Within the domain of convergence, the accuracy of the predictor, and hence the speed with which it
converges, depends on the linearity of phase. As discussed in [8], predictor errors are a function of the
magnitude of disparity and higher-order derivatives of phase. The accuracy of the final measurement,
to which the predictor has converged, will also depend critically on phase stability (discussed below
in detail).
Phase linearity and monotonicity are functions of both the input signal and the form of the filters.
With respect to the filters, it is generally accepted that the measurement of binocular disparity and
optical flow should require only local support, so that the filters should be local in space-time as well as
band-pass. For example, although filter kernels of the form $e^{i k_0 x}$ (i.e. Fourier transforms) produce
signals with linear phase, they are not local. It is also important to consider the correlation between
real and imaginary components of the kernel, as well as differences in their amplitude spectra. To the
extent that they are correlated the phase signal will be nonlinear because the filter response will form
an elliptical path, elongated along orientations of $\pm\pi/4$ in the complex plane. If they have different
amplitude spectra, then the real and imaginary parts will typically contain different amounts of power,
causing elliptical paths in the complex plane elongated along the real or imaginary axes. Finally, it
is natural to ensure that the filters have no dc sensitivity, for this will often mean that the complex
filter response will not wind about the origin, thereby also introducing nonlinearities, and smaller
domains of convergence [30, 31]. Fortunately, we can deal with these general concerns relatively easily
by employing complex-valued filters with local support in space-time, the imaginary parts of which
are Hilbert transforms of the real parts (i.e. quadrature-pair filters). The real and imaginary parts
are then uncorrelated with similar amplitude spectra and no dc sensitivity.
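A minimal sketch of this construction, assuming an arbitrary Hanning-windowed cosine as the even, zero-mean real part and scipy's Hilbert transform to supply the odd imaginary part; the window, length, and tuning are illustrative choices, not ones prescribed by the text.

```python
import numpy as np
from scipy.signal import hilbert

def quadrature_pair(n=65, k0=0.6):
    """Even, zero-mean band-pass kernel plus (approximately) its Hilbert transform."""
    x = np.arange(n) - n // 2
    window = np.hanning(n)
    real_part = window * np.cos(k0 * x)
    real_part -= window * (real_part.sum() / window.sum())   # remove dc sensitivity
    return hilbert(real_part)        # complex signal: real part + i * Hilbert transform

kern = quadrature_pair()
re, im = kern.real, kern.imag
print(np.dot(re, im))                                    # ~0: components are uncorrelated
print(np.abs(np.fft.fft(re))[1:5], np.abs(np.fft.fft(im))[1:5])  # similar amplitude spectra
```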
Gabor functions have Gaussian amplitude spectra with infinite extent in the frequency domain,
and therefore some amount of dc sensitivity. Because of the substantial power in natural signals at
low frequencies this dc sensitivity often introduces a positive bias in the real part of the response.
This problem can be avoided for the most part with the use of small bandwidths (usually less than
one octave), for which the real and imaginary parts of Gabor functions are close approximations to
quadrature pairs. It is also possible to modify the real (symmetric) part of the Gabor kernel by
subtracting its dc sensitivity from it (e.g., [7]). Log-normal kernels have Gaussian amplitude spectra
in log-frequency space, and are therefore quadrature-pair filters. Modulated square-wave filters [30]
and derivative-based real and imaginary parts may have problems because their real and imaginary
parts do not have the same amplitude spectra. In both cases, the odd-symmetric components of the kernels
are more sensitive to low frequencies than the even-symmetric components. This introduces a
bias towards phase values of $\pm\pi/2$.
3 Scale-Space Framework
To address questions of phase stability and phase linearity, we restrict our attention to band-pass
filters with complex-valued kernels $K(x;\lambda)$, the real and imaginary parts of which form quadrature
pairs (i.e. they are Hilbert transforms of one another [26]). Let $\lambda > 0$ denote a scale parameter that
determines the frequency pass-band to which the filter is tuned. Also, let the kernels be normalized
such that
$$\langle K(x;\lambda),\; K(x;\lambda)\rangle \;=\; 1,$$
where the inner product is defined by
$$\langle f,\, g\rangle \;\equiv\; \int f(x)\,\overline{g(x)}\; dx,$$
and $\overline{g}$ denotes the complex conjugate of $g$. For convenience, we also assume translational invariance
and self-similarity across scale (i.e., wavelets [20]), so that $K(x;\lambda)$ is, up to normalization, a dilated copy of a single prototype kernel, $K(x;\lambda) \propto K(x/\lambda;\,1)$.
Wavelets are convenient since, because of their self-similarity, their octave bandwidth remains constant,
independent of the scale to which they are tuned. Our results also extend to filters other than wavelets
such as windowed Fourier transforms for which the spatial extent of the effective kernels is independent
of $\lambda$.
The convolution of $K(x;\lambda)$ with an input signal $I(x)$ is typically written as
$$S(x;\lambda) \;=\; K(x;\lambda) * I(x). \qquad (5)$$
Because $K(x;\lambda)$ is complex-valued, the response $S(x;\lambda)$ is also complex-valued, and can be expressed using
amplitude and phase as in $S(x;\lambda) = \rho(x;\lambda)\,e^{i\phi(x;\lambda)}$,
where $\rho(x;\lambda) = |S(x;\lambda)|$ and $\phi(x;\lambda) = \arg[S(x;\lambda)]$.
The first main concern of this paper is the expected stability of phase under small scale perturbations
of the input. If phase is not stable under scale variations between different views, then the
phase-based measurements of velocity or binocular disparity will not be reliable. To examine this, we
simulate changes in the scale of the input by changing the scale tuning of the filter; if one signal is a
translation and dilation of another, as in
$$I_1(x) \;=\; I_0(a(x)), \qquad a(x) = a_1 x + a_0,$$
then, because of the filters' self-similarity, the responses $S_0(x;\lambda) = K(x;\lambda)*I_0(x)$ and $S_1(x;\lambda) = K(x;\lambda)*I_1(x)$
will satisfy
$$S_1(x;\lambda) \;\propto\; S_0(a(x);\; a_1\lambda).$$
That is, if filters tuned to $\lambda_0$ and $\lambda_1$ were applied to $I_0(x)$ and $I_1(x)$, then the structure extracted
from each view would be similar (up to a scalar multiple) and the filter outputs would be related by
precisely the same deformation a(x). Of course, in practice we apply the same filters to I 0 (x) and
I 1 (x) because the scale factor a 1 that relates the two views is unknown. For phase-matching to yield
accurate estimates of a(x), the phase of the filter output should be insensitive to small scale variations
of the input.
The second concern of this paper is the extent to which phase is linear with respect to spatial
position. Linearity affects the ease with which the phase signal can be differentiated in order to estimate
the instantaneous frequency of the filter response. Linearity also affects the speed and accuracy of
disparity measurement based on phase-difference disparity predictors, as well as the typical extent of
the domain of convergence; if the phase signal is exactly linear, then the disparity can be computed
in just one step, without requiring iterative refinement [8, 18, 27, 30], and the domain of convergence
is $\pm\pi$. In practice, because of deviations from linearity and monotonicity, the reliable domain of
convergence is usually less than $\pm\pi$.
For illustration, let $K(x;\lambda)$ be a Gabor kernel 2 [10], $Gabor(x;\,\sigma(\lambda),\,k(\lambda))$, where
$$Gabor(x;\,\sigma,\,k) \;=\; G(x;\,\sigma)\; e^{i k x}, \qquad (8)$$
and $G(x;\sigma)$ denotes a Gaussian window with standard deviation $\sigma$. The peak tuning frequency of the Gabor filter is given by
$$k(\lambda) \;=\; 2\pi/\lambda. \qquad (11a)$$
Let the extent of the Gaussian envelope be measured at one standard deviation, and let the bandwidth
$\beta$ be close to one octave. Then, the standard deviation of the amplitude spectrum is
$$\sigma_k(\lambda) \;=\; k(\lambda)\,\frac{2^{\beta} - 1}{2^{\beta} + 1}. \qquad (11b)$$
From this, it is straightforward to show that the radius of spatial support is $\sigma(\lambda) = 1/\sigma_k(\lambda)$.
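A small numerical sketch of these relations: given a wavelength $\lambda$ and an octave bandwidth $\beta$ (measured at one standard deviation), it derives $k(\lambda)$, $\sigma_k(\lambda)$, and $\sigma(\lambda) = 1/\sigma_k(\lambda)$, builds the corresponding Gabor kernel, and returns $\rho(x;\lambda)$ and $\phi(x;\lambda)$ for a 1-d input. The parameter values are illustrative only.

```python
import numpy as np

def gabor_params(lam, beta):
    """Peak tuning frequency, spectral std dev, and spatial support radius."""
    k = 2.0 * np.pi / lam                            # k(lambda)
    sigma_k = k * (2.0**beta - 1.0) / (2.0**beta + 1.0)
    sigma = 1.0 / sigma_k                            # radius of spatial support
    return k, sigma_k, sigma

def gabor_response(signal, lam, beta=0.8):
    """Amplitude rho(x; lam) and phase phi(x; lam) of the Gabor-filtered signal."""
    k, _, sigma = gabor_params(lam, beta)
    x = np.arange(-int(4 * sigma), int(4 * sigma) + 1)
    kern = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * k * x)
    kern /= np.linalg.norm(kern)                     # unit L2 norm
    S = np.convolve(signal, kern, mode='same')
    return np.abs(S), np.angle(S)

rng = np.random.default_rng(1)
I = rng.standard_normal(1024)
expansion = {lam: gabor_response(I, lam) for lam in (16, 20, 25, 32)}  # roughly one octave of scales
```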
Figure 1A shows a signal composed of a sample of white Gaussian noise concatenated with a sample
of red Gaussian noise. 3 The two middle images show the amplitude and phase components of the scale-space
Gabor response, $\rho(x;\lambda)$ and $\phi(x;\lambda)$; spatial position is shown on the horizontal axis, and log
scale is shown on the vertical axis (over two octaves). The bottom images show their level contours.
In the context of the scale-space expansion, an image property is said to be stable for image matching
where its level contours are vertical. Figure 1 shows that $\rho(x;\lambda)$ depends significantly on scale as its
level contours are not generally vertical. By contrast, the phase structure is generally stable, except
for several isolated regions to be discussed below. Gradient-based techniques applied to low-pass
or band-pass filtered images produce inaccurate velocity estimates, in part, because they implicitly
require that both amplitude and phase behaviour be stable with respect to scale perturbations.
The response $S(x;\lambda)$ defined in (5) is referred to as a scale-space expansion of $I(x)$. It is similar
to band-pass expansions defined by the Laplacian of a Gaussian ($\nabla^2 G$), but it is expressed in terms
of complex-valued filters (cf. [16, 20, 34, 35]). Interestingly, zero-crossings of the output of filters derived from
directional derivatives of Gaussian envelopes are equivalent to crossings of constant phase of
complex-valued band-pass filters, the imaginary parts of which are Hilbert transforms of the corresponding
Gaussian derivatives. Here we use the scale-space framework to investigate the effects of
small perturbations of input scale on image properties that might be used for matching. We are
not proposing a new multi-scale representation, nor are we interpreting phase behaviour in terms of
specific image features such as edges.
2 As mentioned above, although the real and imaginary parts of Gabor kernels do not have identical amplitude spectra,
they are a good approximation to a quadrature pair for small bandwidths (e.g. less than one octave, measured at one
standard deviation of the Gaussian spectrum).
3 That is, a sample of white noise smoothed with an exponential kernel $\exp(-|x|/5)$.
Figure 1. Gabor Scale-Space Expansion: The input signal (A) consists of a sample of white
Gaussian noise (left side) and a sample of red Gaussian noise (right side). The remaining panels
show the amplitude and phase components of $S(x;\lambda)$ generated by a Gabor filter; the horizontal and
vertical axes represent spatial position and log scale (over two octaves). (B) Amplitude response
$\rho(x;\lambda)$; (C) Phase response $\phi(x;\lambda)$; (D) Level contours of $\rho(x;\lambda)$; (E) Level contours of $\phi(x;\lambda)$.
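For reference, a sketch of how a test input like that of Figure 1 can be generated, following footnote 3 (the "red" half is white noise smoothed with the exponential kernel $\exp(-|x|/5)$); the lengths and random seed are arbitrary.

```python
import numpy as np

def figure1_like_input(n=512, seed=0):
    """White Gaussian noise concatenated with exponentially smoothed ('red') noise."""
    rng = np.random.default_rng(seed)
    white = rng.standard_normal(n)
    t = np.arange(-25, 26)
    expo = np.exp(-np.abs(t) / 5.0)       # exponential smoothing kernel, cf. footnote 3
    expo /= expo.sum()
    red = np.convolve(rng.standard_normal(n), expo, mode='same')
    return np.concatenate([white, red])

I = figure1_like_input()
```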
4 Kernel Decomposition
The stability and linearity of phase can be examined in terms of the differences in phase between
an arbitrary scale-space location and other points in its neighbourhood. Towards this end, let $S_j \equiv S(p_j)$
denote the filter response at scale-space position $p_j = (x_j,\,\lambda_j)$, which can also
be expressed using inner products instead of convolution: $S_j = \langle K_j(x),\, I(x)\rangle$, where
$K_j(x) \equiv K(x_j - x;\,\lambda_j)$ denotes the effective kernel at $p_j$.
Phase differences in the neighbourhood of an arbitrary point $p_0$, as a
function of the relative scale-space position of neighbouring points $p_1$, with $(\Delta x,\,\Delta\lambda) \equiv p_1 - p_0$, can
be written as
$$\Delta\phi \;=\; \arg[S_1] - \arg[S_0]. \qquad (13)$$
Phase is perfectly stable when $\Delta\phi$ is constant with respect to changes in scale $\Delta\lambda$, and it is linear
with respect to spatial position when $\Delta\phi$ is a linear function of $\Delta x$.
To model the behaviour of $S(x;\lambda)$ and $\Delta\phi$ in the neighbourhood of $p_0$ we write the scale-space
response at $p_1$ in terms of $S_0$ and a residual term $R(p_1, p_0)$ that goes to zero as $\|p_1 - p_0\| \to 0$; that
is,
$$S_1 \;=\; z(p_1, p_0)\, S_0 \;+\; R(p_1, p_0). \qquad (14)$$
Equation (14) is easily derived if the effective kernel at $p_1$, that is $K_1(x)$, is written as the sum of two
terms, one of which is a scalar multiple of $K_0(x)$, and the other orthogonal to $K_0(x)$:
$$K_1(x) \;=\; z(p_1, p_0)\, K_0(x) \;+\; H(x;\, p_1, p_0), \qquad (15)$$
where the complex scalar $z(p_1, p_0)$ is given by
$$z(p_1, p_0) \;=\; \langle K_1(x),\, K_0(x)\rangle, \qquad (16)$$
and the residual kernel $H(x;\,p_1,p_0) = K_1(x) - z(p_1,p_0)\,K_0(x)$ is orthogonal to $K_0(x)$.
Equation (14) follows from (15) with $R(p_1, p_0) = \langle H(x;\,p_1,p_0),\, I(x)\rangle$. The scalar $z$ reflects the
cross-correlation of the kernels $K_0(x)$ and $K_1(x)$. The behaviour of $R(p_1, p_0)$ is related to the signal
structure to which $K_1(x)$ responds but $K_0(x)$ does not.
For notational convenience below, let $z_1 \equiv z(p_1, p_0)$, $H_1(x) \equiv H(x;\,p_1,p_0)$, and $R_1 \equiv R(p_1, p_0)$.
Remember that $z_1$, $H_1$, and $R_1$ are functions of scale-space position $p_1$ in relation to $p_0$.
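The decomposition (14)-(16) is easy to check numerically with discrete, unit-norm kernels: compute $z_1$ as an inner product, form the residual kernel $H_1$, and verify that $S_1 = z_1 S_0 + R_1$ for a random input. The Gabor kernels and offsets below are illustrative, and the inner product uses the convention $\langle f, g\rangle = \sum f\,\bar g$.

```python
import numpy as np

def effective_gabor(x0, lam, sigma, n=1024):
    """Discrete effective Gabor kernel centred at x0, normalized to unit L2 norm."""
    x = np.arange(n)
    k = 2.0 * np.pi / lam
    g = np.exp(-(x - x0)**2 / (2.0 * sigma**2)) * np.exp(1j * k * (x - x0))
    return g / np.linalg.norm(g)

def inner(f, g):
    return np.sum(f * np.conj(g))

n = 1024
K0 = effective_gabor(512, lam=16.0, sigma=8.0, n=n)   # kernel at p0
K1 = effective_gabor(515, lam=17.0, sigma=8.5, n=n)   # nearby p1 (small dx, dlam)
z1 = inner(K1, K0)                                    # cf. eq. (16)
H1 = K1 - z1 * K0                                     # residual kernel
I = np.random.default_rng(2).standard_normal(n)
S0, S1, R1 = inner(K0, I), inner(K1, I), inner(H1, I)
print(abs(S1 - (z1 * S0 + R1)))                       # ~0: eq. (14) holds exactly
print(abs(inner(H1, K0)))                             # ~0: H1 is orthogonal to K0
```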
Figure 2. Sources of Phase Variation: This shows the formation of $S_1$ in terms of $S_0$, the complex
scalar $z_1 \equiv z(p_1, p_0)$, and the additive residual $R_1$, drawn in the complex plane (with real and imaginary axes).
Equation (14), depicted in Figure 2, shows that the phase of S 1 can differ from the phase of S 0
because of the additive phase shift due to z 1 , and the phase shift caused by the additive residual term
R 1 . Phase will be stable under small scale perturbations whenever both the phase variation due to z 1
as a function of scale and $|R_1|/|z_1 S_0|$ are reasonably small. If $|R_1|/|z_1 S_0|$ is large, then phase remains
stable only when $R_1$ is in phase with $z_1 S_0$, that is, if $\arg[R_1] \approx \arg[z_1 S_0]$; otherwise, phase may vary
wildly as a function of either spatial position or small scale perturbations.
5 Phase Stability Given White Gaussian Noise
To characterize the stability of phase behaviour through scale-space we first examine the response of
$K(x;\lambda)$ and its phase behaviour to stationary, white Gaussian noise. Using the kernel decomposition
(15), we derive approximations to the mean phase difference $E[\Delta\phi]$ and the variation about the mean,
$E[\,|\Delta\phi - E[\Delta\phi]|\,]$, where $E[\cdot]$ denotes expectation. The mean provides a prediction for
the phase behaviour, and the expected variation about the mean amounts to our confidence in the
prediction. These approximations can be shown to depend only on the cross-correlation $z_1$ (16), and
are outlined below; they are derived in further detail in Appendix A.
Given white Gaussian noise, the two signals $R_1$ and $z_1 S_0$ are independent (because the kernels $K_0$ and $H_1$
are orthogonal), and the phase of $S_0$ is uniformly distributed over $(-\pi,\,\pi]$. If we
also assume that $\arg[R_1]$ and $\arg[S_0]$ are uncorrelated and that $\arg[R_1]$ is uniformly distributed, then
the residual signal $R_1$ does not affect the mean phase difference. Therefore we approximate the mean
$E[\Delta\phi]$ by
$$E[\Delta\phi] \;\approx\; \mu(z_1) \;\equiv\; \arg[z_1], \qquad (18)$$
where $z_1$ is a function of scale-space position. Then, from (13) and (18), the component of $\Delta\phi$ about
the approximate mean is given by (cf. Figure 2)
$$\Delta\phi - \mu(z_1) \;=\; \arg\!\left[\,1 + \frac{R_1}{z_1 S_0}\,\right]. \qquad (19)$$
The expected magnitude of $\Delta\phi - \mu(z_1)$ measures the spread of the distribution of $\Delta\phi$ about
the mean; 4 it is a function of the magnitude of $z_1 S_0$ and the magnitude of the component of $R_1$
perpendicular to the direction of $z_1 S_0$ in the complex plane (see Figure 2). With the assumptions 5
that $|R_1|/|z_1 S_0|$ is not large and that $\arg[R_1] - \arg[z_1 S_0]$ is uniformly distributed, it is shown in Appendix A that an
approximate bound, $b(z_1)$, on $E[\,|\Delta\phi - \mu(z_1)|\,]$ is given by (20).
It is tightest for small variations about the mean, that is, small values of $\Delta\phi - \mu(z_1)$.
5.1 Gabor Kernels
For illustrative purposes we apply these results to Gabor filters. Although they only approximate
quadrature-pair filters for small bandwidths, they admit simple analytic derivation for z 1 which is the
basis for the stability measures. For many other filters, it is more convenient to derive z 1 numerically
from discrete kernels.
For Gabor kernels, $z_1$ is given by (see Appendix C.1)
$$z_1 \;=\; \sqrt{\frac{2\sigma_0\sigma_1}{\sigma_0^2+\sigma_1^2}}\;
\exp\!\left[\frac{-\Delta x^2}{2(\sigma_0^2+\sigma_1^2)}\right]
\exp\!\left[\frac{-\Delta k^2\,\sigma_0^2\sigma_1^2}{2(\sigma_0^2+\sigma_1^2)}\right]
\exp\!\left[\,i\,\Delta x\,\frac{k_0\sigma_0^2 + k_1\sigma_1^2}{\sigma_0^2+\sigma_1^2}\right], \qquad (21)$$
where $k_j = k(\lambda_j)$ defines the filter tunings (11a), $\sigma_j = \sigma(\lambda_j)$ defines the support widths (11b), and $\Delta k = k_1 - k_0$.
From (21), the approximate mean phase difference, that is $\arg[z_1]$, is given by
$$\mu(z_1) \;=\; \Delta x\,\frac{k_0\sigma_0^2 + k_1\sigma_1^2}{\sigma_0^2+\sigma_1^2}. \qquad (22)$$
The expected magnitude of $\Delta\phi$ about the mean (20) can also be determined from (21) straightforwardly.
In particular, from (20) we expect $b(z_1)$ to behave linearly in the neighbourhood of $p_0$, because
for sufficiently small $\|(\Delta x,\,\Delta\lambda)\|$ it can be shown from (21) that $1 - |z_1|$ is $O(\|(\Delta x,\,\Delta\lambda)\|^2)$.
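The sketch below evaluates $z_1$ numerically for Gabor kernels under a few scale perturbations, and reports the predicted mean phase drift $\arg[z_1]$ together with a proxy for the spread. Since the closed form (20) for $b(z_1)$ is not reproduced above, the quantity $\sqrt{1 - |z_1|^2}/|z_1|$ is used purely as an assumed stand-in: it is zero at $p_0$ and grows as the kernels decorrelate, but it should not be read as the paper's exact bound.

```python
import numpy as np

def unit_gabor(dx, lam, beta, n=2048):
    k = 2.0 * np.pi / lam
    sigma = (2.0**beta + 1.0) / ((2.0**beta - 1.0) * k)   # 1 / sigma_k(lambda)
    x = np.arange(n) - n / 2
    g = np.exp(-(x - dx)**2 / (2.0 * sigma**2)) * np.exp(1j * k * (x - dx))
    return g / np.linalg.norm(g)

def z1(dx, dlam_octaves, lam0=64.0, beta=1.0):
    K0 = unit_gabor(0.0, lam0, beta)
    K1 = unit_gabor(dx, lam0 * 2.0**dlam_octaves, beta)
    return np.sum(K1 * np.conj(K0))

for dlam in (0.0, 0.1, 0.2):                      # relative scale changes, in octaves
    z = z1(dx=0.0, dlam_octaves=dlam)
    mu = np.angle(z)                              # predicted mean phase drift, arg[z1]
    b_proxy = np.sqrt(max(0.0, 1.0 - abs(z)**2)) / abs(z)   # assumed spread proxy
    print(f"dlam={dlam:.1f} oct  mu={mu:+.3f} rad  spread~{b_proxy:.3f} rad")
```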
Figure 3 illustrates this behaviour in the restricted cases of pure dilation and pure translation
between points $p_1$ and $p_0$. Figure 3A shows $\mu(z_1)$ with error bars (the expected deviation about the
mean, $b(z_1)$) as a function of scale for a vertical slice through a Gabor scale-space expansion with
bandwidth $\beta = 1$. Figure 3B shows $\mu(z_1)$ and $b(z_1)$ as a function of spatial position for a
horizontal slice through the same scale-space expansion. This shows clearly that the mean is constant
with respect to scale changes and linear as a function of spatial position. The behaviour of $b(z_1)$, the
variation about the mean, shows that the mean provides a very good model for the expected phase
behaviour for small translations and dilations. For larger translations and dilations we expect a larger
variation about the mean. This has a direct impact on disparity measurement when dilation occurs
between two views, suggesting that errors will increase with the amount of dilation due to spatial drift
of the phase contours.

4 The expected value of $|\Delta\phi|$ is one possible measure of the spread of the probability density function. Compared
to the standard deviation it is less sensitive to outliers [12], and in this case, it yields analytic results while the second
moment does not (see Appendix A).

5 This assumption means that $p_0$ is not in the immediate neighbourhood of a singular point, where
$|S_0|$ is very small. Singularity neighbourhoods are discussed below in Section 6.
In the case of translation, the increased variation about the mean is caused by two factors, namely,
phase nonlinearities, and the distribution of the instantaneous frequencies in the filter output, which
is a function of the bandwidth of the filter. 6 When the tuning frequency of the filter is used to
approximate the instantaneous frequency in the denominator of the disparity predictor (1) as in
[27, 13, 15], then b(z 1 ) provides a direct measure of the expected predictor errors. Otherwise, if the
instantaneous frequency is measured explicitly as in [8, 18], then b(z 1 ) can be used for an upper bound
on expected predictor errors. The variation about the mean b(z 1 ) can also be used indirectly as an
indication of the range of disparities that a band-pass channel will reliably measure before aliasing
occurs (when phase wraps from $\pi$ to $-\pi$). This is important to consider when developing a control
strategy to handle relatively large disparities with predictors at several scales and spatial positions.
A more complete illustration of the expected phase behaviour in the scale-space neighbourhood of
$p_0$ is given in Figure 4. Figure 4A shows level contours of $\mu(z_1)$ for a Gabor filter. The point $p_0$
lies at the centre, and represents a generic location away from phase singularities (discussed
below). Log scale and spatial position in the neighbourhood of $p_0$ are shown on vertical and horizontal
axes. These contours illustrate the expected mean phase behaviour, which has the desired properties
of stability through scale and linearity through space. The contours are essentially vertical near $p_0$,
and for fixed $\Delta k$, the mean phase behaviour $\mu(z_1)$ is a linear function of $\Delta x$. It is also interesting to
note that the mean phase behaviour does not depend significantly on the bandwidth of the filter; for
Gabor filters it is evident from (21) that $\mu(z_1)$ is independent of $\beta$.
Figures 4B and 4C show level contours of $b(z_1)$ for Gabor kernels with bandwidths $\beta$ of 0.8 and
1.0 octaves. In both cases, $b(z_1)$ is monotonically decreasing as one approaches $p_0$ in the centre.
As the scale-space distance to $p_0$ decreases, so does the relative magnitude of $R_1$, and therefore so does the expected
fluctuation in the phase of $S_1$ about the phase of $z_1 S_0$. By design, $b(z_1)$ is a measure of the distribution
of phase differences about the mean. The contours in Figure 4 show that the expected deviation from
$\mu(z_1)$ is small in the vicinity of $p_0$. Since $\mu(z_1)$ has the properties we desire (stability through scale
and linearity through space near p 0 ), we can also view b(z 1 ) as a direct measure of phase stability
and phase linearity.
6 The expected distribution of instantaneous frequencies for a given filter is discussed in [6]. A significant proportion
of phase variability is due to the variation in instantaneous frequency in addition to phase nonlinearities. However, the
separation of these two causes is beyond the scope of this paper, as it is not the principal issue.
Figure 3. Approximate Mean and Variation About the Mean of $\Delta\phi$ for 1-D Slices of Scale-Space:
The expected behaviour of $\Delta\phi$ is shown for vertical and horizontal slices through the middle
of the scale-space in Figure 4, for Gabor filters with $\beta = 1$. The plots show the approximate mean
$\mu(z_1)$ and the variation about the mean $b(z_1)$ (as error bars); the vertical axes give phase difference
(mod $2\pi$), and the horizontal axes give relative scale (in octaves) and spatial location (in $\lambda_0$), respectively.
(A) Phase behaviour as a function of $\Delta\lambda$ while $\Delta x = 0$. (B) Phase behaviour as a function of $\Delta x$ for $\Delta\lambda = 0$.
Figure 4. Phase Stability with Gabor Kernels: Scale-space phase behaviour near $p_0$ is
shown with log scale on the vertical axis (over two octaves) and spatial position
on the horizontal axis; $p_0$ lies at the centre of each panel. (A) Level
contours of $\mu(z_1)$ as a function of scale-space position. These contours are
independent of the bandwidth. (B) and (C) Level contours of $b(z_1)$ are shown for $\beta = 0.8$ and $\beta = 1.0$ octaves.
Each case shows several level contours of $b(z_1)$; the innermost contours correspond to $b(z_1) = 0.1\pi$.
It is also evident from Figures 4B and 4C that b(z 1 ) depends on the bandwidth of the filter. This
can be explained from the dependence of oe 0 and oe 1 in (21) on bandwidth as given by (11b). As the
bandwidth increases the amplitude spectra of filters tuned to nearby scales will overlap to a greater
extent, and phase is therefore stable for larger scale perturbations of the input. On the other hand, an
increase in bandwidth implies a decrease in the spatial extent of the kernels, and therefore a decrease
in the spatial extent over which phase is generally linear. The smallest contours in Figures 4B and 4C
correspond to $b(z_1) = 0.1\pi$, which amounts to a phase difference of about
$\pm 5\%$ of a wavelength. This contour encloses approximately $\pm 20\%$ of an octave vertically
and $\pm 20\%$ of a wavelength spatially. In this case, relative scale changes of 10% and 20% are typically
accompanied by phase shifts of less than 3.5% and 6.6% of a wavelength respectively.
5.2 Verification of Approximations
There are other ways to illustrate the behaviour of phase differences as a function of scale-space posi-
tion. Although they do not yield as much explanatory insight they help to validate the approximations
discussed above.
First, following Davenport and Root [4], it can be shown 7 that the probability density function for
$\Delta\phi$ at scale-space positions $p_0$ and $p_1$, for a quadrature-pair kernel and a white Gaussian noise
input, can be written in closed form (23) for $-\pi \le \Delta\phi < \pi$, with coefficients that depend only on $z_1$.
Given the density function, we can use numerical integration to find its mean behaviour and the expected variation about the
mean.

7 See [6] for details.

Figure 5. $E[\Delta\phi]$ and $E[\,|\Delta\phi - E[\Delta\phi]|\,]$ for White Noise Inputs: Scale-space phase behaviour
based on (23) for Gabor kernels. As above, $p_0$ is at the centre of each panel. The same level contours
as in Figure 4 are superimposed for comparison. (A) $E[\Delta\phi]$ for $\beta = 0.8$; (B) $E[\,|\Delta\phi - E[\Delta\phi]|\,]$ for
$\beta = 0.8$; (C) $E[\,|\Delta\phi - E[\Delta\phi]|\,]$ for $\beta = 1.0$.
Figures 5A and 5B show the behaviour of the mean, and the absolute variation about the
mean, of $\Delta\phi$ as functions of scale-space position for Gabor kernels with a bandwidth of 0.8 octaves.
Figure 5C shows the expected variation of $\Delta\phi$ about the mean for Gabor filters of 1.0 octave. The
mean behaviour in this case is not shown as it is almost identical to Figure 5A. In all three cases level
contours have been superimposed to better illustrate the behaviour. Intensity in Figure 5A reflects
values between $-\pi$ and $\pi$. Values in the other two range from 0 in the centre to $\pi/2$ at the edges
where the distribution of $\Delta\phi$ becomes close to uniform. Comparing Figures 4 and 5, notice that the
bound $b(z_1)$ in (20) is tightest for smaller values of $\Delta\phi$ (see Appendix A); the two smallest contours
in Figures 4 and 5 are extremely close.
It is also instructive to compare the phase behaviour predicted by $\mu(z_1)$ and $b(z_1)$ with actual
statistics of phase differences gathered from scale-space expansions of different input signals. Figure
6A shows the behaviour of $\Delta\phi$ predicted by $\mu(z_1)$ and $b(z_1)$ for one-octave Gabor filters as in Figure 3.
Figures 6B and 6C show statistical estimates of $E[\Delta\phi]$ and $E[\,|\Delta\phi - E[\Delta\phi]|\,]$ measured from scale-space
expansions of white noise and of scanlines of natural images. In both cases the observed phase
behaviour is in very close agreement with the predicted behaviour. The statistical estimates of
$E[\,|\Delta\phi - E[\Delta\phi]|\,]$ in these and other cases were typically only 1-5% below the bound $b(z_1)$ over the scales shown.

Figure 6. Predicted Versus Actual Phase Behaviour: The horizontal axis in each panel is scale
variation (in octaves). (A) Predicted behaviour of $\Delta\phi$ based on $\mu(z_1)$ and $b(z_1)$ for a Gabor filter
with $\beta = 1$. (B), (C) Statistical estimates of $E[\Delta\phi]$ and $E[\,|\Delta\phi - E[\Delta\phi]|\,]$ extracted from Gabor
scale-space expansions of white noise (middle) and scanlines from natural images (bottom).
5.3 Dependence on Filter
These quantitative measures of expected phase behaviour can be used to predict the performance of
phase matching as a function of the deformation between the input signals. The same measures can
also be used to compare the performance of different filters; for although Gabor filters (8) have been
popular [27, 7, 8, 15, 18], several alternatives have been suggested in the context of phase information.
Weng [30] used a self-similar family of kernels derived from a square-wave (or constant) window:
$$K(x;\lambda) \;\propto\; \begin{cases} e^{\,i k(\lambda)\, x}, & |x| \le \lambda/2 \\ 0, & \text{otherwise.} \end{cases} \qquad (24)$$
Figure 7. Modulated Square-Wave Windows: Scale-space phase behaviour is shown for modulated
square-wave windows (24) in comparison to Gabor filters. (A) Level contours of $b(z_1)$
for the square-wave kernel. (B) Similar level contours of $b(z_1)$ for a Gabor kernel.
(C) The superposition of level contours from square-wave and Gabor kernels.
Other alternatives, common to phase-correlation techniques, are families of filters with fixed spatial
extents but tuned to different scales, also called windowed Fourier transforms [11, 17, 20]. This section
discusses several issues, in addition to those in Section 2, that are related to the choice of filter.
The effect of bandwidth on phase stability was illustrated in Figure 4. This dependence can be
explained from the form of $z_1$ (16) and Parseval's theorem:
$$z_1 \;=\; \langle K_1(x),\, K_0(x)\rangle \;=\; \frac{1}{2\pi}\,\langle \hat K_1(k),\, \hat K_0(k)\rangle,$$
where $\hat K(k)$ denotes the Fourier transform of $K(x)$. As the bandwidth increases, the extent of the
amplitude spectrum increases, and so does the range of scales $\Delta\lambda$ over which $\hat K_0$ and $\hat K_1$ may
remain highly correlated. Similar arguments hold in space with respect to the extent of spatial support
and phase linearity. For wavelets, the stability of phase with respect to input scale changes is constant
across filters tuned to different scales, but the spatial extent over which phase is expected to be nearly
linear decreases for higher scales as a function of the support width (and therefore the wavelength)
of the filters. For windowed Fourier transforms, the support width is constant for different scales,
and hence so is the expected extent of linearity, but the stability of phase with respect to input scale
perturbations decreases at higher scales as the octave bandwidth decreases.
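A quick numerical check of this frequency-domain form of $z_1$, using the discrete analogue of Parseval's theorem for numpy's FFT convention ($\sum f\,\bar g = \frac{1}{N}\sum F\,\bar G$); the two Gabor kernels are illustrative.

```python
import numpy as np

n = 1024
x = np.arange(n) - n / 2

def gab(lam, sigma):
    g = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(2j * np.pi * x / lam)
    return g / np.linalg.norm(g)

K0, K1 = gab(16.0, 8.0), gab(18.0, 9.0)
z_space = np.sum(K1 * np.conj(K0))                              # <K1, K0> in the spatial domain
z_freq = np.sum(np.fft.fft(K1) * np.conj(np.fft.fft(K0))) / n   # same value via the spectra
print(abs(z_space - z_freq))                                    # ~0 (discrete Parseval)
```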
But does a simultaneous increase in the extent of the kernels' support in space and its amplitude
spectrum produce better scale stability or more extensive linearity through space? For example, compared
to the Gaussian which minimizes a measure of simultaneous extent (the uncertainty relation),
the modulated square-wave kernel (24) has relatively large simultaneous extents in space and frequency
domain. Its amplitude spectrum is particularly broad, and therefore we might expect better stability
than with comparable Gabor filters (e.g., with $\beta \approx 1.5$).
The expected phase behaviour for the modulated square-wave filter (24) is shown in Figure 7. Its
mean phase behaviour, given by arg[z 1 ] , is very much like that exhibited by Gabor filters in Figure 4A
and is not shown. Figure 7A shows the scale-space behaviour of $b(z_1)$ for the modulated square-wave
kernel (24), for which $z_1$ also has a closed form in terms of $\Delta x$, $\Delta\lambda$, and $\Delta k$ (see Appendix C.2 and footnote 8),
where $\Delta\lambda$ and $\Delta k$ are defined above.
For comparison, Figure 7B shows level contours of $b(z_1)$ for a Gabor kernel, and Figure 7C shows
the superposition of the level contours from Figures 7A and 7B. These
figures show that the distribution of $\Delta\phi$ about the mean for the modulated square-wave kernel is two
to three times larger near $p_0$, which suggests poorer stability and poorer linearity. Note that the
innermost contour of the Gabor filter clearly encloses the innermost contour of the square-wave filter.
This Gabor filter handles scale perturbations of 10% with an expected phase drift of up to $\pm 3.6\%$ of
a wavelength, while a perturbation of 10% for the modulated square-wave kernels gives $b(z_1) = 0.15\pi$,
which amounts to a phase difference of about $\pm 7.5\%$ of a wavelength.
The poorer phase stability exhibited by the modulated square-wave kernel implies a wider distribution
of measurement errors. Because of the phase drift due to scale changes, even with perfect phase
matching, the measurements of velocity and disparity will not reflect the projected motion field and
the projected disparity field as reliably. The poorer phase linearity affects the accuracy and speed of
the disparity predictor, requiring more iterations to match the phase values between views. Moreover,
we find that the larger variance also causes a reduction in the range of disparities that can be measured
reliably from the predictor. The poorer linearity exhibited in Figure 7 also contradicts a claim in [30]
that modulated square-wave filters produce more nearly linear phase behaviour.
Wider amplitude spectra do not necessarily ensure greater phase stability. Phase stability is the
result of correlation between kernels at different scales. The shapes of both the amplitude and phase
spectra will therefore play significant roles. The square-wave amplitude spectrum is wide, but with
considerable ringing, so that $|z_1|$ falls off quickly with small scale changes.
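This falloff can be checked directly: the sketch below computes $|z_1|$ under small scale changes for the modulated square-wave window (24) and, for comparison, for a one-octave Gabor kernel (no attempt is made to match bandwidths exactly); the wavelengths are arbitrary.

```python
import numpy as np

def square_wave_kernel(lam, n=4096):
    x = np.arange(n) - n / 2
    k = 2.0 * np.pi / lam
    g = np.where(np.abs(x) <= lam / 2.0, np.exp(1j * k * x), 0.0)
    return g / np.linalg.norm(g)

def gabor_kernel(lam, beta=1.0, n=4096):
    x = np.arange(n) - n / 2
    k = 2.0 * np.pi / lam
    sigma = (2.0**beta + 1.0) / ((2.0**beta - 1.0) * k)
    g = np.exp(-x**2 / (2.0 * sigma**2)) * np.exp(1j * k * x)
    return g / np.linalg.norm(g)

def abs_z1(make_kernel, lam0, dlam_octaves):
    """|<K1, K0>| for two kernels of the same family at nearby scales."""
    K0, K1 = make_kernel(lam0), make_kernel(lam0 * 2.0**dlam_octaves)
    return abs(np.sum(K1 * np.conj(K0)))

for d in (0.05, 0.1, 0.2):
    print(d, abs_z1(square_wave_kernel, 64.0, d), abs_z1(gabor_kernel, 64.0, d))
```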
Another issue concerning the choice of filter is the ease with which phase behaviour can be accurately
extracted from a subsampled encoding of the filter output. It is natural that the outputs
of different band-pass filters be quantized and subsampled to avoid an explosion in the number of
bits needed to represent filtered versions of the input. However, because of the aliasing inherent in
subsampled encodings, care must be taken in subsequent numerical interpolation/differentiation. For
example, we found that, because of the broad amplitude spectrum of the modulated square-wave and
its sensitivity to low frequencies, sampling rates had to be at least twice as high as those with Gabors
(with comparable bandwidths) to obtain reasonable numerical differentiation. If these issues
are not considered carefully, they can easily cause greater problems in phase-based matching than
differences in stability or linearity between kernels. To alleviate some of these problems Weng [30]
presmoothed the input signal with a Gaussian.

8 As $\Delta k \to 0$, this expression for $z_1$ converges to $\exp[\,i\,\Delta x\, k_0\,]$ times the (triangular) autocorrelation of the square-wave window.
5.4 Amplitude Stability
Although our main concern is phase behaviour, it is also of interest to consider the expected scale-space
behaviour of amplitude. Towards this end, using the same arguments as above for white noise inputs,
it is shown in Appendix B that the expected (mean) amplitude variation as a function of scale-space
position is constant, independent of the direction of $p_1 - p_0$. The expected absolute magnitude of
amplitude differences, like phase variations about the mean, will depend on the relative magnitudes
of z 1 S 0 and R 1 . We expect the size of amplitude variations to increase for greater differences in
scale-space distance.
This implies that amplitude often varies slowly through scale-space. However, while level phase contours
exhibit predominantly vertical structure, level amplitude contours will occur at all orientations,
and are therefore not consistently stable with respect to dilations between inputs. This variability is
evident in Figure 1D compared to Figure 1E.
5.5 Multiple Dimensions
Finally, although beyond the scope of the current work, it is important to note that this basic frame-work
can be extended to consider the stability of multidimensional filters with respect to other types
of geometric deformation. In particular, we are interested in the phase behaviour of 2-d oriented filters
with respect to small amounts of shear and rotation as well as scale changes. This analysis can be
done, as above, using the cross-correlation between a generic kernel and a series of deformations of it.
In this way, quantitative approximations can be found to predict the expected degree of phase drift
under different geometric deformations of the input.
6 Singularity Neighbourhoods
The above analysis gives quantitative bounds on the expected stability of phase through scale and its
linearity through space. But from Figure 1 it is clear that phase stability is not uniform throughout
scale-space; some regions exhibit much greater instability in that the phase contours are nearly horizontal
and not vertical as desired. Jepson and Fleet [14] explained that this phase instability occurs
in the neighbourhoods of phase singularities (locations in space-time where the filter output passes
through the origin in the complex plane). In terms of (14), $S_0$ is zero at a singularity, and the response
in a singularity neighbourhood is dominated by the residual term $R_1$. Zeros of $S(x;\lambda)$ appear as
black spots in Figure 1B.
From the analysis described in [6, 14], the singularity neighbourhoods and the nature of phase
instability can be characterized in terms of properties of the complex logarithm of the filter response,
$\log S(x;\lambda)$, and its x-derivative:
$$\frac{\partial}{\partial x}\,\log S(x;\lambda) \;=\; \frac{\rho_x(x;\lambda)}{\rho(x;\lambda)} \;+\; i\,\phi_x(x;\lambda).$$

Figure 8. Detection of Singularity Neighbourhoods: (A) Phase contours that survive the
constraints. (B) Phase contours in regions removed by the constraints. The stability constraint in (28)
was used with $\tau = 1.25$.
The imaginary part $\phi_x(x;\lambda)$ gives the local (instantaneous) frequency of the response [2, 26], that
is, the instantaneous rate of modulation of the complex signal. The real part $\rho_x(x;\lambda)/\rho(x;\lambda)$ is
the relative amplitude derivative. It can be shown that the instantaneous frequency of the filter
output $\phi_x(x;\lambda)$ is expected to be within the passband of the filter, and the relative amplitude derivative
$\rho_x(x;\lambda)/\rho(x;\lambda)$ is expected to be near zero [6]. The behaviour exhibited in singularity neighbourhoods
is, however, quite different. As $|S(x;\lambda)|$ decreases to zero, $|\log S(x;\lambda)|$ increases without bound,
as do the magnitudes of $\phi_x(x;\lambda)$ and/or $\rho_x(x;\lambda)/\rho(x;\lambda)$. This leads to a simple method for
detecting singularity neighbourhoods so that unreliable measurements of binocular disparity and image
velocity can be detected. Jepson and Fleet [14] used two constraints, one on the local frequency
of response, and one on the magnitude of the amplitude derivative. Here we combine them into one:
$$\frac{1}{\sigma_k(\lambda)}\,\left|\;\frac{\partial}{\partial x}\log S(x;\lambda) \;-\; i\,k(\lambda)\;\right| \;\le\; \tau, \qquad (28)$$
where $k(\lambda)$ and $\sigma_k(\lambda)$ are given in (11a) and (11b). As $\tau$ decreases, this constraint detects
larger neighbourhoods about the singular points. In effect, the left-hand side of (28) reflects the inverse
scale-space distance to a singular point.
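A sketch of this detection step, under the assumption that the combined constraint compares $|\partial_x \log S(x;\lambda) - i\,k(\lambda)|$ against $\tau\,\sigma_k(\lambda)$ as in (28) above; the derivative is estimated with centred finite differences, and the parameter values and differencing scheme are illustrative.

```python
import numpy as np

def singularity_neighbourhood_mask(S, lam, beta=0.8, tau=1.25):
    """Flag samples of a complex band-pass response S(x; lam) that violate the
    stability constraint (cf. eq. (28)), i.e. likely singularity neighbourhoods."""
    k = 2.0 * np.pi / lam
    sigma_k = k * (2.0**beta - 1.0) / (2.0**beta + 1.0)
    # centred finite-difference estimate of (d/dx) log S = rho_x/rho + i*phi_x
    dlogS = np.zeros_like(S)
    with np.errstate(divide='ignore', invalid='ignore'):
        dlogS[1:-1] = (S[2:] - S[:-2]) / (2.0 * S[1:-1])
    lhs = np.abs(dlogS - 1j * k) / sigma_k
    unstable = ~(lhs <= tau)          # NaN/inf (exact zeros of S) count as unstable
    unstable[[0, -1]] = True          # no derivative estimate at the borders
    return unstable

# example (hypothetical usage with a previously computed Gabor response):
# mask = singularity_neighbourhood_mask(np.convolve(I, kern, mode='same'), lam=16.0)
```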
Figure 8 shows the application of (28) to the scale-space in Figure 1. Figure 8A shows the phase
contours that survive the constraint, which are predominantly stable; the large black regions are the
singularity neighbourhoods detected by (28). Figure 8B shows those removed by the constraint, which
amounts to about 20% of the entire scale-space area. With respect to the quantitative approximations
to phase behaviour presented in Section 4, we reported that statistics of mean phase differences and
the absolute variation about the mean agreed closely with the bounds. When the phase behaviour in
singularity neighbourhoods is ignored, so that the statistics are gathered only from outside of such
singularity neighbourhoods, we find that the magnitude of the variation of $\Delta\phi$ about the mean is
generally less than half of that predicted by the bound in (20). This detection of unstable regions
is essential to the reliable performance of phase-based matching techniques, and it can be used to
improve the performance of zero-crossing and phase-correlation techniques [11, 17, 21, 22, 29].
7 Natural Images
Unlike white noise, the Fourier harmonics of natural images are often correlated across scales, and
their amplitude spectra typically decay something like $1/k$ [5]. Both of these facts affect our results
concerning phase stability, the accuracy of phase matching, and instability due to singularity
neighbourhoods.
First, because the amplitude spectra decay with spatial frequency, the filter responses will be biased
to lower frequencies (as compared to white noise). As a result, care is required to ensure that the
filter outputs do not contain too much power at low frequencies. Otherwise, there may be a) more
distortion due to aliasing in a subsampled representation of the response, and b) larger singularity
neighbourhoods, and hence a sparser set of reliable measurements. These problems are evident when
comparing the modulated square-wave filters with Gabor filters (of similar bandwidths) because the
former have greater sensitivity to low frequencies. Second, without the assumption of white noise we
should expect $z_1 S_0$ and $R_1$ in (14) to be correlated. This will lead to improved phase stability when
$R_1$ and $S_0$ remain in phase, and poorer stability when they become systematically out of phase.
Although we lack a sufficient model of natural images, in terms of local structure, to provide a
detailed treatment of phase stability on general images, several observations are readily available: For
example, with many textured image regions the phase structure appears much like that in Figure
1. We find that the noise-based analysis provides a good model of the expected phase behaviour for
complex structures that regularly occur in natural images.
Moreover, it appears that phase is even more stable in the neighbourhoods of salient image features,
such as those that occur in man-made environments. To see this, note that the output of a filter in a
small region can be viewed as a weighted sum of harmonics. In the vicinity of localized image features
such as edges, bars and ramps, we expect greater phase stability because the phases of the input
harmonics (unlike the white noise) are already coincident. This is clear from their Fourier transforms.
Therefore changing scales slightly, or adding new harmonics at the high or low ends of the passband
will not change the phase of the output significantly.
It is also worth noting that this phase coincidence at the feature locations coincides with local
maxima of the amplitude response [23]. When different harmonics are in phase their amplitudes
combine additively. When out of phase they cancel. Therefore it can be argued that neighbourhoods
of local amplitude maxima correspond to regions in which phase is maximally stable (as long as
the signal-to-noise-ratio is sufficiently high). This is independent of the absolute phase at which the
different harmonics coincide, 9 and is significant for stable phase-based matching.
As one moves away from salient features, such as edges, the different harmonics may become
increasingly out of phase, the responses from different features may interfere, and the amplitude of
response decreases. This yields two main types of instability: 1) where interference from nearby
features causes the total response to disappear at isolated points; and 2) where large regions have
very small amplitude and are dominated by noise. The first case amounts to a phase singularity and
is detectable using the stability constraint (28). In the second case, the phase behaviour in different
views may be dominated by uncorrelated noise, but will not necessarily violate the stability constraint.
For these situations a signal-to-noise constraint is necessary.
To illustrate these points, Figure 9 shows the Gabor scale-space expansion of a signal containing
several step edges (Figure 9A); the signal contains two bright bars a fixed distance apart. The
scale-space plots were generated with Gabor filters. Figures 9B and 9C show the amplitude response
and its level contours superimposed on the input signal (replicated through scale) to show the
relationship between amplitude variation and the edges. Figures 9D and 9E show the scale-space
phase response and its level contours, and Figures 9F and 9G show the contours that remain after the
detection of singularity neighbourhoods using (28) with τ = 1.25, and the phase contours in the
neighbourhoods detected by (28). As expected, phase is stable near the edge locations where the local
Fourier harmonics are in phase, and arg[R_1] remains nearly constant over
a wide range of scales. The similarity of the interference patterns between the different edges to the
singularity neighbourhoods shown in Figures 1 and 8 is clear, and these regions are detected using the
stability constraint. Also detected are the regions relatively far from the edges where the amplitude
and phase responses of the filter both go to zero.
However, as explained above, regions in which the filter response decreases close to zero are also
very sensitive to noise. These regions can become difficult to match since uncorrelated noise between
two views can dominate the response. To illustrate this, we generated a different version of the scale-space
in which uncorrelated noise was added to the input independently before computing each scale
(to simulate uncorrelated noise added to different views). The response to the independent noise
patterns satisfied the stability constraint much of the time, but the phase structure was unstable
(uncorrelated) between scales. Figure 9H shows the regions detected by the stability constraint in this
case. As discussed, the regions of low amplitude in the original are now dominated by the response to
the noise and are no longer detected by (28). Another constraint on the signal-to-noise ratio of the
filter output appears necessary. For example, Figure 9I shows the regions in which the amplitude of
the filter output is 5% or less of the maximum amplitude at that scale. This constraint in conjunction
9 Morrone and Burr [23] argue that psychophysical salience (of spatial features) correlates well with phase coincidence
only for certain absolute values of phase, namely, integer multiples of π/2, which are perceived as edges and bars of
different polarities.
with those discussed earlier are sufficient to obtain phase stability comparable to that in the noiseless
case (Figure 9J).
This also demonstrates the fact that an amplitude constraint alone does not serve our purposes, for
the constraints on instantaneous frequency and amplitude derivative detect different regions. Although
the regions detected by a single threshold applied directly to ρ(x, λ) will eventually include the regions
detected by (28) if the threshold is large enough, they will also enclose regions of stable phase behaviour.
A single amplitude constraint will remove more of the signal than necessary if relied on to detect all
the instabilities.
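As a rough illustration of how the stability constraint and a simple amplitude (SNR) constraint can be combined, the sketch below extends the earlier one. The 5% amplitude fraction follows the example discussed above; the remaining parameters are assumptions, not values from this work.

```python
import numpy as np

def reliability_mask(S, dx, k, sigma_k, tau=1.25, amp_frac=0.05):
    """Combine a stability constraint of the form (28) with an amplitude test:
    keep samples that satisfy the constraint AND whose amplitude exceeds a
    fixed fraction of the maximum amplitude at this scale.  The 5% fraction
    and the other constants are illustrative, tunable assumptions.
    """
    eps = 1e-12
    dlogS = np.gradient(S, dx) / (S + eps)
    stable = np.abs(dlogS - 1j * k) <= sigma_k * tau   # stability constraint
    amp = np.abs(S)
    strong = amp >= amp_frac * amp.max()               # simple SNR/amplitude test
    return stable & strong
```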
8 Further Research
Finally, it is clear that not all variations between two views of a scene are accounted for by affine
geometric deformation and uncorrelated Gaussian noise. Other factors include contrast variations due
to shading and shadows, specular anomalies, effects of anisotropic reflectance, and occlusion boundaries
[6]. Although the behaviour of phase in these cases is not examined here, below we outline some of
the main problems that require investigation.
Many contrast variations are smooth compared to surface texture. Thus, we expect the phase
information from filters tuned to higher frequency to be insensitive to the contrast variations. In other
cases, the contrast becomes a source of significant amplitude modulation in the input, which manifests
itself in the phase behaviour of the filter output. Thus, some illumination changes will perturb
the disparity measurements in a non-trivial way.
Another difficulty for phase-based techniques, as well as most other matching techniques, is that
the local intensity structure in one view of a 3d scene may be very different from that in another view
of the same scene. Two obvious examples are occlusions, in which a surface may be visible in only
one of the two views, and specular phenomena, in which case highlights may be visible from only one
of two views. In both of these cases we should not expect the two views to be highly correlated in
the usual sense; the difference between views is not well modelled by uncorrelated Gaussian noise.
Moreover, we do not expect these cases to be detected by our stability constraint.
Another topic that is left for further research concerns measures of confidence for phase-based
disparity measures. In our work we have found several ways of detecting unreliable measurements.
One is the stability measure, and another is some form of SNR constraint. A further constraint that
is useful in detecting poor matches, such as those due to occlusion, is a correlation measure between
matched regions. The use of these measures, and how best to combine them into a single measure, has not been
addressed in sufficient detail.
9 Discussion and Summary
This paper examines the robustness of phase-based techniques for measuring image velocity and binocular
disparity [7, 8, 11, 13, 15, 17, 18, 25, 27, 30, 32, 33]. Our primary concerns are the effects of
Figure 9. Gabor Scale-Space Expansion With Step-Edge Input: The input signal (A) consists
of two bars (intensity plotted against pixel location). Vertical and horizontal axes of the Gabor
scale-space represent log scale and spatial position. (B) and (C) ρ(x, λ) and its level contours, the
latter superimposed on the input to show the relative location of the edges; (D) and (E) φ(x, λ) and
its level contours; (F) and (G) level phase contours that survive the stability constraint (28), and
those detected by it; (H) regions of the "noisy" scale-space that were detected by (28); (I) regions
detected by a simple amplitude constraint; (J) the level phase contours that survive the union of
constraints in (H) and (I).
the filters and the stability of phase with respect to typical image deformations that occur between
different views of 3-d scenes. Using a scale-space framework it was shown that phase is generally
stable with respect to small scale perturbations of the input, and quasi-linear as a function of spatial
position. Quantitative measures of the expected phase stability and phase linearity were derived for
this purpose. From this it was shown that both phase stability and linearity depend on the form of
the filters and their frequency bandwidths. For a given filter type, as the bandwidth increases, the
extent of the phase stability increases, while the spatial extent over which phase is expected to be
linear decreases. In the context of disparity measurement, the bandwidth of the filters should therefore
depend, in part, on the expected magnitude of deformation between left and right views, since the
potential accuracy of phase-based matching depends directly on phase stability.
One of the main causes of instability is the occurrence of phase singularities, the neighbourhoods of
which exhibit phase behaviour that is extremely sensitive to input scale perturbations, small changes
in spatial position, and small amounts of noise. Phase behaviour in these neighbourhoods is a source of
significant measurement error for phase-difference and phase-gradient techniques, as well as gradient-based
techniques, zero-crossing techniques, and phase-correlation techniques. Fortunately, singularity
neighbourhoods can be detected automatically using a simple constraint (28) on the filter output.
This stability constraint is an essential component of phase-based methods. A second constraint is
also needed to ensure a reasonable signal-to-noise ratio.
This basic approach can also be used to examine the stability of multi-dimensional filters to other
types of geometric deformation, such as the stability of 2-d oriented filters with respect to local affine
deformation (rotation, shear, and dilation). As explained here, we may consider the behaviour of
phase information using the cross-correlation of deformed filter kernels z_1 as a function of rotation
and shear in addition to the case of dilation on which we concentrated in this paper. In this way,
quantitative approximations can be found to predict the expected degree of phase drift under different
geometric deformations of the input.
Acknowledgements
We are grateful to Michael Langer for useful comments on earlier drafts of this work. This research
has been supported in part by the Natural Sciences and Engineering Research Council of Canada, and
the Ontario Government under the ITRC centres.
--R
Performance of optical flow techniques.
Estimating and interpreting the instantaneous frequency of a signal.
Object tracking with a moving camera.
Introduction to the Theory of Random Signals and Noise
Relations between the statistics of natural images and the response properties of cortical cells.
Measurement of Image Velocity
Computation of component image velocity from local phase information.
The design and use of steerable filters.
Theory of communication.
Direct estimation of displacement histograms.
Robust Statistics
The measurement of binocular disparity.
Phase singularities in scale-space
Fast computation of disparity from phase differences.
The structure of images.
The phase correlation image alignment method.
Vertical and horizontal disparities from phase.
Multiple motion from instantaneous frequency.
Multifrequency channel decomposition of images and wavelet models.
A computational theory of human stereo vision.
Computational studies toward a theory of human stereopsis.
Feature detection in human vision: a phase-dependent energy model
Research in Binocular Vision
Stereo disparity computation using Gabor filters.
Shiftable multiscale transforms.
Convected activation profiles: Receptive fields for real-time measurement of short-range visual motion
A theory of image matching.
Preattentive gaze control for robot vision.
Hierarchical phase-based disparity estima- tion
A multiresolution stereopsis algorithm based on the Gabor representation.
Scaling theorems for zero-crossings
| local phase information;geometric deformations;frequency bandwidths;initial filters;image processing;robustness;image recognition;binocular disparity;filtering and prediction theory;3D scene;image velocity;zero-crossing tracking;spatial position;phase singularities;frequency stability;phase stability;optical flow |
628676 | Cooperative Robust Estimation Using Layers of Support. | Abstract: We present an approach to the problem of representing images that contain multiple objects or surfaces. Rather than use an edge-based approach to represent the segmentation of a scene, we propose a multilayer estimation framework which uses support maps to represent the segmentation of the image into homogeneous chunks. This support-based approach can represent objects that are split into disjoint regions, or have surfaces that are transparently interleaved. Our framework is based on an extension of robust estimation methods that provide a theoretical basis for support-based estimation. We use a selection criterion derived from the Minimum Description Length principle to decide how many support maps to use in describing an image. Our method has been applied to a number of different domains, including the decomposition of range images into constituent objects, the segmentation of image sequences into homogeneous higher-order motion fields, and the separation of tracked motion features into distinct rigid-body motions. | Introduction
Real-world perceptual systems must deal with complicated and cluttered environments. To
succeed in such environments, a system must be able to recover salient parameters that are
relevant for the task at hand. However many of the techniques developed to estimate these
parameters are designed for homogeneous signals, in which all the data comes from a single
source that can be described by a single model. They often perform poorly on data which
intermingles several sources with different underlying models or model parameters. To apply
these techniques to complex environments, we must decompose the heterogeneous sensory
signal into its constituent homogeneous chunks either before, or during, the estimation
process.
This paper addresses the heterogeneous estimation task, and presents an approach to the
problem using a support-based model of segmentation. In our approach, explicit boolean
masks, called "support maps" or "support layers", are used to represent the extent of regions
in an image. The use of this representation has several advantages over more traditional
edge-based segmentation models, since it allows an unrestricted model of how regions may
be occluded.
We will assume that we have a set of models capable of describing all regions of the
signal, and that we can find the residual error of a particular model over a given region
of the signal. (This work was performed under ONR contract N00014-93-J-0172.)
Our method uses the residual error values of several estimates to determine
where each estimate is accurately approximating the signal, and from this computes a set
of support maps.
The following section motivates the use of support maps as a framework for segmentation,
generalizing the concept of support found in the robust estimation literature. We will then
discuss the Minimum Description Length (MDL) criterion, which provides a mechanism for
deciding how many objects or processes exist in a signal, and thus how many support maps
to use in estimating that signal. Finally, we will show results using this approach in several
applications for which appropriate image models are available, including shape and motion
segmentation tasks.
2 Background
A method for heterogeneous description must have a model of how objects or surfaces are
combined in the image, as well as models of the object or surface processes themselves. The
issue of how to fully represent the former, the segmentation of a scene, is often neglected
in computer vision; in practice, having an expressive segmentation model is as important as
having a good model of the underlying data.
Traditionally, edge-based approaches to segmentation have been used. Often a fixed
"edge-detector" is run to find the edges of an image as a first stage of processing, which
are then used to mark the boundaries of regions from which parameters are estimated. The
line process, introduced by Geman and Geman [10] and later expanded upon by [28, 32, 6],
merged these two steps into a single regularization/reconstruction framework. This approach
simultaneously estimates an interpolated surface, which regularizes the data according to
a prior model of surface smoothness, and a discontinuity field, which indicates allowed departures
from the smoothness constraint. This discontinuity field, called the "line-process",
allows for the successful reconstruction of scenes that contain simple occlusion boundaries,
and has been used in several different domains of visual processing.
With complicated occlusion, however, an edge-based segmentation model is inadequate:
when an object is split into disconnected regions on the image plane an edge-based segmentation
prevents the integration of information across the entire object during the segmentation
process. A good example of this type of phenomenon is the transparent motion display developed
by Husain, Treue and Andersen for psychophysical experimentation [16]. In these
examples, dots are placed on an otherwise transparent cylinder which rotates around its major
axis. Two populations of intermingled random dots are seen, one corresponding to the
foreground surface of the cylinder and the other corresponding to the background. When
presented with such transparent displays, an algorithm that extracts motion information
using an edge-based segmentation method will not be able to group the two populations
of random dots. As the results of Husain et al. show, human and animal observers have
no difficulty in grouping the two motions. The same phenomenon can be found with static
imagery: an object viewed through a fence or trees may be broken into several disjoint
regions on the image plane. The regions of this image could not be grouped together using
a line-process or other mechanism that relies solely on an edge map for segmentation.
Because of these limitations, we believe an edge-based segmentation model is insufficient
for realistic natural scenes. Instead, we have explored a more descriptive approach in which
we explicitly represent the shape of a region using a support map. A support map places
no restrictions on the connectivity or shape of a region, so it can handle the cases described
above. As we shall see in the next section, the use of a single support map to deal with
occlusion and sensor noise has already been developed in the robust statistics literature.
However, a single support map can only represent a single object, and we wish to work
with scenes with multiple objects and complex occlusions. Our contribution is to extend
this robust estimation paradigm to encompass multiple processes, using a different support
map for each object in the scene. In our approach, we represent the segmentation of a
scene as a set of support maps, each corresponding to a distinct homogeneous (but possibly
disconnected) region.
Other authors have proposed approaches to segmentation which go beyond single edge-map
representations. Nitzberg and Mumford propose a multi-layer signal representation
which they call "The 2.1D sketch", in which the edges of each object occupy a distinct
layer [24]. Adelson and Anandan proposed multi-layer representations for the modeling of
static transparency [1]. Leclerc [19] has developed a region grouping strategy using MDL
theory which can link disjoint regions but is dependent on an initial edge-based description
stage to find candidate regions. Marroquin [22] has extended the Markov Random Field
formulation to include a notion of disconnected support, and has shown results using simple
piecewise-constant models.
Our method has much in common with these approaches: the notion of multi-layer
representations, the use of some mechanism to manage model complexity, and the direct use
of support in the estimation process. Our work provides a direct connection between layers
and robust estimation, can handle cases of transparency not addressed in the above work,
and allows for the integration of different description strategies via the MDL framework.
Our approach is derived from the robust statistics and estimation literature, based on the
M-estimation framework and the notion of finding segmentation through minimal length
description.
2.1 Robust estimation
Robust estimation methods have become popular for image processing, since they have
been found to be tolerant to occlusion and other outlier contamination [23, 5]. The use
of a support map for estimation is simply an instance of outlier rejection, which is a well
known robust estimation method. In this approach a confidence factor is used to weight
the contribution of each point to the estimation. The confidence value is itself iteratively
updated based on the residual error of the current fit. Formally, this type of estimation is
known as M-estimation [15].
M-estimators are maximum likelihood estimators which allow an arbitrary error norm.
Given an image data vector d and a model y(x), we wish to find the parameters x which are
most likely to have generated the observed data d. Assuming a probability
distribution in which each observation is independent,

    P(d | y(x))  =  Π_j exp( −ρ( d_j − y_j(x) ) ),                                  (1)

where ρ() is specified a priori, an M-estimator attempts to find the optimal x that
maximizes P(d | y(x)). With ρ() taken to be the L_2 norm, corresponding to the Gaussian
distribution, finding the maximum likelihood estimate is a linear problem. To be insensitive
to outliers, an error norm which decreases the influence of high-error points more rapidly
than a squared error norm must be used. However, estimating parameters for robust norms
is much more difficult, since in general there are no analytic solutions to the normal equations
for an arbitrary ρ().
One solution technique which has been presented for solving the M-estimation problem is
Iteratively Reweighted Least Squares (IRLS) [4]. This algorithm approximates an arbitrary
norm by the iterative application of a least-squares norm with a dynamic weighting factor
on each point. A support vector s (which we call a support map when working with image
data) contains these weights and is computed based on the residual error at each point:

    s_j  =  ψ(r_j) / r_j ,                                                          (2)

where r_j is the residual error of the approximation at point j, and ψ() is a
weighting function computed by differentiating the error norm:

    ψ(r)  =  ∂ρ(r) / ∂r .                                                           (3)

Given a model of linear basis functions in a matrix B, a weighted least squares estimate of
the parameter vector x can be obtained by solving the equation

    B^T W B x  =  B^T W d ,                                                         (4)

where W is a diagonal matrix with s on the diagonal. In the IRLS method, each time a
new x is estimated, the residual error and support weights are recomputed.
With a robust error norm, points that deviate excessively from an initial fit have their
significance down-weighted in subsequent iterations. A well-known robust norm is Huber's norm,
which blends the L_1 and L_2 norms. For a segmentation problem, a redescending
norm that completely excludes outlier points is appropriate. For reasons of computational
efficiency, we have used a "censored" robust norm that combines L_2 weighting for points
whose residual error is less than a given threshold θ, and zero weight for "outlier" points
that exceed that threshold:

    ρ(r) = r²  if |r| < θ ;     ρ(r) = θ²  otherwise.                               (5)

Collapsing Eqs. (2) and (5), we can concisely express the support threshold rule as:

    s_j = 1  if |r_j| < θ ;     s_j = 0  otherwise.                                 (6)
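The following is a minimal sketch, under stated assumptions, of IRLS with a censored norm of the kind in Eqs. (5) and (6) for a linear basis model; the basis matrix, threshold, and toy data are illustrative and not taken from this work.

```python
import numpy as np

def irls_censored(B, d, theta, n_iters=20):
    """Iteratively Reweighted Least Squares with a censored L2 norm
    (cf. Eqs. 2-6): points whose residual exceeds theta get zero support.
    B : (n, p) matrix of basis functions evaluated at each data point
    d : (n,)  observed data vector
    Returns the parameter estimate x and the final boolean support map s.
    """
    n = d.size
    s = np.ones(n)                               # start with full support
    x = np.zeros(B.shape[1])
    for _ in range(n_iters):
        W = np.diag(s)                           # support weights on the diagonal
        # weighted least squares:  B^T W B x = B^T W d   (cf. Eq. 4)
        x = np.linalg.lstsq(B.T @ W @ B, B.T @ W @ d, rcond=None)[0]
        r = d - B @ x                            # residual error at each point
        s = (np.abs(r) < theta).astype(float)    # support threshold rule (Eq. 6)
    return x, s.astype(bool)

# toy usage: fit a line to data contaminated by a shifted (occluding) segment
rng = np.random.default_rng(0)
u = np.linspace(0, 1, 200)
B = np.column_stack([np.ones_like(u), u])        # constant + linear basis
d = 2.0 + 3.0 * u + 0.05 * rng.standard_normal(u.size)
d[150:] += 4.0                                   # occluding process
x, s = irls_censored(B, d, theta=0.5)
print("estimated parameters:", x, " inliers:", s.sum())
```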
2.2 The breakdown point of traditional M-estimators
The initial conditions of an M-estimator are critical for its success. In particular, if more
than a certain number of "outlier" points are in its initial support, the M-estimator will fail.
The breakdown point, b, of an M-estimator characterizes this limit of robust performance; it
is defined as the smallest fraction of contamination which will force the value of an estimate
outside an arbitrary range [23]. If there is more contamination than the breakdown point,
the estimate will not necessarily converge to the optimal value. Thus to recover an optimal
estimate using an M-estimator, at least (1 − b) of the initial support must cover a single
homogeneous region.
The support of an M-estimator is often initialized to cover the entire image (in the
absence of any a priori knowledge); in this case the breakdown point places a severe limit
on the amount of occlusion the estimator can handle. Li [21] has shown M-estimators have
breakdown points that are less than 1/(p + 1), where p is the number of parameters in
the regression. Thus even a planar regression fit (with p = 3) cannot reliably fit a surface
when it becomes more than 25% contaminated with occluding data. Other techniques for
robust estimation can improve this number, but for a single robust estimator it must be less
than 1/2. Even at this upper limit, a robust estimator will fail on any object that is more
than half occluded in the image. In Section 3, we will present our approach to overcoming
the breakdown point of M-estimators, by integrating information across multiple robust
estimates.
2.3 Estimating Model Complexity
A critical issue in finding a heterogeneous description of a signal is deciding the appropriate
level of model complexity to use in the representation. In our case, when we allow multiple
estimators to describe a signal, we need to address the fundamental question of how many
estimators to use for a given signal. Maximum likelihood estimation provides a means for
finding the optimal parameters when the model complexity is fixed, but will not help in
deciding how many models to use, or how to compare the performance of models of various
order. We need a method for balancing model complexity with model accuracy.
In the Minimum Description Length (MDL) paradigm [33], this is formalized by the
notion that the optimal representation for a given image is found by minimizing the combined
length of encoding the representation and the residual error. (See also [37] for the related
Minimum Message Length criterion.) The theory of information laid out by Claude Shannon
provides the motivation for this approach [31]. Shannon defined "entropy" to be the lack
of predictability between elements in a representation; if there is some predictability from
one element to another, then entropy is not at its maximum, and a shorter encoding can
be constructed. When the encoding cannot be compressed further, the resulting signal
consists of "pure information." Thus if we find the representation with the shortest possible
encoding, in some sense we have found the information in the image. 1
In many cases the MDL criterion can be derived from Bayesian methods. Bayesian probabilistic
inference has a long tradition in statistics as well as computer vision and artificial
intelligence, and also incorporates the complexity vs. accuracy trade-off. Under this
paradigm, we seek to find the representation (process type and parameters) that is most
likely given some image data and prior knowledge. Each region of data has a certain probability
P(d | y(x)) of being generated by a particular representation, and each representation
occurs with a certain a priori probability P(y(x)). Through Bayes' theorem,
P(y(x) | d) = P(d | y(x)) P(y(x)) / P(d), we compute the probability P(y(x) | d) that a
representation accounts for a given region.
The Maximum a Posteriori (MAP) principle dictates that we should pick the representation
that maximizes this Bayesian likelihood.
1 Similarly, simplicity and parsimony have been recognized as essential to notions of representation since
the pioneering work of the Gestalt psychologists in the early twentieth century [17]. The minimum principle
[11] holds that the best representation is the one that accounts for the data with the simplest model. The
simplest representation is defined as the one with the shortest representation given a set of transformation
rules allowed on the data; the search for rules that agreed with human perception was a central focus of
their work. Recent researchers in this tradition have used structural rules to define simplicity [20] as well as
process models [3].
The MAP choice is also the minimal encoding of the image when we use an optimal
code. Given prior probabilities of various representations, the optimal encoding of each
representation can be shown to use −log_2 P(y(x)) bits, and the deviation of a region from
a representation can similarly be encoded in −log_2 P(d | y(x)) bits. Selecting the minimal
encoding thus amounts to minimizing −log_2 P(y(x)) − log_2 P(d | y(x)) over the candidate representations.
Thus, when we have "true priors" to use, and can find the Shannon-optimal codes,
minimal encoding is equivalent to MAP. When these conventional priors are not obvious,
the minimal encoding framework provides us with a method of approximating them: we
pick the best practical representation we have available. As pointed out by Leclerc, this
method is useful in vision problems because it gives us a way to produce estimates using
image models that are too complex for calculation of direct priors [18].
We adopt the MDL approach for evaluating a representation, since we have no direct
access to the relevant priors. Using the encoding cost of a representation is a natural bias for
an information processing system that has finite resources; when used with the appropriate
scene models, this framework provides an intuitive and powerful method for recovering
non-trivial representations.
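As a toy illustration of the two-part coding trade-off described above, the sketch below compares description lengths for two competing models of a small data region. The particular coding choices (32 bits per parameter, a Gaussian residual code, the parameter names) are assumptions made purely for illustration, not the coding scheme used in this work.

```python
import numpy as np

def description_length_bits(residuals, sigma, n_params, bits_per_param=32):
    """Two-part code length (in bits) for one model of a data region:
    model cost = bits to store the parameters (a crude stand-in for
    -log2 P(y(x))), plus data cost = -log2 of a Gaussian likelihood of the
    residuals (a stand-in for -log2 P(d | y(x)))."""
    model_bits = n_params * bits_per_param
    nll_nats = 0.5 * np.sum(residuals ** 2) / sigma ** 2 \
             + residuals.size * 0.5 * np.log(2 * np.pi * sigma ** 2)
    return model_bits + nll_nats / np.log(2)

# toy usage: constant model vs. linear model on a noisy ramp
rng = np.random.default_rng(1)
u = np.linspace(0, 1, 100)
d = 1.0 + 2.0 * u + 0.1 * rng.standard_normal(u.size)
r_const = d - d.mean()
r_linear = d - np.polyval(np.polyfit(u, d, 1), u)
print("constant model:", description_length_bits(r_const, 0.1, 1))
print("linear model:  ", description_length_bits(r_linear, 0.1, 2))
```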
3 Robust Estimation with Multiple Models
Our goal is to make accurate estimates of the parameters and segmentation of objects in
a scene despite complicated occlusion. As we have seen, in the case of a single object
with a relatively small amount of occlusion this can be accomplished by M-estimation, a
well known robust method. Unfortunately, the breakdown point limit of an M-estimator
restricts its utility: it will only segment out relatively small amounts of a contaminating
process. We overcome this limit by explicitly modeling the occluding processes in the signal
and sharing this information between the estimates of each process. In our approach, called
"Cooperative Robust Estimation", we use multiple robust estimators to describe a scene,
and reallocate the support between them in a cooperative fashion. Occlusion is not solely
treated as an additional noise source in the image signal; instead, occluding processes can be
modeled as "real" signals, each with a separate support map and robust estimator.
Most importantly, having multiple estimators allows us to use different models and
initial conditions in each estimator. Instead of having initial support which covers the
entire signal, an estimator can begin with a small window of support, or with a specific
set of parameters. Whereas with uniform support the breakdown-point limit prevents an
estimator from excluding more than a small fraction of the image as unsupported, with
specific initial conditions no such limit exists. For example, if an object covers only 20%
of the image, no single robust estimator which initially considers the entire image equally
could possibly estimate the object correctly. However if we allow multiple hypotheses, with
multiple initial conditions, then a hypothesis whose parameters are estimated from an initial
window of support that is localized over the object will be able to accurately recover that
object's parameters. With a sufficient number of initial hypotheses, there will be at least
one such hypothesis for each object in the scene.
To manage multiple estimates, we use a hypothesize and test paradigm, in which the
different estimators can compete to describe the signal. We feel there is intuitive appeal in
this approach: if a scene contains several processes, then it seems natural that our model of
the scene should incorporate multiple estimators.
3.1 Problem Statement
We assume the signal is composed of one or more components, each of which can be described
by a model (or models) known a priori. We do not know how many components are in the
signal, nor what their parameters or segmentation are. Formally, we assume the image d
can be described as a sum of K masked individual components, with support masks s^(k),
a model M^(k) with parameters x^(k), plus an additive noise term η with distribution N(r),
which for the examples in this paper we assume is normally distributed with zero mean and
variance σ²:

    d  =  Σ_{k=1..K}  s^(k) M^(k)(x^(k))  +  η ,

where the product with the support mask s^(k) is taken pointwise.
Each model M () is some function which can be applied to parameters x to generate an
estimated image, and for which an estimator is available to perform a weighted estimation
using a support map. For a model composed of a set of linear basis functions, we can use
an M-estimator for this task. For other models a more complicated estimator is needed
(such as the recursive structure from motion estimator used in the example in Section 5.3);
however, most estimation methods can be adapted for use with a support map without
much difficulty.
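A minimal sketch of the generative model just described, assuming 1-D data and simple constant-valued component models; the masks, parameter values, and noise level are illustrative assumptions.

```python
import numpy as np

def layered_signal(supports, models, params, sigma, rng):
    """Generate d = sum_k s^(k) * M^(k)(x^(k)) + eta  for 1-D data.
    supports : list of boolean masks (one per component)
    models   : list of callables M(x) -> full-length signal
    params   : list of parameter vectors x^(k)
    """
    n = supports[0].size
    d = np.zeros(n)
    for s, M, x in zip(supports, models, params):
        d += s * M(x)                                # masked component
    return d + sigma * rng.standard_normal(n)        # additive Gaussian noise eta

# toy usage: two constant "surfaces", the second split into disjoint pieces
rng = np.random.default_rng(2)
n = 300
u = np.arange(n)
s1 = (u < 150)
s2 = ~s1 | ((u > 40) & (u < 70))                     # disjoint support is allowed
s1 = s1 & ~s2                                        # keep the masks non-overlapping
const = lambda x: np.full(n, x[0])
d = layered_signal([s1, s2], [const, const], [[2.0], [5.0]], sigma=0.2, rng=rng)
print("generated", d.size, "samples;", int(s2.sum()), "points on layer 2")
```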
3.2 Method Overview
We tackle the problem in two steps. First, we attempt to estimate the number of components
in the signal, e.g. to estimate K. Accomplishing this also requires finding coarse estimates
of the parameters, model, and support for each component. Second, once we have obtained
an estimate of K, we revise the parameter and support estimates for each component, using
an iterative refinement procedure.
3.3 Estimation of K
We have advocated the use of a Hypothesize, Test, and Select approach to estimating the
number of objects in a signal [7, 26]. In our method a set of hypotheses is generated, each
is tested against the observed data to compute a set of support maps, and the subset of
hypotheses which optimally (in an MDL sense) accounts for the data is selected. We will
describe each of these stages in turn.
3.3.1 Initialization of Hypotheses
Hypotheses are comprised of a chosen model and associated parameters. An initial set of
hypotheses, I = {H_1, H_2, ..., H_n},
is constructed either by sampling the space of possible models and parameters, and/or by
selecting models and small windows of the data and using these to estimate a set of model
parameters. In general terms, for our system to be effective the initial set of hypotheses must
have at least one hypothesis in rough correspondence with each real object or process in the
scene. Redundant and/or completely erroneous hypotheses in the initial set are tolerated
by our method, and are expected.
Depending on how the initial set is constructed, different constraints apply to the number
of hypotheses needed in the set. If the initial set is specified by sampling parameter space,
then the sampling must be fine enough to guarantee some hypothesis will support each
object that may be in the scene. This will depend on the choice of support threshold, θ,
described below. If the initial set is specified via estimates formed from windows on the
data, then the windows must be chosen to guarantee (to some sufficient probability) that for
each object in the scene, at least one hypothesis has approximately homogeneous support
for that object, e.g. that the percentage of points not from that object is less than the breakdown
point of the estimator.
3.3.2 Hypothesis Support Testing
Given this set of initial hypotheses, with models and associated parameters, we compute a
support map for each hypothesis, using Eq. (6). This step essentially entails the computation
of an estimated surface for each hypothesis by applying its model to the parameters,
computing the difference between the estimated surface and the observed data, and then
thresholding the residual error according to the threshold parameter θ. 2
The residual threshold θ is bounded by two constraints. First, it must be large enough that
the hypothesis closest to a given object in parameter space will support a majority of the
points on that object. This ensures that a unique hypothesis can be found for each object.
If θ is too low, then multiple hypotheses may be selected for a given object, creating "false
alarms" (K will be over-estimated). Second, θ must be small enough so that any estimate
that supports a majority of points on one object will not also support a
majority of points from another object. If θ is too large, a single hypothesis may account
for two real objects in the scene, providing both a corrupted estimate and a "miss" (K will
be under-estimated.)
The selection of θ is domain-dependent, and so should be considered a free parameter of
our system. In practice we have found that the above heuristics are sufficient to determine θ,
given knowledge about the distribution of processes in the scene and the
standard deviation of the noise process, σ.
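The support-testing step can be sketched as follows for scalar-valued hypotheses; the sampled parameter levels and the value of θ are assumptions chosen for the toy data, not values used in this work.

```python
import numpy as np

def support_maps(d, predictions, theta):
    """Compute a boolean support map for each hypothesis (cf. Eq. 6):
    a point supports hypothesis k when |d_j - y_j^(k)| < theta.
    d           : (n,) observed signal
    predictions : (K, n) array, row k is the surface predicted by hypothesis k
    """
    residuals = np.abs(predictions - d[None, :])
    return residuals < theta

# toy usage: constant-value hypotheses sampled over the parameter range
d = np.concatenate([np.full(100, 2.0), np.full(100, 5.0)]) \
    + 0.2 * np.random.default_rng(3).standard_normal(200)
levels = np.arange(0.0, 8.0, 1.0)                 # sampled scalar parameters
preds = np.repeat(levels[:, None], d.size, axis=1)
S = support_maps(d, preds, theta=0.6)
print("points supported by each hypothesis:", S.sum(axis=1))
```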
3.3.3 Selection of Minimal Description subset
Once we have a set of hypotheses and their regions of support in the image, we find the
subset of these estimates which best describes the entire scene. In the initial set there
2 The framework allows different models to use different threshold parameters, as larger thresholds are
counterbalanced by lower entropy savings in the selection stage. However in the examples presented here
we have kept θ constant across models to simplify the presentation.
will be much redundancy: each object may be covered by several hypotheses, and many
hypotheses will be covering regions of the scene that do not correspond to a single object.
Our approach to this problem is to find the set of hypotheses which most parsimoniously
describes the image signal.
Given the set of initial hypotheses, we search for the subset which is the minimal length
encoding of the signal. Ideally, this solution will consist of one hypothesis for each object in
the scene. For a subset H ⊂ I whose hypotheses represent opaque regions of the image, the description
length function L(d | H) can be defined as the trivial point-by-point encoding of the image
minus the encoding savings offered by that subset:

    L(d | H)  =  L(d)  −  S(d | H).                                                 (12)

Since L(d) is constant with respect to H in Eq. (12), the minima of L(d | H) will correspond
to the maxima of S(d | H), the encoding savings of all hypotheses in H.
We define the encoding savings for a hypothesis to be the sum of the savings over the
supported points in that hypothesis, less an overhead term. Assuming H is a partition and
thus there is no overlap in the support of member hypotheses, we define the savings for a
set of hypotheses to be the sum of the savings of the individual hypotheses:

    S(d | H)  =  Σ_{H_i ∈ H} ( Σ_j s_j^(i) h_j^(i)  −  α ),                         (13)

where h_j^(i) is the encoding saved at point j by hypothesis H_i. If we have the probabilities
P(d_j | H_i) and P(H_i), then the Bayesian interpretation of h and α is simply:

    h_j^(i)  =  log_2 P(d_j | H_i)  −  log_2 P(d_j),                                (14)
    α  =  −log_2 P(H_i).                                                            (15)

Without such probabilities, we can use the encoding cost of the image in its current
representation. We assume that residual error values are uniformly distributed, and thus
save log_2 G bits, where G is the number of grey levels per pixel. The overhead α is interpreted
in an MDL context as the cost of storing the model index, model parameters, and support
map in their current representation (e.g. the number of bits used).
If the models are not a perfect approximation of the data, and/or the image has considerable
noise, the support maps in the initial hypothesis set will not perfectly reflect the
shape of objects in the scene. In this case the overhead term is useful as a measure of
how much support a hypothesis should have to be considered viable. It should be bounded
above by the smallest amount of savings we expect in a hypothesis, otherwise a real object
may be missed in the selection process and K underestimated. In addition, it acts as a
counterbalance to prevent the degenerate solution of having one hypothesis per pixel in the
image. It should thus be bounded below by the largest spurious hypothesis we expect, e.g.,
the amount of savings that can be gathered from an object despite another selected hypothesis
which already covers a majority of the support of that object. If α is too low, multiple
hypotheses may be selected for a single object, over-estimating K.
Eq. (13) expresses the encoding savings for a set of hypotheses with no overlap in
support. If a given hypothesis set is not a partition, and has elements with overlapping
support, this equation will over-estimate the joint savings of the set. As support maps will
often overlap during the intermediate states of the selection procedure, we need to augment
this expression with a term to account for support overlap.
This is a type of "credit assignment" problem: in this case overlapping support means
two different models are allowed to each take full credit for the same data point. There are
a number of possible remedies: one could arbitrarily assign the credit for a point to one of
the overlapping hypotheses, normalize the credit between them, or discount the credit for
contested points altogether. We experimented with each of these strategies, and found the
last was most reliable. (Arbitrary assignment yields unpredictable results, and normalizing
credit leads to stable solutions with duplicated hypotheses.)
Our modification to Eq. (13) is thus to introduce a term to discount the effect of
overlapping support. Our revised encoding savings function only takes credit for points
which are supported by exactly one hypothesis in the selected set:

    S(d | H)  =  Σ_{H_i ∈ H} ( Σ_j h_j^(i) [ s_j^(i) − Σ_{l≠i, H_l ∈ H} s_j^(l) ]_+  −  α ),     (16)

where [x]_+ = x for positive x and 0 elsewhere, e.g. half rectification. Points which are
contested (have multiple overlap) are not counted in the support of any hypothesis during
the selection optimization.
Our task, then, is to find a subset H of our initial set of hypotheses I which maximizes
S(d | H). To achieve this, we could employ exhaustive search to test all possible combinations
of hypotheses in our candidate set, but this would be computationally infeasible. In
practice we have found that it is sufficient to use a gradient descent method to perform
this optimization. This optimization can be embedded in a continuation method when local
minima are a problem.
3.3.4 Numerical solution
To find a maximum, we need to express S(d | H) in a differentiable form. First, we represent
H by enumerating which elements of I are in H using a vector a. The sign of each element
a_i indicates the membership of an element: a positive value means it is in H, and a
negative value means it is not. Initially a_i is set to zero for each description, representing
an "unknown" condition.
We define f(a) to be a selection function that transforms the real a values into a multiplicative
weight between 0 and 1. Ideally, f() would be the unit step function centered
at the origin. However, this hard non-linearity makes it very difficult to maximize S(d | a).
Instead, we use the "softer" sigmoid function

    f(a)  =  1 / (1 + e^(−a)).                                                      (17)

With this, we can re-express Eq. (16) as

    S(d | a)  =  Σ_i f(a_i) ( Σ_j h_j^(i) [ s_j^(i) − Σ_{l≠i} f(a_l) s_j^(l) ]_+  −  α ).        (18)

We update a with

    da_i/dt  =  C ( Σ_j h_j^(i) [ s_j^(i) − Σ_{l≠i} f(a_l) s_j^(l) ]_+  −  α ).                  (19)

This rule increments a_i whenever the hypothesis H_i is generating enough encoding savings
to offset the overhead cost penalty α, discounting any contested points. We implement this
optimization using forward-Euler discretization, with C serving as an integration constant
[30]. As shown in Appendix B, Eq. (19) will converge to a local maximum of S(d | a).
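A minimal sketch of this selection stage, under stated assumptions: the support maps S could come from the thresholding step above, and the per-point savings h could be as simple as a constant log_2 G per supported point. The step size, iteration count, and clipping below are implementation choices of this sketch, not values from this work.

```python
import numpy as np

def select_hypotheses(S, h, alpha, C=0.1, n_steps=500):
    """Gradient-ascent selection of a hypothesis subset (cf. Eqs. 16-19).
    S     : (K, n) boolean support maps
    h     : (K, n) per-point encoding savings h_j^(i)
    alpha : scalar overhead cost per selected hypothesis
    Returns a boolean vector marking the selected hypotheses.
    """
    K = S.shape[0]
    a = np.zeros(K)                                   # "unknown" initial state
    f = lambda a: 1.0 / (1.0 + np.exp(-a))            # soft selection (cf. Eq. 17)
    Sf = S.astype(float)
    for _ in range(n_steps):
        w = f(a)
        # support claimed by the *other* (softly) selected hypotheses at each point
        others = w @ Sf - w[:, None] * Sf
        credit = np.maximum(Sf - others, 0.0)         # half-rectified overlap discount
        savings = (h * credit).sum(axis=1) - alpha    # per-hypothesis net savings
        a += C * savings                              # forward-Euler update (cf. Eq. 19)
        a = np.clip(a, -50, 50)                       # keep the sigmoid well-behaved
    return a > 0
```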
In most cases, including all the examples presented in Section 4, we have found that the
local maxima found via this gradient ascent procedure corresponded to correct estimates of
K, and perceptually salient decompositions of the scene. However, if the space of support
maps is quite large and complex it is possible that spurious local maxima may exist. In
previous work [25] we have employed a continuation method on the overhead cost α when
local maxima were a problem. This continuation method initially biases the method to first
find descriptions that cover large regions of the image, and then find smaller descriptions.
Use of this continuation method has proven useful for avoiding spurious local maxima, and
in addition can improve the convergence rate.
3.4 Refinement of estimates for fixed K
The previous sections describe a method for initializing a set of robust estimators with different
initial states and selecting the subset that provides the most parsimonious description
of the image. Roughly, these steps estimate "how many" processes are in the image signal,
and coarsely determine their support and parameters.
Once we have this estimate of the number of processes, we refine the parameter and
support estimates in the hypothesis set using a cooperative update rule. In this step we
assume that the current set of hypotheses consists of one hypothesis for each actual process in
the scene, e.g. that the estimate of K is correct. We then apply a stronger rule for support
determination than that used by single robust M-estimators, by taking into account the
residual error of all the models. Generalizing the robust estimation reweighting function,
Eq. (2), we use a reweighting function that depends on the entire set of residuals in H for
a point j, as well as the local residual at that point for description k, r_j^(k):

    s_j^(k)  =  ψ_c( r_j^(k) ; r_j ) / r_j^(k) ,                                    (20)

where r_j is the set of residuals of all hypotheses in H at point j, defined as

    r_j  =  ( r_j^(1), r_j^(2), ..., r_j^(K) ).                                     (21)

Since the only form of overlap we are considering is occlusion, we limit the cooperative ψ
function to depend only on the minimum residual at a particular image point. We can
minimize the total residual error in all description estimates by allowing only the estimator
with the lowest residual to keep a given point. We use

    ψ_c( r_j^(k) ; r_j )  =  r_j^(k)   if ( r_j^(k) ≤ r_j^(l) for all H_l ∈ H ) and ( r_j^(k) < θ_MAX ),
                          =  0         otherwise,                                    (22)

where θ_MAX is the largest possible residual error value generated by the noise model N().
With this function, each estimator iteratively performs a least-squares estimation over the
points for which it has the lowest residual error, subject to the maximum residual threshold
θ_MAX. In the case of a Gaussian N() the distribution has infinite extent, so θ_MAX
is thus ignored. However, for noise models with compact support and/or when there
are true outliers which cannot be modeled by any hypothesis, the use of θ_MAX can improve
the segmentation result.
The information provided to an estimator by the cooperative ψ function allows the exclusion/segmentation
of outliers based not just on their deviation from the prior model, but
also on the ability of some other estimator to account for the point in question. Combining
Eqs. (20) and (22), our cooperative update rule is:

    s_j^(k)  =  1   if ( r_j^(k) ≤ r_j^(l) for all H_l ∈ H ) and ( r_j^(k) < θ_MAX ),
             =  0   otherwise.                                                       (23)
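The cooperative refinement loop can be sketched as follows, assuming generic fit and predict callbacks; the constant-model toy usage and all names are illustrative rather than the implementation used in this work.

```python
import numpy as np

def cooperative_refine(d, fit_fn, predict_fn, params, theta_max=np.inf, n_iters=10):
    """Cooperative support reallocation (cf. Eqs. 20-23): each point is assigned
    to the hypothesis with the lowest residual (subject to theta_max), and each
    hypothesis then re-estimates its parameters over its own support.
    fit_fn(d, s)  -> new parameter vector estimated from supported points
    predict_fn(x) -> full-length predicted signal for parameters x
    params        : list of K initial parameter vectors
    """
    params = [np.asarray(x, dtype=float) for x in params]
    K, n = len(params), d.size
    supports = np.zeros((K, n), dtype=bool)
    for _ in range(n_iters):
        residuals = np.abs(np.stack([predict_fn(x) for x in params]) - d[None, :])
        winner = residuals.argmin(axis=0)                 # lowest-residual hypothesis
        ok = residuals.min(axis=0) < theta_max            # residual ceiling
        supports = (winner[None, :] == np.arange(K)[:, None]) & ok[None, :]
        params = [fit_fn(d, s) if s.any() else x for x, s in zip(params, supports)]
    return params, supports

# toy usage with constant models: the fit is just the mean over the support
d = np.concatenate([np.full(100, 2.0), np.full(100, 5.0)]) \
    + 0.2 * np.random.default_rng(4).standard_normal(200)
fit = lambda d, s: np.array([d[s].mean()])
predict = lambda x: np.full(d.size, x[0])
params, supports = cooperative_refine(d, fit, predict, [[1.5], [5.5]])
print("refined parameters:", [p[0] for p in params])
```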
The problem of reconstructing a signal containing several homogeneous but disjoint chunks
has broad application in image processing and perception. We have experimented with our
framework both in the domain of shape-based segmentation of range images, and in the
domain of motion-based segmentation of image sequences.
4.1 Segmentation of range images using piecewise-polynomial model
A straightforward application of our method is the approximation of images with disconnected
regions consisting of piecewise polynomial patches. The model used in this example
consisted of linear basis functions for global approximation combined with thin-plate regularization
for local smoothing. We used the M-estimator described in section 2, with
polynomial basis functions. The image was modeled as a linear combination of these basis
functions with coefficient vector x^(k), where x^(k) is the parameter vector for hypothesis H_k.
A matrix B is constructed with columns corresponding to the polynomials of the model, so
that when the image is expressed as a data vector d the model can be expressed as

    y^(k)  =  B x^(k).
Since even poor polynomial fits will usually have some points where the local error of
approximation is zero (in particular at any point where the data and the estimated
surface accidentally cross), we regularize the squared error signal using a thin-plate model.
This will cause very small regions of support, such as those from accidental zeros, to be
reduced. Parameters are estimated from the data via Eq. (4), and the residual error is computed
by smoothing the squared approximation error with a thin-plate regularization operator,

    r^(k)  =  R( (d − B x^(k))² ),

where R() is a thin-plate regularization function [28]. In these examples we used a fixed
thin-plate smoothing parameter.
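A sketch of the range-image model components under stated assumptions: a second-order polynomial basis and a smoothed squared-error residual. A Gaussian filter is used here purely as a simple stand-in for the thin-plate regularization of the squared error; it is not the regularizer used in this work, and the surface coefficients are illustrative.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def poly2_basis(h, w):
    """Second-order polynomial basis evaluated on an h x w grid, returned as a
    (h*w, 6) matrix B with columns 1, u, v, u^2, u*v, v^2."""
    v, u = np.meshgrid(np.linspace(-1, 1, w), np.linspace(-1, 1, h))
    cols = [np.ones_like(u), u, v, u * u, u * v, v * v]
    return np.column_stack([c.ravel() for c in cols])

def smoothed_residual(d_img, B, x, sigma=2.0):
    """Squared approximation error of a polynomial surface hypothesis, smoothed
    with a Gaussian as a stand-in for thin-plate regularization of the
    squared error signal."""
    err2 = (d_img.ravel() - B @ x) ** 2
    return gaussian_filter(err2.reshape(d_img.shape), sigma)

# toy usage: a quadratic surface partly occluded by a constant region
h, w = 64, 64
B = poly2_basis(h, w)
x_true = np.array([0.5, 1.0, -0.5, 2.0, 0.0, 1.0])
d_img = (B @ x_true).reshape(h, w)
d_img[:, :20] = 3.0                               # occluding region
r = smoothed_residual(d_img, B, x_true)
print("mean residual inside / outside occlusion:", r[:, :20].mean(), r[:, 20:].mean())
```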
We show the results of our system on several different piecewise-polynomial images. First
we show an example using constant regions, to illustrate pedagogically the stages in process-
ing. (Since in this example the parameter space is scalar and the edges relatively large-scale,
we do not wish to suggest conventional techniques could not be successfully applied to this
image.) The subsequent examples utilize models that generate image regions of considerable
complexity, which would be difficult or impossible to segment using conventional edge-based
methods.
Figure 1(a) shows an image with six constant regions, two of which are spatially
disjoint. We constructed an initial hypothesis set by uniformly sampling
the (scalar) parameter space. Figure 1(b) shows the support maps computed for the
initial hypotheses, using a threshold θ large enough to ensure that the full range of parameters would be
"covered" by the hypothesis set. As can be seen, most hypotheses have "found" at
least one object. However, there is redundancy in the set, as each object is covered by
more than one hypothesis. Also, the segmentation at this stage is not perfect, since
the regularization has "smoothed" out the edges.
The next stage in processing is to select the subset of hypotheses which constitutes
a minimum length description of the signal, as described in Section 3.3.3. In all the
examples shown in this paper, the model overhead term used in the selection method,
α, was kept constant across models, and was set to be 1% of the total entropy of the
image being processed. The selection stage selected 6 hypotheses, shown in Figure
1(c), corresponding to the actual regions in the image. Parameters were re-estimated
for these hypotheses and the cooperative support update rule (Eq. (23)) applied,
yielding the final support maps shown in Figure 1(d). The segmentation is perfect,
as the cooperative rule eliminated the distortion in support shape induced by the
smoothing process.
Figure 2 shows the results of repeating this example for differing levels of additive
Gaussian noise. To avoid the "false alarm" situation described
in section 3.3.2, θ was set to be the inter-hypothesis distance or the standard deviation
of the noise σ, whichever was greater. The solution was insensitive to
small variations in α; perturbations of up to 40% of α were tolerated without impairing
the correct estimation of K in these experiments. While the precision of the
estimated support maps degrades as it becomes increasingly difficult to separate the
different populations with a threshold value, the important result is that in each case
K and the coarse model parameters were correctly estimated. This would allow for a
later stage of processing to apply more strict domain knowledge, if available.
• Next, we tested our method on second-order polynomial surfaces with high-order dis-
continuities. Figure 3 shows the construction of an image that contains three different
underlying surfaces, with a sinusoidally shaped discontinuity between two regions,
and a straight but high-order discontinuity between the other two. Note that the
high-order discontinuity is (usually) not visible to human observers.
An initial set of hypotheses was created by fitting parameters to 8x8 patches of the
image.
Figure 4 shows the results for the noise-free case, and Figure 5 for
the case where σ = 5. (Since hypotheses were initialized using windows rather
than parameters, no minimum θ was needed.) Despite the complicated edge structure,
nearly perfect segmentation is achieved in the noise-free case. And as in the previous
examples, the noisy case degrades the support map estimates, but does not prevent
the correct estimation of K.
Figure 1: Segmentation of image with constant regions. (a) image; (b) initial hypothesis
set; (c) selected hypotheses; (d) selected hypotheses after support was reallocated using the
cooperative support update rule.
Figure 2: Segmentation of image with constant regions for varying amounts of additive
Gaussian noise. (a) image; (b) histogram of image; (c) computed support maps.
Figure 3: Construction of synthetic image with complicated occlusion boundaries.
Figure 4: Segmentation of image with second-order regions. (a) image; (b) initial hypothesis set.
Figure 5: Segmentation of image with second-order regions and additive Gaussian noise
(σ = 5). (a) image; (b) initial hypothesis set; (c) selected hypotheses; (d) selected hypotheses
after support was reallocated using the cooperative support update rule.
Figure 6: (a) Range data for an arch sitting on a block. (b) Hypotheses found based on
fitting second-order polynomial surfaces.
Figure 6(a) shows a range image of an arch sitting on a block, obtained from a laser
range finder. 3 We applied our method as in the previous example. As shown in
Figure 6(b), two hypotheses were selected to account for the image. For the planar
hypothesis, several small groups of points which are disjoint from the main support
were successfully grouped together, despite being far (in the image) from the main
region of support. However, because the polynomial basis functions are not a perfect
model for the actual surfaces in the scene, a small number of points are misclassified
in the recovered support map. Better models of the surface (such as modal models
[27]) would help to alleviate this.
• To test the stability of segmentations using this method, we applied it to a series of
different views of a 3D figure. Figure 7(a) shows synthetic range data of a human
figure taken from three different viewpoints. Figure 7(b) shows the segmentation
produced for each view. Note that the system was able to find the major parts in
each view, and was able to distinguish the chest part from the torso part. Since the
second order surface model was only a coarse approximation of the actual shapes,
we used a relatively large threshold value θ. As a result some of the smaller
parts (e.g. hands and feet) were not described successfully, and certain aligned parts
were incorrectly merged into a single hypothesis (e.g. the leg parts.) Nonetheless the
important result is that the segmentations were stable across different views, recovering
the same essential part-based description to represent the figure.
We were able to use the resulting segmentations to recover a 3-D model from the
range data. A modal model part was fit to data for a corresponding part in each view
using the ThingWorld modeling system [27]. Camera position was known for the views
used in this example, so the part correspondences were straightforward to compute.
The recovered 3-D model is shown in Figure 7(c).
3 Range image obtained via anonymous FTP from the MSU Pattern Recognition and Image Processing
Lab.
Figure 7: (a) Synthetic range data for a human figure, taken from three views. (b) Color-coded
segmentations for each view obtained using second-order polynomial shape models.
(c) Segmentations from the previous figure were used as input to a 3-D shape estimation process.
Two views are shown of the recovered 3-D model.
4.2 Image sequence segmentation using global velocity field models
We have also applied our method to the domain of motion segmentation using global velocity
field models [8]. In this case the task is to decompose an image sequence into a set of layers
corresponding to homogeneous motions, based on a global model of coherent flow and a
gradient-based model of local velocity measurements. We adopt a global model of velocity
fields in the scene, using a linear combination of horizontal and vertical shifts together with
a looming field, where a, b, c, d, e are the parameters of the model. This model is appropriate for a large
class of common sequences, e.g. those that predominantly contain translations in 3-D. For
more complicated sequences higher-order velocity field models may be appropriate, such as
the affine model [29, 36].
In our model, each hypothesis is a global velocity field, which will imply a local velocity
at every point in the scene. To test the support of a hypothesis, we thus need to estimate
whether a particular velocity is present in a local region. We compute support by thresholding
a residual error signal (via Eqs. (6),(23)) which is small at points where the predicted
velocity is likely to be present, given the information in the image. According to the gradient
constraint, the spatial and temporal image derivatives at a point obey the equation
u ∂I/∂x + v ∂I/∂y + ∂I/∂t = 0,
where (u, v) is the hypothesized local image velocity and I denotes image intensity.
Simoncelli [35] has derived a probabilistic model for optic flow computation, which we use
to define the residual error of a velocity hypothesis. Assuming there is only a single motion
in the neighborhood, 4 we use the residual error derived from that model.
Image derivatives were computed using 5-tap linear filters, and the threshold value θ was set to
be the smallest value for which there were layers covering all regions of the scene which had
some motion or texture.
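The following sketch illustrates this support computation: a hypothesized global velocity field is evaluated at every pixel and the gradient-constraint residual is thresholded to produce a boolean support map. The exact parameterization of the velocity field and the normalization of the residual are assumptions; the text uses Simoncelli's probabilistic residual, whose precise form is not reproduced here.

import numpy as np

def velocity_field(params, xs, ys):
    # Assumed parameterization: shifts (a, b) plus a looming component c
    # centered at (d, e); the exact form used in the text is not shown here.
    a, b, c, d, e = params
    return a + c * (xs - d), b + c * (ys - e)

def velocity_support(Ix, Iy, It, params, theta):
    # Evaluate the hypothesized field at every pixel and threshold the
    # gradient-constraint residual, normalized by the local gradient energy.
    ys, xs = np.mgrid[0:Ix.shape[0], 0:Ix.shape[1]]
    u, v = velocity_field(params, xs, ys)
    residual = (Ix * u + Iy * v + It) ** 2 / (Ix ** 2 + Iy ** 2 + 1e-6)
    return residual < theta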
Here we show the results of our system on three different image sequences:
• We first constructed a transparent, flickering random dot image sequence similar to
the one used in the Husain, Treue, and Andersen experiment described in Section 2.
Our image sequence had two interleaved populations of moving dots, one undergoing
a looming motion, and the other undergoing a translating motion, as shown in Figure
8(a). Since our approach recovers a global motion estimate for each region in each
frame, no explicit pixel-to-pixel correspondences are needed over long sequences. After
each frame 10% of the dots died off and randomly moved to a new point on the 3-D
surface. We rendered ten frames of this motion sequence; the first frame is shown in
Figure
8(b).
4 for the extension of our model to the case of motion with additive transparency, see [9]
We adopted an initial set of velocity field hypotheses corresponding to 8 planar shifts,
plus a full-field looming hypothesis {(0, 0, 1)} with fixed offsets at the center of the
image.
We applied our method independently to each frame in the sequence, and in each
case two hypotheses were found to represent the motion information in the scene.
The hypotheses recovered for a particular frame are shown in Figure 8(c). One
estimated the looming motion in the sequence, and the other the translating motion.
The recovered motion parameter estimates were quite stable across different frames,
despite the random appearance and disappearance of sample points. An edge-based
segmentation method would have yielded nonsensical results on this image sequence,
indicating many "edges" (and thus many regions) in a scene which in fact contains
only two regions, transparently combined.
• Next, we ran our model on real intensity sequences with complicated occlusion, using
the same initialization conditions as in the previous example. Figure 9(a) shows a
frame of an image sequence containing two plants in the foreground with a person
moving behind them. The person is occluded by the plant's leaves in a complex
manner, so that the visible surface is broken up into many disconnected regions. Figure
9(b) shows the segmentation provided by our method when run on this sequence. Two
regions were found, one for the person and one for the two plants. Most of the person
has been correctly grouped together despite the occlusion caused by the plant's leaves.
Note that there is a cast shadow moving in synchrony with the person in the scene,
which is thus grouped with that description.
Figure 10 shows a frame from a similar image sequence, taken from the MPEG test
suite. The image sequence shows a house and a flower garden with an occluding
tree, and the camera motion is panning from right to left. We applied our method
as described above to the frame shown in the figure, except support windows rather
than parameter sampling were used to define the initial hypothesis set. We estimated
motion parameters from 256 8x8 windows, and used these as the initial hypotheses. A
support map over the entire image was computed for each hypothesis. The selection
stage recovered three layers corresponding to different depth planes in the sequence.
Both the house and garden regions are split in two by the occluding tree; our method
correctly grouped both of these in their respective layers. Note that areas without
significant spatial or temporal variation (such as the sky and center of the tree) are
not assigned to any layer.
Figure 8: (a) Overview of two-motion transparent, flickering random dot stimulus; (b) first
frame of sequence. (c) Two layers were found by our system for each frame, corresponding
to the two coherent motions in the sequence (see text for details).
Figure 9: (a) First frame from image sequence and (b) final hypotheses.
Figure 10: (a) First frame from image sequence and (b) final hypotheses.
4.3 Segmentation of Tracked Points using a Recursive Structure From Motion Model
Finally, we have applied our segmentation scheme to segment tracked points using a rigid-body
motion model. We use the recursive estimation approach to estimating Structure From
Motion (SFM) of Azarbayejani and Pentland [2]. This method formulates the estimation
problem as an instance of an Extended Kalman Filter (EKF) which solves for the optimal
estimate of the relative orientation and pointwise structure of an object at each time step,
given a set of explicitly tracked features which belong to a single coherent object. Our
extension is thus to handle the multiple-motion case, where there are an unknown number
of rigid-body motions in the scene.
First we briefly review the recursive structure from motion model presented in [2]. This
formulation embeds the constraints of the well-known relative orientation approach to structure
from motion [13, 14] directly into an EKF measurement relationship. Motion is defined
as the 3-D translation and rotation of the camera (or object) with respect to the first frame
in the sequence. Similarly, structure is represented as the 3-D locations of points seen in
the first camera frame. Since features are chosen from the image, there is actually only one
unknown parameter for each feature point: the depth along the perspective ray established
in the first camera frame.
The state vector for the EKF contains six motion parameters and N structure parameters,
x(t) = ( t_X, t_Y, t_Z, ρ(t), c_1, ..., c_N )^T ,    (31)
where t_X, t_Y, t_Z are the components of the 3-D translation, c_j is the depth for point j, and
the rotational vector ρ(t) contains three Euler angles which describe the (small) rotation
between time t - Δt and time t. After every time
step, the small rotation is composed into a global rotation quaternion which is maintained
external to the EKF.
Appendix B presents the derivation of the EKF measurement and update equations,
which compute the best linear least squares estimate and the error covariance. Figure 11
shows the results of this method applied to a moving soccer ball, when appropriate features
were selected and tracked by hand.
Given a sequence of features with an unknown number of motions and unknown seg-
mentation, we apply our hypothesize, test, and select method. We generate hypotheses by
taking spatially localized random samples of sets of features, and using those to compute
a motion estimate. We then compute a support map for each hypothesis; the residual error
function is defined to be the difference between the predicted feature locations for each
tracked feature and the feature locations actually observed. This can be computed by estimating
a structure parameter c given the motion parameters in the hypothesis, and then
generating the synthetic feature track predicted by those motion and structure parameters.
We use the sum of squared differences of the velocities (instantaneous differences) between
the predicted feature track and the observed feature track. The predicted feature track
is computed by fixing the motion parameters (setting their initial covariances to 0) and
running the SFM estimator.
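A minimal sketch of this support test for a rigid-motion hypothesis is given below; it assumes that predicted feature tracks have already been generated by running the structure-from-motion estimator with the hypothesis' motion parameters held fixed, and it compares frame-to-frame velocities as described above. The array layout and names are assumptions.

import numpy as np

def track_support(observed, predicted, theta):
    # observed, predicted: (F, N, 2) arrays of feature positions over F frames.
    # A feature supports the hypothesis when the summed squared difference of
    # frame-to-frame velocities between predicted and observed tracks is small.
    dv = np.diff(predicted, axis=0) - np.diff(observed, axis=0)
    residual = (dv ** 2).sum(axis=(0, 2))   # one residual per feature
    return residual < theta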
We show results of this method on two examples with multiple rigid-body motions.
Figure
12 shows the construction of a synthetic sequence of feature tracks from two
rotating spheres. For this example, we generated 200 hypotheses, taking random samples
of 4 feature tracks. Figure 13 shows the support maps for 8 of these hypotheses.
Many of the initial hypotheses yield degenerate motion/structure estimates, and accordingly
have few supported points, since their initial support will have come from
Figure 11: Image with (hand-selected) features and recovered 3-D structure. From [2].
Figure 12: Construction of synthetic sequence with two rigid motions, and no static segmentation
cues. Dots were placed on two spheres with different rotations; in the center image
the sphere is rotating right, in the right image the sphere is rotating up. The left image
shows the combined sequence (only the first frame is displayed).
heterogeneous or degenerate feature sets. However a few will exist that have initial
support that covers only a single object (with known probability). These will have
motion estimates that coarsely match the actual object motion, and will consequently
have support maps that cover a majority of the object. The selection method chose two
such hypotheses, which corresponded to the two original motions used to synthetically
create the sequence: their support maps are shown in Figure 14.
Figure
15 shows a real image sequence with two motions (egomotion and the motion
of the soccer ball). Features were tracked on this sequence using the (human-assisted)
method described in [2]. Hypotheses were constructed by taking random selections of
4 points and computing a motion estimate based only on these points. Support maps
were then computed as described above, to see which other points in the scene agreed
with the hypothesized motion. 500 hypotheses were generated; Figure 16 shows 8 of
these support maps. The selection stage chose two hypotheses, corresponding to the
actual motions in the scene. Figure 17 shows the support after refinement using the
cooperative update rule Eq. (23).
Figure 13: Eight (of 200) support maps for rigid-body motion hypotheses. Each hypothesis
was based on computing a motion estimate using 4 randomly selected features from the
sequence. Most solutions are degenerate, but a few have "locked on" to a real object in the
scene.
Figure 14: (a) Input sequence and (b) selected hypotheses for the synthetic rotation example.
Figure 15: Three frames from a sequence of intensity images with tracked features from an
image sequence with two motions: a rotating soccer ball and a moving ground plane due to
camera motion.
Figure 16: Eight support maps for different rigid-body motion hypotheses for the rolling ball
sequence. 500 such support maps were generated in total, by randomly selecting features
from the sequence, making a motion estimate with them, and then thresholding the residual
error of the predicted feature locations for all other features. Since the random selection
process will usually generate hypotheses which intermingle features from more than one
object, most solutions are degenerate, but a few are based on homogeneous initial features
and have "locked on" to a real object in the scene.
Figure 17: (a) Input sequence and (b) final hypotheses for the rolling soccer ball example.
5 Conclusion
We have presented an approach to the problem of describing images that contain multiple
objects or surfaces. We have argued that edge-based representations are insufficient for
images that contain transparent or complex occlusion phenomena, and instead propose a
multi-layer estimation framework which uses support maps to represent the segmentation
of the image into homogeneous (but possibly disconnected) chunks. This support-based
framework is an extension of the M-estimation methods found in the robust estimation
literature, which deal with the use of a single support map (inverse outlier map). Our
method utilizes many of these estimators in parallel, and the Minimum Description Length
principle to decide how many robust estimators are needed to describe a particular image.
A major advantage of this approach is that it allows the simultaneous use of multiple
models within a robust estimation framework. The description length formulation can consider
an arbitrary number of candidate support maps, each generated by a robust estimator
with a different initial condition or a different model. We have found that minima of the
description length function correspond to perceptually salient segmentations of the image,
and demonstrated our results on shape and motion segmentation tasks.
6 Acknowledgment
The authors would like to acknowledge the anonymous reviewers of this paper, whose comments
greatly helped the clarity and consistency of the presentation.
A Convergence of Selection Method
Under Eq. (19), S(d|a) will converge to a local maximum. We can see this by expanding the
time derivative with the chain rule,
dS(d|a)/dt = Σ_i ( ∂S(d|a)/∂a_i ) ( da_i/dt ) ,
and substituting da_i/dt via Eq. (19). Since each a_i is obtained through σ, and σ is
monotonically increasing (dσ/dt ≥ 0), every term of the resulting sum is non-negative; thus
dS(d|a)/dt is never negative, and S(d|a) never decreases under Eq. (19). Since the values of
f_i are bounded, S(d|a) is bounded, and thus must converge.
B Extended Kalman Filter for Rigid-body Structure from Motion
B.1 Single motion formalism
The following presentation of the EKF is adapted from [2]. Given the state vector described
by Eq. (31), the linear dynamics model is
x(t) = Φ x(t - Δt) + ξ(t) ,
where the state transition matrix Φ is block-structured from identity blocks I and Δt-scaled
identity blocks, and ξ(t) is an error term, modeled as multivariate Gaussian distributed white noise.
The observation at each time step is a set of N 2D measurements which is a nonlinear
function of the state vector:
y(t) = h( x(t) ) + η(t) .
The imaging geometry outlined in Section 2 (see [2] for details) defines the nonlinear function
h(x(t)), with an uncertainty term η(t), modeled as Gaussian distributed white noise.
The notation x̂(t_1 | t_2) represents the linear least squares estimate (LLSE) of x(t_1) based
upon y(τ), τ ≤ t_2, and Λ(t_1 | t_2) denotes the error covariance of the LLSE.
The EKF update equations are
x̂(t | t) = x̂(t | t - Δt) + K(t) [ y(t) - h( x̂(t | t - Δt) ) ] ,
Λ(t | t) = [ I - K(t) H(t) ] Λ(t | t - Δt) ,
where the Kalman gain is defined as
K(t) = Λ(t | t - Δt) H(t)^T [ H(t) Λ(t | t - Δt) H(t)^T + R ]^{-1}
and
H(t) = ∂h/∂x , evaluated at x̂(t | t - Δt).
The prediction step is
x̂(t + Δt | t) = Φ x̂(t | t) ,    Λ(t + Δt | t) = Φ Λ(t | t) Φ^T + Q_ξ ,
where Q_ξ is the covariance of the process noise ξ(t) and R is the covariance of the measurement
noise η(t).
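For concreteness, a generic EKF measurement update consistent with the equations above might look as follows. This is a textbook-style sketch, not the authors' implementation, and the argument names are assumptions.

import numpy as np

def ekf_update(x_pred, P_pred, y, h, H_jac, R):
    # One EKF measurement update: x_pred, P_pred are the predicted state and
    # covariance, h is the measurement function, H_jac its Jacobian at x_pred,
    # and R the measurement noise covariance.
    H = H_jac(x_pred)
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)          # Kalman gain
    x_new = x_pred + K @ (y - h(x_pred))
    P_new = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x_new, P_new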
B.2 Robust Formalism
Given a support map, we need to make an estimate of structure and motion parameters
based on only those features in the subset specified by the support map. In practice we
can do this by constructing a new filter with only the state variables corresponding to the
set of supported features. However there is also a direct way to do this in the full EKF
formulation, by adjusting the measurement noise covariance term R. (This approach also
has the advantage that it can be more easily generalized to the case of non-boolean support.)
For each feature track, R specifies the confidence of the measurement, and weights the
contribution of that feature to the estimation process through the Kalman gain matrix.
When R is 0, there is implicitly no noise in the measurement, and the feature vector contributes
fully to the estimation. If R is large, the contribution of the feature vector is
minimized.
If we wish to specify that a feature track is an outlier and should not be included in
the estimation, we can set its measurement noise model to be infinitely large, and its
contribution to the estimation will be infinitesimal. Thus, if we have a support map s
which indicates which points ought to be supported in the estimate, we can simply set the
measurement covariance of each feature to be the inverse of its support value (effectively
infinite for unsupported features), and run the full EKF structure-from-motion estimator. 5
5 In this case it can be advantageous to use the "information" form of the Kalman filter, since it is not
adversely affected by numerical stability problems with infinite covariances.
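The inverse-support weighting described above can be sketched as follows; the scale of the "infinite" variance and the per-feature block layout are assumptions.

import numpy as np

def support_to_measurement_noise(support, base_var=1.0, huge=1e12):
    # Unsupported features receive an effectively infinite measurement variance,
    # so they contribute (almost) nothing through the Kalman gain; supported
    # features keep the nominal variance.  Each feature is an (x, y) pair.
    per_feature = np.where(np.asarray(support, dtype=bool), base_var, huge)
    return np.diag(np.repeat(per_feature, 2))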
--R
"Ordinal characteristics of transparency"
"Pragnanz and soap-bubble systems: A theoretical exploration"
"The fitting of power series, meaning polynomials, illustrated on band-spectroscopic data"
"Robust Window Operators"
Visual Reconstruction.
"Segmentation by Minimal Description"
"Robust Estimation of a Multi-Layer Motion Repre- sentation"
"Separation of Transparent Motion into Layers using Velocity-Tuned Mechanisms"
"Stochastic relaxation, Gibbs distribution, and the Bayesian restoration of images"
"A quantitative approach to figure goodness"
Neural computation of decisions in Optimization Problems.
Robot Vision.
Relative orientation.
Robust Statistical Procedures SIAM CBMS-NSF series in Appl
"Surface interpolation in three-dimensional structure-from-motion perception"
Principles of Gestalt Psychology.
"Constructing Simple Stable Descriptions for Image Partitioning"
"Region Grouping using the Minimum-Description-Length Principle"
"A perceptual coding language for visual and auditory patterns"
"Robust Regression"
"Random Measure Fields and the Integration of Visual Information"
"Robust regression methods for computer vision: A review"
"The 2.1-D Sketch"
"Part Segmentation for Object Recognition"
"Automatic recovery of deformable part models"
"Closed form solutions for physically based shape modeling and recognition"
"Computational vision and regularization theory"
Numerical Recipes in C.
The Mathematical Theory of Communication
"The computation of visible surface representations"
"A universal prior for integers and estimation by minimum description length"
Stochastic Complexity in Statistical Inquiry.
"Probability Distributions of Optical Flow"
"Layered Representations for Image Sequence Coding"
"An Information Measure for Classification"
--TR
--CTR
Cheng Bing , Wang Ying , Zheng Nanning , Bian Zhengzhong, Object based segmentation of video using variational level sets, Machine Graphics & Vision International Journal, v.14 n.2, p.145-157, January 2005
Hichem Frigui , Raghu Krishnapuram, A Robust Competitive Clustering Algorithm With Applications in Computer Vision, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.5, p.450-465, May 1999
Michael S. Langer , Richard Mann, Optical Snow, International Journal of Computer Vision, v.55 n.1, p.55-71, October
Han , Wei Xu , Yihong Gong, Video object segmentation by motion-based sequential feature clustering, Proceedings of the 14th annual ACM international conference on Multimedia, October 23-27, 2006, Santa Barbara, CA, USA
E. P. Ong , M. Spann, Robust Optical Flow Computation Based on Least-Median-of-Squares Regression, International Journal of Computer Vision, v.31 n.1, p.51-82, Feb. 1999
Harpreet S. Sawhney , Serge Ayer, Compact Representations of Videos Through Dominant and Multiple Motion Estimation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.8, p.814-830, August 1996
Nelson L. Chang , Avideh Zakhor, Constructing a Multivalued Representation for View Synthesis, International Journal of Computer Vision, v.45 n.2, p.157-190, November 2001
Stan Sclaroff , Lifeng Liu, Deformable Shape Detection and Description via Model-Based Region Grouping, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.5, p.475-489, May 2001
Charles Kervrann , Alain Trubuil, Optimal Level Curves and Global Minimizers of Cost Functionals in Image Segmentation, Journal of Mathematical Imaging and Vision, v.17 n.2, p.153-174, September 2002
Michael J. Black , Allan D. Jepson, Estimating Optical Flow in Segmented Images Using Variable-Order Parametric Models With Local Deformations, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.10, p.972-986, October 1996
Charles Kervrann , Mark Hoebeke , Alain Trubuil, Isophotes Selection and Reaction-Diffusion Model for Object Boundaries Estimation, International Journal of Computer Vision, v.50 n.1, p.63-94, October 2002
David J. Fleet , Michael J. Black , Yaser Yacoob , Allan D. Jepson, Design and Use of Linear Models for Image Motion Analysis, International Journal of Computer Vision, v.36 n.3, p.171-193, Feb.-March 2000
Mohamed Ben Hadj Rhouma , Hichem Frigui, Self-Organization of Pulse-Coupled Oscillators with Application to Clustering, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.2, p.180-195, February 2001 | transparency;robust estimation;structure from motion;range segmentation;segmentation;perceptual organization;multiple models;motion segmentation |
628679 | Linear and Incremental Acquisition of Invariant Shape Models From Image Sequences. | AbstractWe show how to automatically acquire Euclidean shape representations of objects from noisy image sequences under weak perspective. The proposed method is linear and incremental, requiring no more than a pseudoinverse. A nonlinear, but numerically sound preprocessing stage is added to improve the accuracy of the results even further. Experiments show that attention to noise and computational techniques improves the shape results substantially with respect to previous methods proposed for ideal images. | Introduction
In model-based recognition, images are matched against stored libraries of three-dimensional object
representations, so that a good match implies recognition of the object. The recognition process
is greatly simplified if the quality of the match can be determined without camera calibration,
namely, without having to compute the pose of each candidate object in the reference system of the
camera. For this purpose, three-dimensional object representations have been proposed [34] that are
invariant with respect to similarity transformations, that is, rotations, translations, and isotropic
scaling. These are exactly the transformations that occur in the weak perspective projection model
[1], where images are scaled orthographic projections of rotated and translated objects. Because of
its linearity, weak perspective strikes a good balance between mathematical tractability and model
generality.
In this paper, we propose a method for acquiring a similarity-invariant representation from a
sequence of images of the objects themselves. Automatic acquisition from images avoids the tedious
and error prone process of typing three-dimensional coordinates of points on the objects, and
makes expensive three-dimensional sensors such as laser rangefinders unnecessary. However, model
recognition techniques such as geometric hashing have been shown [9] to produce false positive
This work was done at IBM T.J. Watson Research Center, Yorktown Heights, NY.
† This work was supported by Grant No. IRI-9201751 from the NSF.
matches with even moderate levels of error in the representations or in the images. Consequently,
we pay close attention to accuracy and numerical soundness of the algorithms employed, and derive
a computationally robust and efficient counterpart to the schemes that previous papers discuss
under ideal circumstances.
To be sure, several systems have been proposed for computing depth or shape information
from image sequences. For instance, [28, 32, 23, 18, 27, 16] identify the minimum number of
points necessary to recover motion and structure from two or three frames, [2, 21] recover depth
from many frames when motion is known, [17, 14, 3, 6, 11] consider restricted or partially known
motion, [30] solves the complete multiframe problem under orthographic projection, and [26, 13]
propose multiframe solutions under perspective projection.
Conceivably, one could use one of these algorithms to determine the complete three-dimensional
shape and pose of the object in an Euclidean reference system, and process the results to achieve
similarity invariance. However, a similarity-invariant representation is weaker than a full representation
with pose, since it does not include the orientation of the camera relative to the object.
Consequently, the invariant representation contains less information, and ought to be easier to
compute. This intuition is supported by experiments with complete calibration and reconstruction
algorithms, which, given a good initial guess of the shape of the object, spend a large number
of iterations modifying the parameters of the calibration and pose matrices, without affecting the
shape by much 1 .
In this paper, we show that this is indeed the case. (We assume weak perspective projection;
a few recent papers described structure from motion algorithms without camera calibration and
with perspective projection [7, 22].) Specifically, we compute a similarity-invariant representation
of shape both linearly and incrementally from a sequence of weak perspective images. This is a
very important gain. In fact, a linear multiframe algorithm avoids both the instability of two- or
three-frame recovery methods and the danger of local minima that nonlinear multiframe methods
must face. Moreover, the incremental nature of our method makes it possible to process images
one at a time, moving away from the storage-intensive batch methods of the past.
Our acquisition method is based on the observation that the trajectories that points on the
object form in weak perspective image sequences can be written as linear combinations of three of
the trajectories themselves, and that the coefficients of the linear combinations represent shape in
an affine-invariant basis. This result is closely related to, but different from, the statement that
any image in the sequence is a linear combination of three of its images [31].
In this paper, we also show that the optional addition of a nonlinear but numerically sound
stage, which selects the most suitable basis trajectories, improves the accuracy of the representation
even further. This leads to an image-to-model matching criterion that better discriminates between
new images that depict the model object and those that do not. In order to compare our method to
existing model acquisition (or structure from motion) methods, we describe a simple transformation,
by which we compute a depth representation from the similarity-invariant representation computed
by our algorithm.
In the following, we first define the weak perspective imaging model (Section 2). We review the
similarity-invariant shape representation and the image-to-model matching measure (Section 3).
We then introduce our linear and incremental acquisition algorithm, as well as the nonlinear pre-processing
procedure (Section 4). Finally, we evaluate performance with some experiments on real
1 B. Boufama, personal communication.
image sequences (Section 5).
2 Multiframe Weak Perspective
Under weak perspective, a point p_n on an object can be related to the corresponding image point
w_mn in frame m by a scaling, a rotation, a translation, and a projection:
w_mn = P ( s_m R_m p_n + τ_m ) ,    (1)
where R_m is an orthonormal 3 × 3 matrix, τ_m is a three-dimensional translation vector, s_m is a
scalar, and P is the projection operator that simply selects the first two rows of its argument. The
two components of w_mn are thus:
x_mn = s_m i_m^T p_n + a_m ,    y_mn = s_m j_m^T p_n + b_m ,    (2)
where the orthonormal vectors i_m^T, j_m^T are the first two rows of R_m, and a_m, b_m are the first two
components of τ_m. In a sequence of images, feature points can be extracted and tracked with one
of the methods described in [24, 29, 8] (see the experiments in section 5 below). If N points are
tracked in M frames, the equations (2) are repeated MN times, and can be written in matrix form as
W̄ = R P + t 1^T ,    (3)
where W̄ is the 2M × N matrix that stacks the x_mn measurements (rows 1, ..., M) on top of the
y_mn measurements (rows M + 1, ..., 2M), the 2M × 3 matrix R stacks the scaled rotation rows
s_m i_m^T and s_m j_m^T, the 3 × N matrix P collects the points p_n, the 2M-vector
t = (a_1, ..., a_M, b_1, ..., b_M)^T, and 1 is a vector of N ones. Thus, W̄ collects the image
measurements, R represents both scaling and rotation, P is shape, and t is translation. In Section 4,
we show that R and P need not in fact be computed explicitly in order to compute a
similarity-invariant representation.
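To make the imaging model concrete, the sketch below generates the measurement matrix of Eq. (3) from synthetic data, assuming the row ordering used above (all x rows followed by all y rows); the function name and array layouts are illustrative assumptions.

import numpy as np

def weak_perspective_measurements(points, rotations, scales, translations):
    # points: 3 x N array, rotations: list of M 3x3 matrices, scales: length M,
    # translations: M x 2 array holding (a_m, b_m).  Returns the 2M x N matrix
    # of Eq. (3), with the M x-rows stacked on top of the M y-rows.
    M, N = len(rotations), points.shape[1]
    W_bar = np.zeros((2 * M, N))
    for m in range(M):
        proj = scales[m] * (rotations[m] @ points)[:2, :]
        W_bar[m, :] = proj[0, :] + translations[m][0]
        W_bar[M + m, :] = proj[1, :] + translations[m][1]
    return W_bar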
3 Review of the Similarity-Invariant Representation
Starting with Eq. (3) as a multiframe imaging model, we now describe how to define a shape representation
that is invariant with respect to similarity transformations, that is, rigid transformations
and isotropic scalings [34]. Specifically, we work towards similarity invariance in three steps:
1. invariance to translation (Section 3.1);
2. invariance to affine transformations (Section 3.2);
3. invariance to similarity transformations (Section 3.3).
For performance evaluation only, we will also discuss the
4. computation of depth (Section 3.4).
Using the invariant representations, we describe an image-to-model matching measure (Sec-
tion 3.5). For completeness, we outline how to compute the pose of the camera (Section 3.6),
although this is not used in this paper.
3.1 Translation Invariance
Invariance with respect to translation is easily achieved by measuring the coordinates in P using
some linear combination of the points themselves as a reference origin: if the object translates, so
does the reference point, and the representation does not change. In [16], it is suggested to pick an
object point as the reference origin. In [30], the centroid of all the points is used instead (this is
the optimal translation in a least-square sense [11]). In practice, our experiments show that there
is little difference between these two choices.
Let now t be a 2M-dimensional vector that collects the two coordinates of the projection of
the reference point in each frame. In a suitable reference system where the reference point is the
origin, this vector is the same as the vector t of Eq. (3). The image measurements in the matrix
W̄ can now be centered by subtracting t from each column to yield
W = W̄ - t 1^T ,    (4)
so that Eq. (3) becomes
W = R P .    (5)
3.2 Affine Transformation Invariance
The 2M × 3 matrix R in Eq. (5) is built from 3 × 3 orthonormal matrices and isotropic scaling
factors (see Eq. (2)). Therefore, corresponding rows in the upper and lower halves of R (that is,
rows m and m + M, for m = 1, ..., M) must be mutually orthogonal and have the same norm s_m.
If these orthogonality constraints are satisfied, the corresponding P represents Euclidean shape. In particular, the columns of P are the three-dimensional
coordinates of the object points with respect to some orthonormal reference basis.
Invariance with respect to affine transformations is achieved by replacing this basis by one that
is more intimately related to the shape of the object. Specifically, the basis is made by three
of the object points themselves, that is, by the vectors from the reference origin to the three
points, assumed not to be coplanar with the origin. This basis is no longer orthonormal. The new
coordinates were called affine in [16]. If now the object undergoes some affine transformation, so
do the basis points, and the affine coordinates of the N object points do not change.
The choice of the three basis points can be important. In fact, the requirement that the points
be noncoplanar with the origin is not an all-or-nothing proposition. Four points can be almost
coplanar, and with noisy data this is almost as bad as having exactly coplanar points. We discuss
this issue in Section 4, where we propose a method that selects a basis as far away as possible from
being coplanar with the origin. In the experiments of Section 5, this additional nonlinear stage is
shown to improve markedly the quality of the shape representation.
Notice that in the new affine basis the three selected basis points have coordinates (1, 0, 0),
(0, 1, 0), and (0, 0, 1), so that the new 3 × N matrix A of affine coordinates is related to the
Euclidean matrix P of Eq. (5) by the 3 × 3 linear transformation
P = P_b A ,    (6)
where P_b is the submatrix of P that collects the three selected basis points.
If we substitute Eq. (6) into the centered projection Eq. (5), we obtain W = R P_b A. However,
because the 3 × 3 submatrix of A corresponding to the basis points is the identity matrix, we see
that W_b = R P_b is a submatrix of W (the columns of the three basis points), so that
W = W_b A .    (7)
In more geometric terms, Eq. (7) expresses the following key result:
all the image trajectories (W ) of the object points can be written as a linear combination
of the image trajectories (W b ) of three of the points. The coefficients (A) of the linear
combinations are the three-dimensional coordinates of the corresponding points in space
in the affine three-dimensional basis of the points themselves.
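A direct (batch) implementation of this key result is a single least-squares solve; the sketch below is illustrative and assumes the trajectories have already been centered and the basis columns chosen.

import numpy as np

def affine_shape(W, basis_idx):
    # Solve W = W_b A in the least-squares sense for the 3 x N affine shape
    # matrix, given the centered 2M x N trajectories W and the indices of the
    # three basis columns.
    W_b = W[:, basis_idx]
    A, _, _, _ = np.linalg.lstsq(W_b, W, rcond=None)
    return A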
Notice the analogy and difference between this result and the statement, made in [31], that
under weak perspective any image of an object is a linear combination of three of its views. We
are saying that any trajectory is a linear combination of three trajectories, while they are saying
that any snapshot is a linear combination of three snapshots. The concise matrix equation in (7)
contains these two statements in a symmetric form: Ullman and Basri read the equation by rows,
we read it by columns. When we discuss the recognition stage in Section 3.5, we consider the case
in which new rows (unfamiliar views) are added to W , and so we revert to Ullman and Basri's
reading of Eq. (7).
3.3 Similarity Invariance
Although conceptually a useful intermediate step, affine transformations are too general. Affine-
invariant representations are also invariant with respect to similarity transformations, since the
latter are a special case of the former. However, a representation that is invariant with respect to
similarity and not, say, shearing (an affine transformation) will be able to discriminate among more
models: for instance, the roman letter 'A' will not be confused with an italic 'A'. In other words,
it is best to have representations that are invariant exactly with respect to the class of possible
transformations between model and viewer reference systems. Under weak perspective, this is the
class of similarity transformations, a proper subset of affine transformations.
To achieve invariance with respect to similarity transformations, we augment the affine representation
introduced above with metric information about the three basis points. Of course, we
cannot simply list the coordinates of the three basis points in a fixed reference system, since these
coordinates would not be invariant with respect to rotation and scaling. Instead, we introduce the
Gramian matrix of the three basis points, defined as follows [34]:
g_kl = p_k^T p_l ,    k, l = 1, 2, 3,    (8)
where p_1, p_2, p_3 are the vectors from the reference origin to the three basis points, that is, the
columns of P_b. In Section 4.2 we normalize G to make it invariant to scaling.
The Gramian is a symmetric matrix, and is defined in terms of the Euclidean coordinates of the
basis points (see Eq. (1)). However, we show in Section 4 that G can be computed linearly from
the images, without first computing the depth or pose of the object.
The pair of matrices (A; G) is our target representation. Because of centering and the appropriate
selection of the affine basis, A is invariant with respect to translations and affine transformations.
The Gramian G, on the other hand, can be normalized for scale invariance. Furthermore, except
for scale, its entries are cosines of the angles between the vectors of P b (see Eq. (8)). This makes G
invariant to rotations and, because of centering, to translations, but sensitive to affine transformations
that change angles between points. In the next subsection we show constructively that the
representation (A, G) contains complete information about the object's shape, but not directly about its pose
in each image.
3.4 Depth Map
Determining the depth of the object requires expressing its shape in an orthonormal system of
reference, that is, to compute the matrix P of Eq. (5). We now show that the shape Gramian G of
Eq. (8) contains all the necessary information. In fact, let W b be the matrix of the basis trajectories
introduced in Eq. (7), and let P b be the coordinates of the corresponding basis points in space in
an orthonormal reference system (see Eq. (6)). Then, the definition (8) of the Gramian can be
rewritten as
G = P_b^T P_b .    (9)
Suppose now that T is the Cholesky factor of the Gramian G. We recall [10] that the Cholesky
factor of a symmetric positive definite matrix G is the unique upper triangular matrix T with
positive diagonal entries such that
G = T^T T .    (10)
Eqs. (9) and (10) are formally similar factorizations of G. We claim that P_b can differ from
T only by a rotation or a mirror transformation, so that T is in fact the representation of the three
selected basis points on the object in an orthonormal frame of reference. The projection equation
(5) does not specify the particular orientation of the orthonormal axes of the underlying reference
system, so P_b and T can be taken to be the same matrix: P_b = T.
The remaining ambiguity due to the possibility that T is a mirror reflection of the "true" P b is
intrinsic in the weak perspective model (Necker reversal), and cannot be eliminated. Similarly, the
overall scale cannot be determined. In fact, we will see in Section 4.2 that G is the solution of a
homogeneous system of linear equations.
From the triangular structure of T we can easily determine where the new orthonormal basis
vectors are with respect to the points in space: the first basis vector points from the origin to point
1, because the first entry of column 1 in T is its only nonzero entry. Since the third entry of the
second column of T is zero, the second basis vector, orthogonal to the first, must be in the plane
of the origin and points 1 and 2. Finally, the third basis vector is orthogonal to the first two.
To prove our claim that P_b and T can only differ by orthonormal transformations, we first
observe that both P_b and T are invertible (the basis points are not coplanar with the origin, and T
has positive diagonal entries), which implies that T and P_b must be related by a linear
transformation. In other words, there must be a 3 × 3 matrix Q such that P_b = Q T. However, by
replacing this expression in P_b^T P_b = T^T T we obtain T^T Q^T Q T = T^T T, hence
Q^T Q = I (the 3 × 3 identity), so Q must be orthonormal.
In summary, we have the following method for computing the shape matrix P of Eq. (5):
determine the Gramian G by the linear method of Section 4, take its Cholesky factorization
G = T^T T, and let T be the transformation of the affine shape matrix A into the new orthonormal basis.
Namely:
P = T A .
Notice that the three basis points i, j, k, whose coordinates in A are the identity matrix, are
transformed into the columns of T .
We emphasize once more that this last decomposition stage need not be performed for the
computation of the similarity-invariant shape representation. Furthermore, this stage can fail in
the presence of noise. In fact, the matrix G can be Cholesky-decomposed only if it is positive
definite. Bad data can cause this condition to be violated.
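The decomposition stage of Section 3.4 can be sketched as follows; note that, as remarked above, it fails when noise makes G lose positive definiteness.

import numpy as np

def euclidean_shape(G, A):
    # G = T^T T with T upper triangular (numpy's cholesky returns the lower
    # factor L with G = L L^T, so T = L^T); then P = T A.  Raises LinAlgError
    # when noise makes G lose positive definiteness.
    T = np.linalg.cholesky(G).T
    return T @ A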
3.5 Recognition
Once the similarity-invariant representation (A, G) has been determined from a given sequence of
images, it can be used on new, unfamiliar views to determine whether they contain the object
represented by (A, G). In fact, from Eq. (5) and Eq. (9) we obtain
W_b G^{-1} W_b^T = R P_b (P_b^T P_b)^{-1} P_b^T R^T = R R^T .
If we write out the relevant terms of this equation, we have, for all m,
x_m^T G^{-1} x_m = s_m^2 = y_m^T G^{-1} y_m   and   x_m^T G^{-1} y_m = 0 ,
so that in particular
x_m^T G^{-1} x_m - y_m^T G^{-1} y_m = 0   and   x_m^T G^{-1} y_m = 0 ,    (11)
where the vectors x_m^T and y_m^T are the rows of the upper and lower half of the centered image
measurement matrix W_b. Namely, x_m and y_m are the centered image measurements of the basis
points in frame m.
The equations in (11) provide strong constraints, capturing all the information that can be obtained from
a single image, since all the images that satisfy Eq. (11) are a possible instance of the object
represented by G. The two equations in Eq. (11) can be used in two ways: during recognition, the
given G can be used to check whether new image measurements x^T and y^T of the basis points represent the same three
basis points as in the familiar views, thus yielding a key for indexing into the object library. During
acquisition of the shape representation, on the other hand, G is the unknown, and Eq. (11) can be
solved for G. In this section, we review the use of G for recognition (as discussed at length in [34]).
We discuss its computation, the major point of the present paper, in Section 4.
Essentially, the recognition scheme checks the residues between the left- and the righthand sides
of the equations in (11). The residues, however, must be properly normalized to make the matching
criterion independent of scale. The most straightforward definition of matching measure for the
three basis points is the following [34]:
The function (12) is a quadratic image invariant that can be used to check the basis points.
The matching of all the other points, on the other hand, can be verified by using the affine part
A of the similarity-invariant shape representation. In fact, all the terms of Eq. (7) are known once
a new frame becomes available, so the following matching criterion suggests itself naturally: for each
point l, accumulate the (suitably normalized) residues between the point's image coordinates in the
new frame and the linear combination, with coefficients a_l, of the new image coordinates of the
three basis points (criterion (13)). Here the image coordinates of the basis points are measured in
the new frame, and the vectors a_l are the columns of A.
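A sketch of the frame-consistency test based on Eq. (11) is shown below; the normalization and tolerance are assumptions, since the exact form of the criteria (12) and (13) is not reproduced here.

import numpy as np

def frame_consistent(x_m, y_m, G, tol=1e-2):
    # Check the two constraints of Eq. (11) for one new frame, given the
    # centered basis measurements x_m, y_m (3-vectors) and the stored Gramian.
    # The normalization below is only an assumed stand-in for criterion (12).
    H = np.linalg.inv(G)
    qx, qy, qxy = x_m @ H @ x_m, y_m @ H @ y_m, x_m @ H @ y_m
    scale = qx + qy + 1e-12
    return abs(qx - qy) / scale < tol and abs(qxy) / scale < tol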
3.6 Pose
The computation of pose is beyond the scope of the present paper. For completeness, we now
briefly outline methods to compute pose.
Since the Gramian matrix G defined in Eq. (8) contains the complete 3D information on the
basis points, and given the x and y coordinates of the basis points in frame m, it is
straightforward to obtain their depth z in the coordinate system M corresponding to the camera's
orientation in frame m. Namely, the coordinate system whose X-Y plane is parallel to the
image plane in frame m, and whose depth axis Z is perpendicular to the image plane. The known
coordinates of the basis points in system M and in the orthonormal system defined in Section 3.4
can be used to compute the transformation between the two coordinate systems, e.g., using the
method described in [11]. This transformation gives the pose of the camera in frame m.
Alternatively, it is possible to compute pose directly from the image measurements, using
Eq. (5). Tomasi & Kanade [30] described a relatively simple non-linear method, which computes
pose from the Singular Value Decomposition of the centered measurement matrix W .
4 The Algorithm
In this section, we show how to compute the affine shape matrix A (Section 4.1) and the Gramian
G of the basis points (Section 4.2) linearly and incrementally from a sequence of images. We then
show how to choose three good basis points i, j, k (Section 4.3). This algorithm can use as little
as two frames and five points for computing matrix A, and as little as three frames and four points
for computing matrix G. More data can be added to the computation incrementally, if and when
available.
4.1 The Affine Shape Matrix
The affine shape matrix A is easily computed as the solution of the overconstrained linear system
(7), which we repeat for convenience:
W = W_b A .
Recall that W is the matrix of centered image measurements, and W b is the matrix of centered
image measurements of the basis points.
It is well known from the literature of Kalman filtering that linear systems can be solved
incrementally (see for instance [20]) one row at a time. The idea is to realize that the expression
for the solution,
A = W_b^+ W ,   where   W_b^+ = (W_b^T W_b)^{-1} W_b^T
is the pseudoinverse of W_b, is composed of two parts whose size is independent of the number of
image frames, namely, the so-called covariance matrix
Q = (W_b^T W_b)^{-1}
of size 3 × 3, and the 3 × N matrix
S = W_b^T W ,
so that A = Q S. Both Q and S can be updated incrementally every time a new row w^T is added
to W (so the corresponding row w_b^T is also added to W_b). Specifically, the matrices Q_+ and S_+
after the update are given by
Q_+ = ( I - (Q w_b w_b^T) / (1 + w_b^T Q w_b) ) Q   and   S_+ = S + w_b w^T ,
where I is the 3 × 3 identity matrix. For added efficiency, this pair of equations can be manipulated
into the following update rule for A [20]:
A_+ = A + Q_+ w_b ( w^T - w_b^T A ) .
Note that the computation of A requires at least two frames.
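The incremental update can be sketched as follows; the Sherman-Morrison form avoids any matrix inversion, and the initialization of Q (for example, a large multiple of the identity when starting from A = 0) is a standard recursive least squares choice, stated here as an assumption.

import numpy as np

def update_affine_shape(A, Q, w, basis_idx):
    # Process one new centered image row w (length N).  w_b collects its basis
    # entries and Q approximates (W_b^T W_b)^{-1} over the rows seen so far;
    # the rank-one (Sherman-Morrison) update avoids any matrix inversion.
    w_b = w[basis_idx]
    Qw = Q @ w_b
    Q_new = Q - np.outer(Qw, Qw) / (1.0 + w_b @ Qw)
    A_new = A + np.outer(Q_new @ w_b, w - w_b @ A)
    return A_new, Q_new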
4.2 The Gramian
For each frame m, the two equations in (11) define linear constraints on the entries of the inverse
Gramian H = G^{-1}, which can therefore be computed as the solution of a linear system. This system, however,
is homogeneous, so H can only be computed up to a scale factor.
To write this linear system in the more familiar form C h = 0, first notice that H is a symmetric
3 × 3 matrix, so it has six distinct entries h_ij, 1 ≤ i ≤ j ≤ 3. Let us gather those entries
in the vector
h = ( h_11, h_12, h_13, h_22, h_23, h_33 )^T .
Furthermore, given two 3-vectors a and b, define the operator
z(a, b) = ( a_1 b_1, a_1 b_2 + a_2 b_1, a_1 b_3 + a_3 b_1, a_2 b_2, a_2 b_3 + a_3 b_2, a_3 b_3 )^T ,
so that a^T H b = z(a, b)^T h. Then, the equations in (11) are readily verified to be equivalent to
the 2M × 6 homogeneous system
C h = 0 ,    (14)
where the two rows of C contributed by frame m are z(x_m, x_m)^T - z(y_m, y_m)^T and z(x_m, y_m)^T.
A unit norm solution to this linear system is reliably and efficiently obtained from the singular
value decomposition C = U_C Σ_C V_C^T, as the sixth column of V_C (the right singular vector
associated with the smallest singular value). Because this linear system is overconstrained as soon
as M ≥ 3, the computation of H, and therefore of the Gramian G = H^{-1}, can be made insensitive to noise if sufficiently
many frames are used. Notice that the fact that the vector h has unit norm automatically
normalizes the Gramian.
Alternatively, in order to obtain an incremental algorithm for the computation of the Gramian
G, Eq. (14) can be solved with pseudo-inverse. (The incremental implementation of pseudo-inverse
was discussed in Section 4.1.) However, the method of choice for solving a homogeneous linear
system, which avoids rare singularities, is the method outlined above using SVD.
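A compact sketch of the batch SVD solution described above follows; the ordering of the entries of h and the sign normalization of H are implementation choices.

import numpy as np

def z_op(a, b):
    # a^T H b = z(a, b) . h for symmetric H with h = (h11,h12,h13,h22,h23,h33).
    return np.array([a[0]*b[0], a[0]*b[1] + a[1]*b[0], a[0]*b[2] + a[2]*b[0],
                     a[1]*b[1], a[1]*b[2] + a[2]*b[1], a[2]*b[2]])

def gramian_from_basis_tracks(W_b):
    # W_b: centered 2M x 3 basis trajectories.  Build the 2M x 6 matrix C of
    # Eq. (14), solve C h = 0 by SVD, and invert H to obtain the Gramian.
    M = W_b.shape[0] // 2
    rows = []
    for m in range(M):
        x_m, y_m = W_b[m], W_b[M + m]
        rows.append(z_op(x_m, x_m) - z_op(y_m, y_m))
        rows.append(z_op(x_m, y_m))
    h = np.linalg.svd(np.array(rows))[2][-1]     # smallest singular vector
    H = np.array([[h[0], h[1], h[2]],
                  [h[1], h[3], h[4]],
                  [h[2], h[4], h[5]]])
    if np.trace(H) < 0:
        H = -H                                   # fix the overall sign
    return np.linalg.inv(H)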
4.3 Selecting a Good Basis
The computation of the similarity-invariant representation (A; G) is now complete. However, no
criterion has yet been given to select the three basis points i, j, k. The only requirement so far
has been that the selected points should not be coplanar with the origin. However, a basis can be
very close to coplanar without being strictly coplanar, and in the presence of noise this is almost
equally troublesome.
To make this observation more quantitative, we define a basis to be good if for any vector v the
coordinates a in that basis do not change much when the basis is slightly perturbed. Quantitatively,
we can measure the quality of the basis by the norm of the largest perturbation of a that is obtained
as v ranges over all unit-norm vectors. The size of this largest perturbation turns out to be equal
to the condition number of W b , that is, to the ratio between its largest and smallest singular values
[10].
The problem of selecting three columns W b of W that are as good as possible in this sense is
known as the subset selection problem in the numerical analysis literature [10]. In the following, we
summarize the solution to this problem proposed in [10]:
1. compute the singular value decomposition W = U Σ V^T of W,
2. apply QR factorization with column pivoting to the right factor V^T.
The first three columns of the permutation matrix Π are all zero, except for one entry in each
column, which is equal to one. The row subscripts of those three nonzero entries are the desired
subscripts i, j, k.
The rationale of this procedure is that singular value decomposition preconditions the shape
matrix, and then QR factorization with column pivoting brings a well conditioned submatrix in
front of the triangular factor of the QR factorization. (See [10] for more details, as well as for definitions and algorithms for singular value
decomposition and QR factorization.)
Although heuristic in nature, this procedure has proven to work well in all the cases we considered
(see also [5] for a discussion of possible alternatives). Both the singular value decomposition
and the QR factorization of an M × N matrix can be performed in time O(MN²), so this heuristic
algorithm is much more efficient than the O(MN³) brute-force approach of computing the
condition numbers of all the possible bases.
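The subset selection step can be sketched with standard numerical routines; whether the pivoted QR is applied to the full right factor or only to its first three rows is an implementation choice, and this sketch uses the latter.

import numpy as np
from scipy.linalg import qr

def select_basis(W):
    # Subset selection: SVD of the centered trajectories, then QR with column
    # pivoting on the leading three rows of V^T; the first three pivoted
    # columns index the chosen basis points.
    _, _, Vt = np.linalg.svd(W, full_matrices=False)
    _, _, piv = qr(Vt[:3, :], pivoting=True)
    return np.sort(piv[:3])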
4.4 Summary of the Algorithm
The following steps summarize the algorithm for the acquisition of the similarity-invariant representation
(A, G) from a sequence W̄ of images under weak perspective (see Eq. (3)).
1. Center the measurement matrix with respect to one of its columns or the centroid of all its
columns:
W = W̄ - t 1^T ,
where t is either the first column of W̄ or the average of all its columns.
2. (optional) Find a good basis i, j, k for the columns of W as follows:
(a) compute the singular value decomposition W = U Σ V^T of W,
(b) apply QR factorization with column pivoting to the right factor V^T.
The row subscripts of the three nonzero entries in the first three columns of Π are i, j, k.
These are the indices of the chosen basis points.
3. Compute the solution A to the overconstrained system W = W_b A by adding one row at a
time. Specifically, initialize A to a 3 × N matrix of zeros (and the 3 × 3 matrix Q to a large
multiple of the identity). Let w^T be a new row of W, let w_b^T collect entries i, j, k of w, and let
Q_+ = ( I - (Q w_b w_b^T) / (1 + w_b^T Q w_b) ) Q .
The matrix A is updated to
A ← A + Q_+ w_b ( w^T - w_b^T A ) ,
and Q is replaced by Q_+.
4. Determine the Gramian G as follows:
(a) construct the 2M × 6 matrix C of Eq. (14), whose two rows for frame m are
z(x_m, x_m)^T - z(y_m, y_m)^T   and   z(x_m, y_m)^T ,
where z(a, b) = ( a_1 b_1, a_1 b_2 + a_2 b_1, a_1 b_3 + a_3 b_1, a_2 b_2, a_2 b_3 + a_3 b_2, a_3 b_3 )^T
and x_m, y_m are the centered image measurements of the basis points in frame m;
(b) solve the homogeneous system C h = 0 (e.g., as the right singular vector of C associated
with the smallest singular value), which yields the distinct entries of the symmetric matrix H. Compute G as the inverse
of H .
This is the complete acquisition algorithm. When new images become available, the criteria
(12) and (13) can be used to decide whether the new images depict the same object as the old ones.
In order to compute a depth map P from the similarity-invariant representation (A; G), another
optional step is added to the algorithm:
5. (optional) Take the Cholesky factorization G = T^T T of the matrix G, and let T be the transformation
of the affine shape matrix A into an orthonormal basis. Namely:
P = T A .
5 Experiments
We first illustrate the ideas presented in this paper with some experiments performed on a real
image sequence (Section 5.1).
We applied our algorithm, including the depth computation, to two other sequences of images,
originally taken by Rakesh Kumar and Harpreet Singh Sawhney at UMASS-Amherst. The data was
provided by J. Inigo Thomas from UMass, who also provided the solution to the correspondence
problem (namely, a list of the coordinates of the tracked points in all the frames).
For comparison, we received the 3D coordinates of the points in the first frame as ground truth.
We used the algorithm described in [11] to compute the optimal similarity transformation between
the invariant depth map representation computed by our algorithm (step 5), and the given data in
the coordinate system of the first frame. We applied the transformation to our depth reconstruction
to obtain z est at each point, and compared this output with the ground truth data z real . We report
the relative error at each point, namely (z_est - z_real) / z_real.
We evaluated the affine shape reconstruction separately. We computed the optimal affine transformation
between the invariant affine representation computed by our algorithm (matrix A computed
in step 3), and the given depth data in the coordinate system of the first frame. We applied
the transformation to the affine shape representation to obtain z_est^aff at each point, and compared
this output with the ground truth data z_real.
5.1 The Ping Pong Ball:
Figure 1: The first frame of the ping pong ball sequence.
A ping pong ball with clearly visible marks was rotated in front of a Sony camera. Thirty frames
were used in these experiments, and the ball rotated 2 degrees per frame. Fig. 1 shows the first
frame of the sequence. A number of frames (five in some cases, fifteen in others) were used for the
acquisition of the similarity-invariant representation. The quadratic and linear matching criteria
(12) and (13) were then applied to each frame of the entire sequence, old frames and new alike. In
every experiment, the two criteria were also applied to a random sequence. The ratio between the
value of the criterion applied to the actual sequence over that for the random sequence was used
as a performance measure: if the ratio is very small, the algorithm discriminates well between the
"true" object and other "false" comparison objects.
The tracker described in [29] was used to both select and track features from frame to frame.
Only those features that were visible throughout the thirty frames were used in these experiments.
Ninety points on the ball satisfied this requirement. Fig. 2a shows the trajectories of those 90
points. Fig. 2b shows the trajectories of points used as basis and origin. More specifically, the
dashed trajectory is that of point 1, used for centering the measurement matrix. The three solid
trajectories (two of them overlap) correspond to the basis points found by the basis selection
procedure. The three dotted trajectories are a very poor choice, corresponding to three points that
happen to be close to coplanar with the origin. The condition number of the good basis is about
19, that of the poorly selected basis is about 300.
Figure 2: (a) The ninety tracks automatically selected and tracked through thirty frames. (b) The dashed track
was used as the origin; the three solid tracks are the ones chosen by the selection algorithm, while the three dotted
tracks yield a poor basis, since the corresponding points in space are almost coplanar with the origin.
Fig. 3 shows the ratios described above for the quadratic criterion (12) and for the linear criterion
(13), respectively. In particular, the dotted curves correspond to the dotted basis of Fig. 2b, and
Figure 3: Discrimination ratio (true/random sequence) for the quadratic (left) and linear (right) criterion, plotted
in logarithmic scale. The dotted curves use the poor basis, the solid curves use the good basis.
the solid curves correspond to the solid basis.
For the poor basis (dotted curves in Fig. 3), only the first five frames were used for acquisition.
Both the quadratic and the linear criterion functions are good (small) for those five frames, and then
deteriorate gradually for new unfamiliar views (frames 5-30). In particular, the quadratic criterion
becomes basically useless after frame 23 or so, that is, after more than 30 degrees of rotation away
from the familiar views. In fact, the value that the criterion returns for a random sequence is about
the same as the one it returns for the actual object. Up to frame 15 or so, however, the quadratic
criterion has at least a 10:1 discrimination ratio. Similar considerations, but with different numbers
and a more erratic trend, hold for the linear criterion.
The situation changes substantially with the good basis chosen by the selection algorithm. The
trajectories are the solid lines in Fig. 2b, and the corresponding plots are the solid curves in Fig. 3.
For the linear criterion, the situation is undoubtedly improved: the discrimination ratio becomes
an order of magnitude better in most cases. The results with the quadratic criterion are less
crisp. However, the effect on unfamiliar views are clearly beneficial, and make the criterion useful
throughout the sixty degrees of visual angle spanned by the sequence.
Fig. 4 shows the effect of changing the number or distribution of frames used for acquisition.
The good basis is used for all the curves. The solid curves, labeled '1-5', are the same as the
solid curves in Fig. 3: the first five frames were used for acquisition. The dashed curves, labeled
'1-15', are the result of using the first fifteen frames for acquisition. For the quadratic criterion,
the results are at some points two orders of magnitude better, and consistently better throughout
the sequence. The bad effect of moving away from familiar viewing angles, however, is even more
marked: the discrimination ratio is small (good) for the fifteen frames used for acquisition, but
increases rapidly for unfamiliar frames. Even there, however, the value of the quadratic criterion
Figure 4: Discrimination ratio (true/random sequence) for the quadratic (left) and linear (right) criteria, plotted
in logarithmic scale. The solid curve uses the first five frames for acquisition, the dashed curve uses the first fifteen
frames, the dotted curve uses five frames spread throughout the sequence.
is more than ten times smaller for the real object than it is for a random sequence (that is, the
discrimination ratio is smaller than 0.1).
The dotted lines in Fig. 4 ('1,6,...') use again five frames during acquisition, but the five frames
are spread throughout the viewing positions. Specifically, frames 1,6,11,16,21 were used. Not
surprisingly, the results are typically worse over the fifteen frames used for the dashed curve, but
they are much better for unfamiliar views. In other words, interpolating between unfamiliar views
is better than extrapolating from them. Similar considerations hold again for the linear criterion.
5.2 Box sequence:
This sequence includes 8 images of a rectangular chequered box rotating around a fixed axis (one
frame is shown in Fig. 5). 40 corner-like points on the box were tracked. The depth values of the
points in the first frame ranged from 550 to 700 mm; therefore, weak perspective provided a good
approximation to this sequence. (See a more detailed description of the sequence in [25] Fig. 5, or
[15] Fig. 2.)
We compared the relative errors of our algorithm to the errors reported in [25]. Three results
were reported in [25] and copied to Table 1: column "Rot." - depth computation with their
algorithm, which assumes perspective projection and rotational motion only; column "2-frm" -
depth computation using the algorithm described in [12], which uses 2-frames only; and column
"2-frm, Ave." - depth computation using the 2-frames algorithm, where the depth estimates were
averaged over six pairs of frames. Table 1 summarizes these results, as well as the results using our affine algorithm (column "Aff. Invar.") and similarity algorithm (column "Rigid Invar.").
Figure 5: One frame from the box sequence.
Pt.   Pose    Rigid Invar.    Aff. Invar.    Rot.         2-frm       2-frm, Ave.
4.3 624.9 0.5
4.3 639.6 0.3
9     709.7   708.8  -0.1     706.5  -0.5    700.7  -1.3  744.8  5.0  714.4  0.7
ave.          0.27%           0.23%          0.86%        4.4%        0.35%
Table 1: Comparison of the relative errors in depth computation using our algorithm (rigid and affine shape separately), with two other algorithms. The average of the absolute value of the relative errors is listed at the bottom for each algorithm.
5.3 Room sequence
This sequence, which was used in the 1991 motion workshop, includes 16 images of a robotics
laboratory, obtained by rotating a robot arm; 120 points were tracked. The depth
values of the points in the first frame ranged from 13 to 33 feet, therefore weak perspective does not
provide a good approximation to this sequence. Moreover, a wide-lens camera was used, causing
distortions at the periphery which were not compensated for. (See a more detailed description of
a similar sequence in [25] Fig. 4, or [15] Fig. 3.)
Table 2 summarizes the results of our invariant algorithm for the last 8 points. Due to the
noise in the data and the large perspective distortions, not all the frames were consistent with rigid
motion. (Namely, when all the frames were used, the computed Gramian was not positive-definite).
We therefore used only the last 8 frames from the available frames.
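The rigid-motion consistency test mentioned above amounts to checking whether the computed Gramian is positive-definite. The short sketch below is only an illustration of that check (the function name and tolerance are ours, not the paper's):

```python
import numpy as np

def gramian_is_positive_definite(G, tol=1e-10):
    """Return True if the (symmetrized) Gramian has strictly positive
    eigenvalues, i.e., the frames are consistent with rigid motion."""
    G = 0.5 * (G + G.T)                  # guard against numerical asymmetry
    return bool(np.all(np.linalg.eigvalsh(G) > tol))
```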
Pt.   Pose    Rigid Invar.    Aff. Invar.
9     21.6    21.5  -0.5      22.3  2.9
7.3 21.8 4.1
ave.          8.4%            2.9%
Table 2: The relative errors in depth computation using our invariant algorithm, for affine and rigid shape.
We compared in Table 3 the average relative error of the results of our algorithm to the average
relative error of a random set of 3D points, aligned to the ground truth data with the optimal
similarity or affine transformation.
Rigid Invar.   Rigid random   Aff. Invar.   Aff. random
8.4%           27.6%          2.9%          23.3%
Table 3: The mean relative errors in depth computation.
5.4 Discussion
Not surprisingly, our results (Section 5.3 in particular) show that affine shape can be recovered
more reliably than depth. We expect this to be the case since the computation of affine shape
does not require knowledge of the aspect-ratio of the camera, and since it does not require the
computation of the square root of the Gramian matrix G.
The results reported in Section 5.1 illustrate the importance of a good basis selection, the
improved performance when more frames are used for the model acquisition, and the improved
performance when the frames used for acquisition are well chosen (namely, spaced further apart).
The only advantage of the simultaneous analysis of more than five points was the availability of a
better (more independent) basis. A comparison with a SVD based algorithm for the computation of
the decomposition in Eq. (7), which was described in [30], showed a comparable average performance
of both our simple linear algorithm and the SVD based algorithm.
In our simulations of a recognition operator (Fig. 4), we see a marked difference between interpolation
and extrapolation. More specifically, the discrimination ratio between images of the
model and images of random objects is lower (and therefore better) for images that lie between
images used for model acquisition (interpolation) than for images that lie outside the range of images
used for model acquisition (extrapolation). We also see an average lower discrimination ratio
for images used for model acquisition than for other images. This replicates the performance of
human subjects in similar recognition tasks [4]. This demonstration is of particular interest since
the results with human subjects were used to conclude that humans do not build an object-centered
3D representation, which is exactly what our algorithm is doing, but rather use a viewer-centered
2D representation (namely, storing a collection of 2D views of the object).
The sequence discussed in Section 5.2 was taken at a relatively large distance between the camera
and the object (the depth values of the points varied from 550 to 700 mm). The weak perspective
assumption therefore gave a good approximation. This sequence is typical of a recognition task.
Under these conditions, which lend themselves favorably to the weak perspective approximation,
our algorithm clearly performs very well.
The excellent results of all the algorithms with the box sequence are due in part to the reliable
data, which was obtained by the particular tracking method described in [25]. Given this reliable
correspondence, our algorithm gave the best results, although it relies on the weak perspective
approximation. When compared with the other two algorithms, our algorithm is more efficient in
its time complexity, it is simpler to implement, and it does not make any assumption on the type
of motion (namely, it does not use the knowledge that the motion is rotational). A perspective
projection algorithm should be used, however, if the scale of the object, or its actual distance from
the camera, are sought.
The sequence discussed in Section 5.3 had very large perspective distortions (the depth values of
the points varied from 13 to 33 feet). Moreover, the sequence was obtained with a wide-lens camera,
which led to distortions in the image coordinates of points at the periphery. This sequence is more
typical of a navigation task. Under these conditions, which do not lend themselves favorably to the weak perspective approximation, our algorithm is not as accurate. The accuracy is still sufficient for tasks which require only relative depth (e.g., obstacle avoidance), or for a less precise reconstruction of the
environment. Note, however, that even algorithms which use the perspective projection model do
not necessarily perform better with such sequences (compare with the results for a similar sequence
reported in [25]).
In this last sequence, the computation of invariant shape using 8 frames or 16 frames led
to rather similar results for the affine shape matrix and the Gramian matrix. However, in the
second case the computed Gramian matrix was not positive-definite, and therefore we could not
compute depth. This demonstrates how the computation of depth is more sensitive to errors
than the computation of the similarity invariant representation. For the same reason, the affine
reconstruction was an order of magnitude closer to the ground truth values than a set of random
points, whereas the depth reconstruction had an average error only 3 times smaller than a set of
random points.
6 Summary and Conclusions
We described a simple linear algorithm to compute similarity-invariant shape from motion, requiring
only the closed-form solution of an over-determined linear system of equations. Unlike most
algorithms, our algorithm computes shape without computing explicit depth or the transformation
between images. Depth can be optionally obtained by computing the square root of a 3 \Theta 3 matrix.
Like [33] and unlike most algorithms, it can be implemented in an incremental way, updating the
results with additional data without storing all the previous data. Finally, the algorithm is guaranteed
to converge as the number of images grows, as long as the distribution of the noise in the images (including errors due to perspectivity) approaches a Gaussian distribution with mean value 0.
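The optional depth-recovery step mentioned above requires the square root of a 3 x 3 positive-definite matrix. The following sketch (ours, purely illustrative) computes such a square root via an eigendecomposition:

```python
import numpy as np

def symmetric_sqrt(G):
    """Square root of a symmetric positive-definite matrix:
    G = V diag(w) V^T  =>  G^(1/2) = V diag(sqrt(w)) V^T."""
    w, V = np.linalg.eigh(G)
    if np.any(w <= 0):
        raise ValueError("matrix is not positive-definite")
    return V @ np.diag(np.sqrt(w)) @ V.T
```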
Our analysis shows that by computing an invariant representation, rather than depth, and by
ignoring the transformation of the camera between different frames (or camera calibration), the
problem becomes simpler. The non-invariant quantities (such as depth) can be later computed
from the invariant representation, but they need not always be computed, e.g., they need not be
computed at recognition. Thus computing an invariant representation directly promises to save
computation time and to increase robustness.
Acknowledgements
We thank J. Inigo Thomas, who gave us two of the real data sets.
--R
Perspective approximations.
Recursive 3-d motion estimation from a monocular image sequence
Psychophysical support for a 2D interpolation theory of object recognition.
On rank-revealing QR factorizations
A direct data approximation based motion estimation algorithm.
What can be seen in three dimensions with an uncalibrated stereo rig?
Motion displacement estimation using an affine model for matching.
A study of affine matching with bounded sensor error.
Matrix Computations.
Relative orientation.
Visual perception of three-dimensional motion
Direct computation of the focus of expansion.
Sensitivity of the pose refinement problem to accurate estimation of camera parameters.
Affine structure from motion.
Processing translational motion sequences.
A computer algorithm for reconstructing a scene from two projections.
Affine invariant model-based object recognition
Stochastic Models
Kalman filter-based algorithms for estimating depth from image sequences
Computer tracking of objects moving in space.
Visual tracking with deformation models.
Description and reconstruction from image trajectories of rotational motion.
Optimal motion estimation.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces
A rational algebraic formulation of the problem of relative orientation.
Shape and motion from image streams: a factorization method - 3
Shape and motion from image streams under orthography: a factorization method.
Recognition by linear combinations of models.
The Interpretation of Visual Motion.
Maximizing rigidity: the incremental recovery of 3D structure from rigid and rubbery motion.
--TR
--CTR
Z. Sun , A. M. Tekalp , N. Navab , V. Ramesh, Interactive Optimization of 3D Shape and 2D Correspondence Using Multiple Geometric Constraints via POCS, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.4, p.562-569, April 2002
Stphane Christy , Radu Horaud, Euclidean Shape and Motion from Multiple Perspective Views by Affine Iterations, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.11, p.1098-1104, November 1996
Y. Pang , M. L. Yuan , A. Y. C. Nee , S. K. Ong , Kamal Youcef-Toumi, A markerless registration method for augmented reality based on affine properties, Proceedings of the 7th Australasian User interface conference, p.25-32, January 16-19, 2006, Hobart, Australia
Levente Hajder , Dmitry Chetverikov, Weak-perspective structure from motion for strongly contaminated data, Pattern Recognition Letters, v.27 n.14, p.1581-1589, 15 October 2006
Stefan Carlsson , Daphna Weinshall, Dual Computation of Projective Shape and Camera Positions from Multiple Images, International Journal of Computer Vision, v.27 n.3, p.227-241, May 1, 1998
Yakup Genc , Jean Ponce, Image-Based Rendering Using Parameterized Image Varieties, International Journal of Computer Vision, v.41 n.3, p.143-170, February/March 2001
Fred Rothganger , Svetlana Lazebnik , Cordelia Schmid , Jean Ponce, 3D Object Modeling and Recognition Using Local Affine-Invariant Image Descriptors and Multi-View Spatial Constraints, International Journal of Computer Vision, v.66 n.3, p.231-259, March 2006 | structure from motion;linear reconstruction;weak perspective;gramian;factorization method;affine coordinates;affine shape;euclidean shape |
628687 | Modal Matching for Correspondence and Recognition. | AbstractModal matching is a new method for establishing correspondences and computing canonical descriptions. The method is based on the idea of describing objects in terms of generalized symmetries, as defined by each objects eigenmodes. The resulting modal description is used for object recognition and categorization, where shape similarities are expressed as the amounts of modal deformation energy needed to align the two objects. In general, modes provide a global-to-local ordering of shape deformation and thus allow for selecting which types of deformations are used in object alignment and comparison. In contrast to previous techniques, which required correspondence to be computed with an initial or prototype shape, modal matching utilizes a new type of finite element formulation that allows for an objects eigenmodes to be computed directly from available image information. This improved formulation provides greater generality and accuracy, and is applicable to data of any dimensionality. Correspondence results with 2D contour and point feature data are shown, and recognition experiments with 2D images of hand tools and airplanes are described. | Introduction
A key problem in machine vision is how to describe fea-
tures, contours, surfaces, and volumes so that they can
be recognized and matched from view to view. The primary
difficulties are that object descriptions are sensitive
to noise, that an object can be nonrigid, and that an ob-
ject's appearance deforms as the viewing geometry changes.
These problems have motivated the use of deformable models
[6;7;9;14;17;22;34;36;37], to interpolate, smooth, and
warp raw data.
Deformable models do not by themselves provide a
method of computing canonical descriptions for recogni-
tion, or of establishing correspondence between sets of
data. To address the recognition problem we proposed
a method of representing shapes as canonical deformations
from some prototype object [18;22]. By describing
object shape in terms of the eigenvectors of the prototype
object's stiffness matrix, it was possible to obtain
a robust, frequency-ordered shape description. Moreover,
these eigenvectors or modes provide an intuitive method
for shape description because they correspond to the ob-
ject's generalized axes of symmetry. By representing objects
in terms of modal deformations we developed robust methods for 3-D shape modeling, object recognition, and 3-D tracking utilizing point, contour, 3-D, and optical flow data [18;20;22].
Manuscript submitted August 1, 1993. Revised October 3, 1994. This research was done at the MIT Media Lab and was funded by British Telecom.
S. Sclaroff is with the Computer Science Department, Boston University, 111 Cummington St., Boston, MA 02215.
A. P. Pentland is with the Media Laboratory, Massachusetts Institute of Technology, Cambridge, MA 02139.
However this method still did not address the problem of
determining correspondence between sets of data, or between
data and models. This was because every object had
to be described as deformations from a single prototype ob-
ject. This implicitly imposed an a priori parameterization
upon the sensor data, and therefore implicitly determined
the correspondences between data and the prototype.
In this paper we generalize our earlier method by obtaining
the modal shape invariants directly from the sensor
data. This will allow us to compute robust, canonical
descriptions for recognition and to solve correspondence
problems for data of any dimensionality. For the purposes
of illustration, we will give a detailed mathematical formulation
for 2D problems, and demonstrate it on gray-scale
image and point feature data. The extension to data of
other dimensionality is described in a technical report [28].
To illustrate the use of this method for object recognition
and category classification, we will present an example of
recognizing and categorizing images of hand tools.
II. The Basic Idea
Imagine that we are given two sets of image feature
points, and that our goal is to determine if they are from
two similar objects. The most common approach to this
problem is to try to find distinctive local features that can
be matched reliably; this fails because there is insufficient
local information, and because viewpoint and deformation
changes can radically alter local feature appearance.
An alternate approach is to first determine a body-centered
coordinate frame for each object, and then attempt
to match up the feature points. Once we have the
points described in intrinsic or body-centered coordinates
rather than Cartesian coordinates, it is easy to match up
the bottom-right, top-left, etc. points between the two objects.
Many methods for finding a body-centered frame have
been suggested, including moment-of-inertia methods,
symmetry finders, and polar Fourier descriptors (for a review
see [1]). These methods generally suffer from three
difficulties: sampling error, parameterization error, and
non-uniqueness. The main contribution of this paper is
a new method for computation of a local coordinate frame
that largely avoids these three difficulties.
Sampling error is the best understood of the three. Everyone
in vision knows that which features you see and
their location can change drastically from view to view.
The most common solution to this problem is to only use
global statistics such as moments-of-inertia; however, such
methods offer a weak and partial solution at best.
Parameterization error is more subtle. The problem is
that when (for instance) fitting a deformable sphere to 3-D
measurements one implicitly imposes a radial coordinate
system on the data rather than letting the data determine
the correct coordinate system. Consequently, the resulting
description is strongly affected by, for instance, the
compressive and shearing distortions typical of perspective.
The number of papers on the topic of skew symmetry is indicative
of the seriousness of this problem.
Non-uniqueness is an obvious problem for recognition
and matching, but one that is all too often ignored in the
rush to get some sort of stable description. Virtually all
spline, thin-plate, and polynomial methods suffer from this
inability to obtain canonical descriptions; this problem is
due to the fact that in general, the parameters for these surfaces can be arbitrarily defined, and are therefore not invariant to changes in viewpoint, occlusion, or nonrigid deformations.
Our solution to these problems has three parts:
1. We compute a shape description that is robust with
respect to sampling by using Galerkin interpolation,
which is the mathematical underpinning of the finite
element method (FEM).
2. We introduce a new type of Galerkin interpolant based
on Gaussians that allows us to efficiently derive our
shape parameterization directly from the data.
3. We then use the eigenmodes of this shape description
to obtain a canonical, frequency-ordered orthogonal
coordinate system. This coordinate system may be
thought of as the shape's generalized symmetry axes.
By describing feature point locations in this body-centered
coordinate system, it is easy to match corresponding
points, and to measure the similarity of different objects.
This allows us to recognize objects, and to determine if
different objects are related by simple physical transformations.
A flow-chart of our method is shown in Figure 1. For
each image we start with the feature point locations and use these as nodes in building a finite element model of the shape. We can think of this as constructing
a model of the shape by covering each feature
point with a Gaussian blob of rubbery material; if we have
segmentation information, then we can fill in interior areas
and trim away material that extends outside of the shape.
We then compute the eigenmodes (eigenvectors) \phi_i of the finite element model. The eigenmodes provide an orthogonal frequency-ordered description of the shape and its natural deformations. They are sometimes referred to as mode shape vectors since they describe how each mode
deforms the shape by displacing the original feature loca-
these are FEM nodes build physical model compute eigenmodes
Input: features
Output: strongest
feature correspondences
determine FEM mass
and stiffness matrices
determine FEM mass
and stiffness matrices
solve generalized
eigenproblem
solve generalized
eigenproblem
find correspondence in
generalized feature space
match low-order nonrigid
modes f for both shapes
these are FEM nodes
Input: features
use the matched f
as coordinate system
Fig. 1. System diagram.
(a)
(b)
(c)
(d)
Fig. 2. Similar shapes have similar low order modes. This figure shows the
first five low-order eigenmodes for similar tree shapes: (a) prototypical,
(b) stretched, (c) tilted, and (d) two middle branches stretched.
tions, i.e.,
where a is a scalar.
The first three eigenmodes are the rigid body modes of
translation and rotation, and the rest are nonrigid modes.
The nonrigid modes are ordered by increasing frequency of
vibration; in general, low-frequency modes describe global
deformations, while higher-frequency modes describe more
localized shape deformations. This global-to-local ordering
of shape deformation will prove very useful for shape
matching and comparison.
The eigenmodes also form an orthogonal object-centered
coordinate system for describing feature locations. That
is, each feature point location can be uniquely described in
terms of how it moves within each eigenmode. The transform
between Cartesian feature locations and modal feature
locations is accomplished by using the FEM eigenvectors
as a coordinate basis. In our technique, two groups of
features are compared in this eigenspace. The important
idea here is that the low-order modes computed for two
similar objects will be very similar - even in the presence
of affine deformation, nonrigid deformation, local shape
perturbation, or noise.
To demonstrate this, Figure 2 shows a few of the
low-order nonrigid modes computed for four related tree
shapes: (a) upright, (b) stretched, (c) tilted, and (d) two middle branches stretched. Each row in the figure shows the original shape in gray, and its low-order mode shapes are overlaid in black outline. By looking down a column of this figure, we can see how a particular low-order eigenmode corresponds nicely for the related shapes. This eigenmode similarity allows us to match the feature locations on one object with those of another despite sometimes large differences in shape.
Fig. 3. Computing correspondences in modal signature space. Given two similar shapes, correspondences are found by comparing the direction of displacement at each node (shown by vectors in the figure). For instance, the top points on the two trees (a, b) have very similar displacement signatures, while the bottom point (shown in c) has a very different displacement signature. Using this property, we can reliably compute correspondence affinities in this modal signature space.
Using this property, feature correspondences are found
via modal matching. The concept of modal matching is
demonstrated on the two similar tree shapes in Figure 3.
Correspondences are found by comparing the direction of
displacement at each node. The direction of displacement
is shown by vectors in the figure. For instance, the top points
on the two trees in Figure 2(a, b) have very similar displacements
across a number of low-order modes, while the
bottom point (shown in Figure 2(c)) has a very different
displacement signature. Good matches have similar displacement
signatures, and so the system matches the top
points on the two trees.
Point correspondences between two shapes can be reliably
determined by comparing their trajectories in this
modal space. In the implementation described in this pa-
per, points that have the most similar unambiguous coordinates
are matched via modal matching, with the remaining
correspondences determined by using the physical model as
a smoothness constraint. Currently, the algorithm has the
limitation that it cannot reliably match largely occluded
or partial objects.
Finally, given correspondences between many of the feature
points on two objects, we can measure their difference
in shape. Because the modal framework decomposes deformations
into an orthogonal set, we can selectively measure
rigid-body differences, or low-order projective-like defor-
mations, or deformations that are primarily local. Con-
sequently, we can recognize objects in a very flexible and
general manner.
Alternatively, given correspondences we can align or
warp one shape into another. Such alignment is useful for
fusing data from different sensors, or for comparing data
acquired at different times or under different conditions.
It is also useful in computer graphics, where the warping
of one shape to another is known as "morphing." In current
computer graphics applications the correspondences
are typically determined by hand [4;31;44].
III. Background and Notation
A. Eigen-Representations
In the last few years there has been a revival of interest
in pattern recognition methods, due to the surprisingly
good results that have been obtained by combining
these methods with modern machine vision represen-
tations. Using these approaches researchers have built systems
that perform stable, interactive-time recognition of
faces [39], cars [16], and biological structures [6;19], and that allow
interactive-time tracking of complex and deformable
objects [5;8;20;38].
Typically, these methods employ eigen-decompositions
like the modal decomposition or any of a family of methods
descended from the Karhunen-Loève transform. Some
are feature-based eigenshapes [3;8;26;27;30;32], others are
physically-based eigensnakes [5;6;19;22;27], and still others
are based on (preprocessed) image intensity information,
eigenpictures [11;15;16;21;38;39].
In these methods, image or shape information is decomposed
into an ordered basis of orthogonal principal compo-
nents. As a result, the less critical and often noisy high-order
components can be discarded in order to obtain over-
constrained, canonical descriptions. This allows for the selection
of only the most important components to be used
for efficient data reduction, real-time recognition and nav-
igation, and robust reconstruction. Most importantly, the
orthogonality of eigen-representations ensures that the recovered
descriptions will be unique, thus making recognition
problems tractable.
Modal matching, the new method described in this pa-
per, utilizes the eigenvectors of a physically-based shape
representation, and is therefore most closely related to
eigenshapes and eigensnakes. At the core of all of these
techniques is a positive definite matrix that describes the
connectedness between features. By finding the eigenvectors
of this matrix, we can obtain a new, generalized coordinate
system for describing the location of feature points.
One such matrix, the proximity matrix, is closely related
to classic potential theory and describes Gaussian-weighted
distances between point data. Scott and Longuet-Higgins
[30] showed that the eigenvectors of this matrix can be
used to determine correspondences between two sets of
points. This coordinate system is invariant to rotation,
and somewhat robust to small deformations and noise. A
substantially improved version of this approach was developed
by Shapiro and Brady [32;33]. Similar methods have
been applied to the problem of weighted graph matching by
Umeyama [41], and for Gestalt-like clustering of dot stimuli
by van Oeffelen and Vos [42]. Unfortunately, proximity
methods are not information preserving, and therefore cannot
be used to interpolate intermediate deformations or to
obtain canonical descriptions for recognition.
In a different approach, Samal and Iyengar [26] enhanced
the generalized Hough transform (GHT) by computing the
Karhunen-Loève transform for a set of binary edge images
for a general population of shapes in the same family. The
family of shapes is then represented by its significant eigen-
shapes, and a reference table is built and used for a Hough-
like shape detection algorithm. This makes it possible for
the GHT to represent a somewhat wider variation (defor-
mation) in shapes, but as with the GHT, their technique
cannot deal very well with rotations, and it has the disadvantage
that it computes the eigenshapes from binary edge
data.
Cootes, et al. [3;8] introduced a chord-based method
for capturing the invariant properties of a class of shapes,
based on the idea of finding the principal variations of a
snake. Their point distribution model (PDM) relies on representing
objects as sets of labeled points, and examines
the statistics of the variation over the training set. A co-variance
matrix is built that describes the displacement of
model points along chords from the prototype's centroid.
The eigenvectors are computed for this covariance matrix,
and then a few of the most significant components are used
as deformation control knobs for the snake. Unfortunately,
this method relies on the consistent sampling and hand-labeling
of point features across the entire training set and
cannot handle large rotations.
Each of these previous approaches is based directly on
the sampled feature points. When different feature points
are present in different views, or if there are different sampling
densities in different views, then the shape matrix for
the two views will differ even if the object's pose and shape
are identical. In addition, these methods cannot incorporate
information about feature connectivity or distinctive-
ness; data are treated as clouds of identical points. Most
importantly, none of these approaches can handle large deformations
unless feature correspondences are given.
To get around these problems, we propose a formulation
that uses the finite element technique of Galerkin surface
approximation to avoid sampling problems and to incorporate
outside information such as feature connectivity
and distinctiveness. The eigenvectors of the resulting matrices
can be used both for describing deformations and
for finding feature correspondences. The previous work in
physically-based correspondence is described briefly in the
next section.
B. Physically-Based Correspondence and Shape
Comparison
Correspondence has previously been formulated as an
equilibrium problem, which has the attractive feature of allowing
integration of physical constraints [18;22;20;37;36].
To accomplish this, we first imagine that the collection of
feature points in one image is attached by springs to an
elastic body. Under the load exerted by these springs, the
elastic body will deform to match the shape outlined by the
set of feature points. If we repeat this procedure in each
image, we can obtain a feature-to-feature correspondence
by noting which points project to corresponding locations
on the two elastic bodies.
If we formulate this equilibrium problem in terms of
the eigenvectors of the elastic body's stiffness matrix, then
closed-form solutions are available [18]. In addition, high-frequency
eigenvectors can be discarded to obtain overcon-
strained, canonical descriptions of the equilibrium solution.
These descriptions have proven useful for object recognition
[22] and tracking [20].
The most common numerical approach for solving equilibrium
problems of this sort is the finite element method.
The major advantage of the finite element method is that
it uses the Galerkin method of surface interpolation. This
provides an analytic characterization of shape and elastic
properties over the whole surface, rather than just at
the nodes [2] (nodes are typically the spring attachment
points). The ability to integrate material properties over
the whole surface alleviates problems caused by irregular
sampling of feature points. It also allows variation of the
elastic body's properties in order to weigh reliable features
more than noisy ones, or to express a priori constraints on
size, orientation, smoothness, etc. The following section
will describe this approach in some detail.
C. Finite Element Method
Using Galerkin's method for finite element discretiza-
tion, we can set up a system of shape functions that relate
the displacement of a single point to the relative displacements
of all the other nodes of an object. This set of shape
functions describes an isoparametric finite element. By using
these functions, we can calculate the deformations that
spread uniformly over the body as a function of its constitutive
parameters.
In general, the polynomial shape function for each element is written in vector form as
u(x) = H(x) U ,
where H is the interpolation matrix, x is the local coordinate of a point in the element where we want to know the displacement, and U denotes a vector of displacement components at each element node.
For most applications it is necessary to calculate the strain due to deformation. Strain \epsilon is defined as the ratio of displacement to the actual length, or simply the relative change in length. The polynomial shape functions can be used to calculate the strains (\epsilon) over the body provided the displacements at the node points are known. Using this fact we can now obtain the corresponding element strains:
\epsilon(x) = B(x) U ,
where B is the strain displacement matrix. The rows of B are obtained by appropriately differentiating and combining rows of the element interpolation matrix H.
As mentioned earlier, we need to solve the problem of deforming
an elastic body to match the set of feature points.
This requires solving the dynamic equilibrium equation:
M \ddot{U} + D \dot{U} + K U = R ,
where R is the load vector whose entries are the spring forces between each feature point and the body surface, and where M, D, and K are the element mass, damping, and stiffness matrices, respectively.
Both the mass and stiffness matrices are computed directly by integrating over the body:
M = \int_V \rho H^T H dV    and    K = \int_V B^T C B dV ,
where \rho is the mass density, and C is the material matrix that expresses the material's particular stress-strain law.
If we assume Rayleigh damping, then the damping matrix is simply a linear combination of the mass and stiffness matrices:
D = \alpha M + \beta K ,
where \alpha and \beta are constants determined by the desired critical damping [2].
D. Mode Superposition Analysis
This system of equations can be decoupled by posing the equations in a basis defined by the M-orthonormalized eigenvectors of M^{-1} K. These eigenvectors and values are the solution (\phi_i, \omega_i^2) to the following generalized eigenvalue problem:
K \phi_i = \omega_i^2 M \phi_i .
The vector \phi_i is called the ith mode shape vector and \omega_i is the corresponding frequency of vibration.
The mode shapes can be thought of as describing the object's generalized (nonlinear) axes of symmetry. We can write Equation 7 as
K \Phi = M \Phi \Omega^2 ,
where \Phi = [ \phi_1 | \phi_2 | ... ] is the matrix of mode shape vectors and \Omega^2 is the diagonal matrix of eigenvalues \omega_i^2. As mentioned earlier, each mode shape vector \phi_i is M-orthonormal; this means that
\Phi^T K \Phi = \Omega^2    and    \Phi^T M \Phi = I .
This generalized coordinate transform \Phi is then used to transform between nodal point displacements U and decoupled modal displacements \tilde{U}:
U = \Phi \tilde{U} .
We can now rewrite Equation 4 in terms of these generalized or modal displacements, obtaining a decoupled system of equations:
\ddot{\tilde{U}} + \tilde{D} \dot{\tilde{U}} + \Omega^2 \tilde{U} = \Phi^T R ,
where \tilde{D} is the diagonal modal damping matrix.
By decoupling these equations, we allow for closed-form
solution to the equilibrium problem [22]. Given this equilibrium
solution in the two images, point correspondences
can be obtained directly.
By discarding high frequency eigenmodes the amount
of computation required can be minimized without significantly
altering correspondence accuracy. Moreover, such
a set of modal amplitudes provides a robust, canonical description
of shape in terms of deformations applied to the
original elastic body. This allows them to be used directly
for object recognition [22].
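In practice the mode computation above is a symmetric-definite generalized eigenproblem. The sketch below is a minimal SciPy illustration (not the authors' implementation); it assumes the matrices K and M have already been assembled:

```python
import numpy as np
from scipy.linalg import eigh

def compute_modes(K, M):
    """Solve K phi_i = omega_i^2 M phi_i for the M-orthonormalized mode
    shapes.  Returns eigenvalues omega^2 (ascending) and the modal matrix
    Phi whose columns satisfy Phi^T M Phi = I."""
    omega2, Phi = eigh(K, M)            # generalized symmetric eigenproblem
    order = np.argsort(omega2)          # low-frequency (global) modes first
    return omega2[order], Phi[:, order]
```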
IV. A New Formulation
Perhaps the major limitation of previous methods is that
the procedure of attaching virtual springs between data
points and the surface of the deformable object implicitly
imposes a standard parameterization on the data. We
would like to avoid this as much as possible, by letting the data determine the parameterization in a natural manner.
To accomplish this we will use the data itself to define
the deformable object, by building stiffness and mass matrices
that use the positions of image feature points as the
finite element nodes. We will first develop a finite element
formulation using Gaussian basis functions as Galerkin in-
terpolants, and then use these interpolants to obtain generalized
mass and stiffness matrices.
Intuitively, the interpolation functions provide us with a
smoothed version of the feature points, in which areas between
close-by feature points are filled in with a virtual material
that has mass and elastic properties. The filling-in or
smoothing of the cloud of feature points provides resistance
to feature noise and missing features. The interpolation
functions also allow us to place greater importance on distinctive
or important features, and to discount unreliable or
unimportant features. This sort of emphasis/de-emphasis
is accomplished by varying the "material properties" of the
virtual material between feature points.
A. Gaussian Interpolants
Given a collection of m sample points x i from an im-
age, we need to build appropriate stiffness and mass ma-
trices. The first step towards this goal is to choose a set
of interpolation functions from which we can derive H and B matrices. We require a set of continuous interpolation functions h_i such that:
1. their value is unity at node i and zero at all other nodes, and
2. \sum_i h_i(x) = 1.0 at any point on the object.
In a typical finite element solution for engineering, Hermite
or Lagrange polynomial interpolation functions are used
[2]. Stiffness and mass matrices K and M are precomputed
for a simple, rectangular isoparametric element, and
then this simple element is repeatedly warped and copied
to tessellate the region of interest. This assemblage technique
has the advantage that simple stiffness and mass matrices
can be precomputed and easily assembled into large
matrices that model topologically complex shapes.
Our problem is different in that we want to examine the
eigenmodes of a cloud of feature points. It is akin to the
problem found in interpolation networks: we have a fixed
number of scattered measurements and we want to find
a set of basis functions that allows for easy insertion and
movement of data points. Moreover, since the position
of nodal points will coincide with feature and/or sample
points from our image, stiffness and mass matrices will need
to be built on a per-feature-group basis. Gaussian basis
functions are ideal candidates for this type of interpolation
problem [23;24]:
g_i(x) = e^{ -\|x - x_i\|^2 / (2 \sigma^2) } ,
where x_i is the function's n-dimensional center, and \sigma its standard deviation.
We will build our interpolation functions h_i as the sum of m basis functions, one per data point:
h_i(x) = \sum_{k=1}^{m} a_{ik} g_k(x) ,   (14)
where the a_{ik} are coefficients that satisfy the requirements outlined above. The matrix of interpolation coefficients A = [a_{ik}] can be solved for by inverting a matrix of the form
G = [ g_k(x_l) ] ,   (15)
the m by m matrix whose (k, l) element is the k th basis function evaluated at the l th feature point.
By using these Gaussian interpolants as our shape functions
for Galerkin approximation, we can easily formulate
finite elements for any dimension. A very useful aspect of
Gaussians is that they are factorizable: multidimensional
interpolants can be assembled out of lower dimensional
Gaussians. This not only reduces computational cost, it
also has useful implications for VLSI hardware and neural-network
implementations [23].
Note that these sum-of-Gaussians interpolants are non-
conforming, i.e., they do not satisfy condition (2) above.
As a consequence the interpolation of stress and strain between
nodes is not energy conserving. Normally this is of
no consequence for a vision application; indeed, most of
the finite element formulations used in vision research are
similarly nonconforming [37]. If a conforming element is
desired, this can be obtained by including a normalization
term in h_i in Equation 14:
h_i(x) = ( \sum_k a_{ik} g_k(x) ) / ( \sum_j \sum_k a_{jk} g_k(x) ) .   (16)
In this paper we will use the simpler, non-conforming in-
terpolants, primarily because the integrals for mass and
stiffness can be computed analytically. The differences between
conforming and nonconforming interpolants do not
affect the results reported in this paper.
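As a concrete illustration of the interpolants above, the following sketch (ours; the function names are illustrative) builds the Gaussian basis matrix G of Eq. 15, inverts it to obtain the coefficients a_ik, and evaluates the resulting non-conforming interpolants h_i at arbitrary points:

```python
import numpy as np

def gaussian_basis_matrix(X, sigma):
    """G[k, l] = g_k(x_l) = exp(-||x_l - x_k||^2 / (2 sigma^2)),
    for feature points X of shape (m, 2)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    return np.exp(-d2 / (2.0 * sigma ** 2))

def interpolation_coefficients(X, sigma):
    """A[i, k] = a_ik, chosen so that h_i(x_j) = delta_ij (A = G^-1)."""
    return np.linalg.inv(gaussian_basis_matrix(X, sigma))

def evaluate_interpolants(Xq, X, A, sigma):
    """Evaluate h_i(x) = sum_k a_ik g_k(x) at query points Xq (q, 2);
    returns a (q, m) array whose column i is h_i."""
    d2 = np.sum((Xq[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    Gq = np.exp(-d2 / (2.0 * sigma ** 2))   # Gq[p, k] = g_k(x_p)
    return Gq @ A.T
```

Evaluating `evaluate_interpolants(X, X, A, sigma)` returns the identity matrix, which is condition (1); condition (2) holds only approximately, which is why these interpolants are non-conforming.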
B. Formulating a 2-D Mass Matrix
For the sake of illustration we will now give the mathematical
details for a two dimensional implementation. We
begin by assembling a 2-D interpolation matrix from the shape functions developed above:
H(x) = [ h_1(x) ... h_m(x)   0 ... 0 ;   0 ... 0   h_1(x) ... h_m(x) ] ,   (17)
so that the nodal displacements are stacked as U = [u_1, ..., u_m, v_1, ..., v_m]^T.
Substituting into Equation 5 and multiplying out, we obtain a mass matrix for the feature data:
M = \rho \int_A H^T H dx dy = [ M_aa  M_ab ; M_ba  M_bb ] ,   (18)
where the m by m submatrices M_aa and M_bb are positive definite symmetric, and M_ab = M_ba = 0. The elements of M_aa have the form
(M_aa)_{ij} = \rho \sum_{k,l} a_{ik} a_{jl} \int\int g_k(x) g_l(x) dx dy .   (19)
We then integrate and regroup terms:
(M_aa)_{ij} = \pi \sigma^2 \rho \sum_{k,l} a_{ik} a_{jl} \sqrt{g_{kl}} ,   (20)
where g_{kl} = g_k(x_l) is an element of the G matrix in Equation 15.
This can be rewritten in matrix form:
M_aa = M_bb = \pi \sigma^2 \rho A \sqrt{G} A^T ,   (21)
where A = [a_{ik}] and the elements of \sqrt{G} are the square roots of the elements of the G matrix in Equation 15.
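A minimal numpy sketch of this closed form, as reconstructed in Eqs. (19)-(21) above (the pi sigma^2 rho factor follows from integrating the product of two Gaussians over the plane); treat it as illustrative rather than as the paper's code:

```python
import numpy as np

def mass_submatrix(X, sigma, rho=1.0):
    """M_aa (= M_bb): (M_aa)_ij = pi sigma^2 rho sum_kl a_ik a_jl sqrt(g_kl)."""
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    G = np.exp(-d2 / (2.0 * sigma ** 2))     # g_kl = g_k(x_l)
    A = np.linalg.inv(G)                     # interpolation coefficients a_ik
    return np.pi * sigma ** 2 * rho * (A @ np.sqrt(G) @ A.T)
```

The full 2m x 2m mass matrix is then block diagonal, with M_aa appearing in both the upper-left and lower-right blocks.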
C. Formulating a 2-D Stiffness Matrix
To obtain a 2-D stiffness matrix K we need to compute a stress-strain interpolation matrix B and material matrix C. For our two dimensional problem, B is a (3 \times 2m) matrix obtained by differentiating the interpolation functions:
B(x) = [ \partial h_1/\partial x ... \partial h_m/\partial x   0 ... 0 ;   0 ... 0   \partial h_1/\partial y ... \partial h_m/\partial y ;   \partial h_1/\partial y ... \partial h_m/\partial y   \partial h_1/\partial x ... \partial h_m/\partial x ] ,
and the general form of the material matrix C for a plane strain element is a symmetric 3 \times 3 matrix whose nonzero entries are the constants \alpha, \beta, and \lambda. This matrix embodies an isotropic material, where the constants \alpha, \beta, and \lambda are functions of the material's modulus of elasticity E and Poisson ratio \nu.
Substituting into Equation 5 and multiplying out, we obtain a stiffness matrix for the 2-D feature data:
K = \int_A B^T C B dx dy = [ K_aa  K_ab ; K_ba  K_bb ] ,
where each m by m submatrix is positive semi-definite symmetric, and K_ba = K_ab^T. The elements of K_aa have the form
(K_aa)_{ij} = \sum_{k,l} a_{ik} a_{jl} \int\int [ \beta (\partial g_k/\partial x)(\partial g_l/\partial x) + \lambda (\partial g_k/\partial y)(\partial g_l/\partial y) ] dx dy .
Integrating and regrouping terms, as for the mass matrix, yields a closed-form expression in terms of \sqrt{g_{kl}}, \sigma, and the separations x_{kl} = x_k - x_l and y_{kl} = y_k - y_l.
Similarly, the elements of K_bb have the form
(K_bb)_{ij} = \sum_{k,l} a_{ik} a_{jl} \int\int [ \beta (\partial g_k/\partial y)(\partial g_l/\partial y) + \lambda (\partial g_k/\partial x)(\partial g_l/\partial x) ] dx dy .
Finally, the elements of K_ab have the form
(K_ab)_{ij} = \sum_{k,l} a_{ik} a_{jl} \int\int [ \alpha (\partial g_k/\partial x)(\partial g_l/\partial y) + \lambda (\partial g_k/\partial y)(\partial g_l/\partial x) ] dx dy .
When integrated, this becomes a closed-form expression proportional to \sum_{k,l} a_{ik} a_{jl} x_{kl} y_{kl} \sqrt{g_{kl}} .
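Because the exact closed-form stiffness expressions are only partially recoverable from the garbled text above, the sketch below instead assembles K = \int_A B^T C B dA by simple numerical quadrature over a padded bounding box, using a standard isotropic plane-strain material matrix built from E and nu. It is an illustrative stand-in for the analytic formulas, with the DOF ordering [u_1..u_m, v_1..v_m] assumed:

```python
import numpy as np

def plane_strain_C(E, nu):
    """Standard isotropic plane-strain material matrix (one common choice)."""
    c = E / ((1.0 + nu) * (1.0 - 2.0 * nu))
    return np.array([[c * (1.0 - nu), c * nu,         0.0],
                     [c * nu,         c * (1.0 - nu), 0.0],
                     [0.0,            0.0,            E / (2.0 * (1.0 + nu))]])

def stiffness_matrix(X, sigma, E=1.0, nu=0.3, grid=80, pad=3.0):
    """K = integral_A B^T C B dA, approximated by quadrature on a grid."""
    m = X.shape[0]
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    A = np.linalg.inv(np.exp(-d2 / (2.0 * sigma ** 2)))   # coefficients a_ik
    C = plane_strain_C(E, nu)
    lo, hi = X.min(axis=0) - pad * sigma, X.max(axis=0) + pad * sigma
    xs = np.linspace(lo[0], hi[0], grid)
    ys = np.linspace(lo[1], hi[1], grid)
    dA = (xs[1] - xs[0]) * (ys[1] - ys[0])
    K = np.zeros((2 * m, 2 * m))
    for x in xs:
        for y in ys:
            d = np.array([x, y]) - X                       # offsets to centers
            g = np.exp(-np.sum(d ** 2, axis=1) / (2.0 * sigma ** 2))
            hx = A @ (-(d[:, 0] / sigma ** 2) * g)         # dh_i/dx
            hy = A @ (-(d[:, 1] / sigma ** 2) * g)         # dh_i/dy
            B = np.zeros((3, 2 * m))
            B[0, :m], B[1, m:] = hx, hy
            B[2, :m], B[2, m:] = hy, hx
            K += (B.T @ C @ B) * dA
    return K
```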
V. Determining Correspondences
To determine correspondences, we first compute mass and stiffness matrices for both feature sets. These matrices are then decomposed into eigenvectors \phi_i and eigenvalues \omega_i^2 as described in Section III.D. The resulting eigenvectors are ordered by increasing eigenvalue, and form the columns of the modal matrix
\Phi = [ \phi_1 | \phi_2 | ... | \phi_{2m} ] ,
where m is the number of nodes used to build the finite element model. The column vector \phi_i is called the i th mode shape, and describes the modal displacement (u, v) at each feature point due to the i th mode, while the rows of \Phi associated with a given node are called that feature's generalized feature vector, and together describe the feature's location in the modal coordinate system.
Modal matrices \Phi 1 and \Phi 2 are built for both images.
Correspondences can now be computed by comparing mode
shape vectors for the two sets of features; we will characterize
each nodal point by its relative participation in several
eigenmodes. Before actually describing how this matching
is performed, it is important to consider which and how
many of these eigenmodes should be incorporated into our
feature comparisons.
A. Modal Truncation
For various reasons, we must select a subset of mode
shape vectors (column vectors \phi_i) before computing corre-
spondences. The most obvious reason for this is that the
number of eigenvectors and eigenvalues computed for the
source and target images will probably not be the same.
This is because the number of feature points in each image
will almost always differ. To make the dimensionalities of
the two generalized feature spaces the same, we will need to
truncate the number of columns at a given dimensionality.
Typically, we retain only the lowest-frequency 25% of the
columns of each mode matrix, in part because the higher-frequency
modes are the ones most sensitive to noise. Another
reason for discarding higher-frequency modes is to
make our shape comparisons less sensitive to local shape
variations.
We will also want to discard columns associated with the
rigid-body modes. Recall that the columns of the modal
matrix are ordered in terms of increasing eigenvalue. For
a two-dimensional problem, the first three eigenmodes will
represent the rigid body modes of two translations and a
rotation. These first three columns of each modal matrix
are therefore discarded to make the correspondence computation
invariant to differences in rotation and translation.
In summary, this truncation breaks the generalized eigenspace into three groups of modes,
[ rigid-body modes | intermediate modes | high-order modes ] ,
for each image, where m and n are the number of features in each image. We keep only those columns that represent the intermediate eigenmodes; thus, the truncated generalized feature spaces of the two images will have the same dimension.
We now have a set of mode-truncated feature vectors [ \bar{u}_i , \bar{v}_i ], one per node, where the two row vectors \bar{u}_i and \bar{v}_i store the displacement signature for the i th node point in truncated mode space. The vector \bar{u}_i contains the x, and \bar{v}_i contains the y, displacements associated with each of the retained modes.
It is sometimes the case that a couple of eigenmodes
have nearly equal eigenvalues. This is especially true for
the low-order eigenmodes of symmetric shapes and shapes
whose aspect ratio is nearly equal to one. In our current
system, such eigenmodes are excluded from the correspondence
computation because they would require the matching
of eigenmode subspaces.
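The truncation step can be summarized in a few lines (our own illustration): drop the rigid-body columns, keep the lowest-frequency 25% of the modes, and force both shapes to the same truncated dimension.

```python
import numpy as np

def truncate_modes(Phi1, Phi2, n_rigid=3, keep_fraction=0.25):
    """Return mode-truncated modal matrices of equal width for two shapes.
    Columns are assumed sorted by increasing eigenvalue."""
    k1 = int(round(keep_fraction * Phi1.shape[1]))
    k2 = int(round(keep_fraction * Phi2.shape[1]))
    k = max(1, min(k1, k2))                   # common truncated dimension
    return (Phi1[:, n_rigid:n_rigid + k],
            Phi2[:, n_rigid:n_rigid + k])
```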
B. Computing Correspondence Affinities
Using a modified version of an algorithm described by
Shapiro and Brady [33], we now compute what are referred
to as the affinities z ij between the two sets of generalized
feature vectors. These are stored in an affinity matrix Z, with
z_{ij} = \| [ \bar{u}_{1,i} , \bar{v}_{1,i} ] - [ \bar{u}_{2,j} , \bar{v}_{2,j} ] \|^2 ,
i.e., the squared distance between the i th generalized feature vector of the first shape and the j th generalized feature vector of the second.
The affinity measure for the i th and j th points, z ij , will
be zero for a perfect match and will increase as the match
worsens. Using these affinity measures, we can easily identify
which features correspond to each other in the two
images by looking for the minimum entry in each column
or row of Z. Shapiro and Brady noted that the symmetry
of an eigendecomposition requires an intermediate sign correction
step for the eigenvectors \phi_i. This is due to the fact
that the direction (sign) of eigenvectors can be assigned
arbitrarily. Readers are referred to [32] for more details
about this.
To obtain accurate correspondences the Shapiro and
Brady method requires three simple, but important, mod-
ifications. First, only the generalized features that match
with the greatest certainty are used to determine the de-
formation; the remainder of the correspondences are determined
by the deformation itself as in our previous method.
By discarding affinities greater than a certain threshold, we
allow for tokens that have no strong match. Second, as described
earlier, only the low-order twenty-five percent of the
eigenvectors are employed, as the higher-order modes are
known to be noise-sensitive and thus unstable [2]. Lastly,
because of the reduced basis matching, similarity of the
generalized features is required in both directions, instead
of one direction only. In other words, a match between the i th feature in the first image and the j th feature in the second
image can only be valid if z ij is the minimum value for
its row, and z ji the minimum for its column. Image points
for which there was no correspondence found are flagged
accordingly.
In cases with low sampling densities or with large defor-
mations, the mode ordering can vary slightly. Such cases
require an extra step in which neighborhoods of similarly-
valued modes are compared to find the best match.
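The affinity computation and the mutual-best-match rule described above can be sketched as follows (our illustration; the affinity here is the squared Euclidean distance between truncated generalized feature vectors, and the threshold is left to the caller):

```python
import numpy as np

def affinity_matrix(F1, F2):
    """Z[i, j] = squared distance between the i-th generalized feature vector
    of shape 1 and the j-th of shape 2 (rows of the truncated modal data)."""
    diff = F1[:, None, :] - F2[None, :, :]
    return np.sum(diff ** 2, axis=-1)

def strongest_matches(Z, threshold):
    """Keep (i, j) only if z_ij is the minimum of both its row and its column
    and falls below the threshold; unmatched points are simply omitted."""
    matches = []
    for i in range(Z.shape[0]):
        j = int(np.argmin(Z[i]))
        if Z[i, j] < threshold and int(np.argmin(Z[:, j])) == i:
            matches.append((i, j))
    return matches
```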
C. Coping with Large Rotations
As described so far, our affinity matrix computation
method works best when there is little difference in the
orientation between images. This is due to the fact that
the modal displacements are described as vectors (u; v) in
image space. When the aligning rotation for two sets of
features is potentially large, the affinity calculation can be
made rotation invariant by transforming the mode shape
vectors into a polar coordinate system. In two dimensions,
each mode shape vector consists of a modal displacement u_i = (u_i, v_i) at each of the m nodes. To obtain rotation invariance, we must transform each (u, v) component into a coordinate in (r, \theta) space, as shown in Figure 4. The angle \theta is computed relative to the vector n from the object's centroid c to the nodal point x. The radius r is simply the magnitude of the displacement vector u.
Fig. 4. Transforming a modal displacement vector into polar coordinates. The angle \theta is computed relative to the vector n from the object's centroid c to the nodal point x. The radius r is simply the length of u.
Once each mode shape vector has been transformed into
this polar coordinate system, we can compute feature affinities
as was described in the previous section. In our experiments, however, we have found that it is often more effective to compute affinities using either just the r components or just the \theta components, i.e., to restrict the distance computation to one of the two polar coordinates. In general, the r components are scaled uniformly based on the ratio between the object's overall scale and the Gaussian basis function radius \sigma. The \theta components, on the other hand, are immune to differences in scale, and therefore a distance metric based on \theta offers the advantage of scale invariance.
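The polar transformation of Fig. 4 can be sketched as below (our own illustration, assuming the DOF ordering [u_1..u_m, v_1..v_m] in the mode shape vectors); the returned theta values can then be used in the affinity computation in place of (u, v):

```python
import numpy as np

def polar_mode_signature(Phi_trunc, X):
    """Convert each node's modal displacement (u, v) into (r, theta), where r
    is the displacement magnitude and theta is measured relative to the
    direction from the object's centroid c to the node (Fig. 4)."""
    m = X.shape[0]
    c = X.mean(axis=0)
    n = X - c                                    # centroid-to-node directions
    n_angle = np.arctan2(n[:, 1], n[:, 0])
    U = Phi_trunc[:m, :]                         # x-components of each mode
    V = Phi_trunc[m:, :]                         # y-components of each mode
    r = np.hypot(U, V)
    theta = np.arctan2(V, U) - n_angle[:, None]  # angle relative to n
    theta = (theta + np.pi) % (2 * np.pi) - np.pi
    return r, theta
```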
D. Multiresolution Models
When there are possibly hundreds of feature points for
each shape, computing the FEM model and eigenmodes
for the full feature set can become non-interactive. For ef-
ficiency, we can select a subset of the feature data to build
a lower-resolution finite element model and then use the resulting
eigenmodes in finding the higher-resolution feature
correspondences. The procedure for this is as follows.
First, a subset of m feature points is selected to be finite
element nodes. This subset can be a set of particularly
salient features (i.e., corners, T-junctions, and edge
mid-points) or a randomly selected subset of (roughly)
uniformly-spaced features. As before, a FEM model is
built for each shape, eigenmodes are obtained, and modal
truncation is performed as described in Section V.A. The
resulting eigenmodes are then matched and sign-corrected
using the lower-resolution models' affinity matrix.
With modes matched for the feature subsets, we now
proceed to finding the correspondences for the full sets of
features. To do this, we utilize interpolated modal matrices, which describe each mode's shape for the full set of features:
\bar{\Phi} = \bar{H} \Phi .
The interpolation matrix \bar{H} relates the displacements at the nodes (low-resolution features) to displacements at the higher-resolution feature locations x_i; it is formed by stacking the submatrices H(x_i), where each H(x_i) is a 2 \times 2m interpolation matrix as in Eq. 17.
Fig. 5. Two flat tree shapes, one upright and one lying flat (a), together with the obtained correspondence (b). The low-order modes were computed for each tree and then correspondences were determined using the algorithm described in the text.
Finally, an affinity matrix for the full feature set is computed
using the interpolated modal matrices, and correspondences
are determined as described in the previous
sections.
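The interpolation of low-resolution modes to the full feature set can be sketched as follows (illustrative only; it rebuilds the Gaussian interpolants of Section IV.A and assumes the [u_1..u_m, v_1..v_m] ordering):

```python
import numpy as np

def interpolate_modes(Phi_low, X_low, X_full, sigma):
    """Evaluate each low-resolution mode shape at the full feature set using
    the Gaussian interpolants h_i (the matrix H-bar described above)."""
    m = X_low.shape[0]
    d2 = np.sum((X_low[:, None, :] - X_low[None, :, :]) ** 2, axis=-1)
    A = np.linalg.inv(np.exp(-d2 / (2.0 * sigma ** 2)))   # a_ik
    dq2 = np.sum((X_full[:, None, :] - X_low[None, :, :]) ** 2, axis=-1)
    Hq = np.exp(-dq2 / (2.0 * sigma ** 2)) @ A.T           # h_i at full features
    U = Hq @ Phi_low[:m, :]                                # interpolated x parts
    V = Hq @ Phi_low[m:, :]                                # interpolated y parts
    return np.vstack([U, V])
```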
VI. Correspondence Experiments
In this section we will first illustrate the method on a few
classic problems, and then demonstrate its performance
on real imagery. In each example the feature points are
treated independently; no connectivity or distinctiveness
information was employed. Thus the input to the algorithm
is a cloud of feature points, not a contour or 2-D
form. The mass and stiffness matrices were then computed,
and the M-orthonormalized eigenvectors determined. Fi-
nally, correspondences were obtained as described above.
The left-hand side of Figure 5(a) shows two views of
a flat, tree-like shape, an example illustrating the idea of
skewed symmetry adapted from [13]. The first modes
were computed for both trees, and were compared to obtain
the correspondences shown in Figure 5(b). The fact that
the two figures have similar low-order symmetries (eigen-
vectors) allows us to recognize that two shapes are closely
related, and to easily establish the point correspondences.
Figure 6 shows another classic example [25]. Here we
have pear shapes with various sorts of bumps and spikes.
Roughly 300 points were sampled regularly along the contour
of each pear's silhouette. Correspondences were then
computed using the first modes. Because of the large
number of data points, only two percent of the correspondences
are shown. As can be seen from the figure, reasonable correspondences were found.
Figure 7(a) illustrates a more complex correspondence
example, using real image data. Despite the differences
between these two hands, the low-order descriptions are
quite similar and consequently a very good correspondence
Fig. 6. Correspondence obtained for bumpy, warty, and prickly pears.
Roughly 300 silhouette points were matched from each pear. Because of
the large number of data points, only two percent of the correspondences
are shown in this figure.
Fig. 7. (a) Two hand images, (b) correspondences between silhouette
points, (c),(d) correspondences after digital surgery. Roughly 400 points
were sampled from each hand silhouette. Correspondences were computed
for all points using the first 32 modes. For clarity, only correspondences
for key points are shown in this figure.
Fig. 8. Correspondence obtained for outlines of different types of air-
planes. The first example shows the correspondences found for different
(rotated in 3D) views of the same fighter plane. The others show
matches between increasingly different airplanes. In the final case, the
wing position of the two planes is quite different. As a consequence, the
best-matching correspondence has the Piper Cub flipped end-to-end, so
that the two planes have more similar before-wing and after-wing fuselage
lengths. Despite this overall symmetry error, the remainder of the correspondence
appears quite accurate. Roughly 150 silhouette points were
matched from each plane. Because of the large number of data points,
only critical correspondences are shown in this figure.
is obtained, as shown in Figure 7(b). Roughly 400 points
were sampled from each hand silhouette. Correspondences
were computed for all points using the first 32 modes. As
in the previous example, only two percent of the correspondences
are shown.
Figures
7(c) and (d) show the same hand data after digital
surgery. In Figure 7(c), the little finger was almost
completely removed; despite this, a nearly perfect correspondence
was maintained. In Figure 7(d), the second finger
was removed. In this case a good correspondence was
still obtained, but not the most natural given our knowledge
of human bone structure.
The next example, Figure 8, uses outlines of three different
types of airplanes as seen from a variety of different
viewpoints (adapted from [45]). In the first three cases
the descriptions generated are quite similar, and as a consequence
a very good correspondence is obtained. Again,
only two percent of the correspondences are shown.
In the last pair, the wing position of the two planes is
quite different. As a result, the best-matching correspondence
has the Piper Cub flipped end-to-end, so that the
two planes have more similar before-wing and after-wing
fuselage lengths. Despite this overall symmetry error, the
remainder of the correspondence appears quite accurate.
Our final example is adapted from [40] and utilizes multi-resolution
modal matching to efficiently find correspondences
for a large number of feature points. Figure 9 shows
the edges extracted from images of two different cars taken
from varying viewpoints. Figure 9(a) depicts a view of a
Volkswagen Beetle (rotated 15 o from side view) and Figure
9(b) depicts two different views of a Saab. If we take each edge pixel to be a feature, then each
car has well over 1000 feature points.
As described in Section V.D, when there are a large number
of feature points, modal models are first built from
Fig. 9. Finding correspondence for one view of a Volkswagen (a) and a
two views of a Saab (b) taken from [40]. Each car has well over 1000
edge points. Note that both silhouette and interior points can be used
in building the model. As described in the text, when there are a large
number of feature points, modal models are first built from a uniform sub-sampling
of the features as is shown in (c,d). In this example, roughly 35
points were used in building the finite element models. Given the modes
computed for this lower-resolution model, we can use modal matching to
compute feature matches for the higher-resolution. Correspondences between
similar viewpoints of the VW and Saab are shown in (e), while in (f)
a different viewpoint is matched (the viewpoints differ by
of the large number of data points, only a few of the correspondences are
shown in this figure.
a roughly uniform sub-sampling of the features. Figures
9(c) and 9(d) show the subsets of roughly 35 points each
that were used in building the finite element models.
Both silhouette and interior points were used in building
the model.
The modes computed for the lower-resolution models
were then used as input to an interpolated modal matching
which paired off the corresponding higher-resolution
features. Some of the strongest corresponding features for
two similar views of the VW and Saab are shown in 9(e).
The resulting correspondences are reasonable despite moderate
differences in the overall shape of the cars. Due to the
large number of feature points, only a few of the strongest
correspondences are shown in this figure.
In Figure 9(f), the viewpoints differ by a considerably
larger angle; the resulting correspondences are still quite
reasonable, but this example begins to push the limits of
the matching algorithm. There are one or two spurious matches; e.g., a
headlight is matched to a sidewall. We expect that performance
could be improved if information about intensity,
color, or feature distinctiveness were included in our model.
VII. Object Alignment, Comparison and
Description
An important benefit of our technique is that the eigenmodes
computed for the correspondence algorithm can also
be used to describe the rigid and non-rigid deformation
needed to align one object with another. Once this modal
description has been computed, we can compare shapes
simply by looking at their mode amplitudes or - since
the underlying model is a physical one - we can compute
and compare the amount of deformation energy needed to
align an object, and use this as a similarity measure. If the
modal displacements or strain energy required to align two
feature sets is relatively small, then the objects are very
similar.
Recall that for a two-dimensional problem, the first three
modes are the rigid body modes of translation and rotation,
and the rest are nonrigid modes. The nonrigid modes are
ordered by increasing frequency of vibration; in general,
low-frequency modes describe global deformations, while
higher-frequency modes describe more localized shape de-
formations. Such a global-to-local ordering of shape deformation
allows us to select which types of deformations are
to be compared.
For instance, it may be desirable to make object comparisons
rotation, position, and/or scale independent. To
do this, we ignore displacements in the low-order or rigid
body modes, thereby disregarding differences in position,
orientation, and scale. In addition, we can make our comparisons
robust to noise and local shape variations by discarding
higher-order modes. As will be seen later, this
modal selection technique is also useful for its compact-
ness, since we can describe deviation from a prototype in
terms of relatively few modes.
But before we can actually compare two sets of features,
we first need to recover the modal deformations ~
U that deform
the matched points on one object to their corresponding
positions on a prototype object. A number of different
methods for recovering the modal deformation parameters
are described in the next section.
A. Recovering Deformations
We want to describe the deformation parameters ~
U that
take the set of points from the first image to the corresponding
points in the second. Given that \Phi 1 and \Phi 2
have been computed, and that correspondences have been
established, then we can solve for the modal displacements
directly. This is done by noting that the nodal displacements
U that align corresponding features on both shapes
can be written:
u_i = x_{2,i} - x_{1,i}     (39)
where x_{1,i} is the i-th node on the first shape and x_{2,i} is its
matching node on the second shape.
Recalling that U = \Phi \tilde{U}, and using the identity of Equation
10, we can solve for the modal displacements \tilde{U} directly via a
matrix multiply (Eq. 40).
Normally there is not one-to-one correspondence between
the features. In the more typical case where the recovery
is underconstrained, we would like unmatched nodes
to move in a manner consistent with the material properties
and the forces at the matched nodes. This type of
solution can be obtained in a number of ways.
In the first approach, we are given the nodal displacements
u i at the matched nodes, and we set the loads r i at
unmatched nodes to zero. We can then solve the equilibrium
we have as many knowns
as unknowns. Modal displacements are then obtained via
Eq. 40. This approach yields a closed-form solution, but
we have assumed that forces at the unmatched nodes are
zero.
By adding a strain-energy minimization constraint, we
can avoid this assumption. The strain energy can be measured
directly in terms of modal displacements, and enforces
a penalty that is proportional to the squared vibration
frequency associated with each mode:
E_{strain} = \tilde{U}^T \Omega^2 \tilde{U} = \sum_i \omega_i^2 \tilde{u}_i^2     (41)
where \Omega^2 is the diagonal matrix of squared modal frequencies \omega_i^2.
Since rigid body modes ideally introduce no strain, it is
logical that their frequencies, and hence their contributions to (41), are zero.
We can now formulate a constrained least squares solu-
tion, where we minimize an alignment error that includes this
modal strain energy term:
E(\tilde{U}) = (U - \Phi \tilde{U})^T (U - \Phi \tilde{U}) + \tilde{U}^T \Omega^2 \tilde{U}     (42)
where the first term is the squared fitting error and the
second term is the strain energy.
This strain term directly parallels the smoothness functional
employed in regularization [35].
Differentiating with respect to the modal parameter vector
yields the strain-minimizing least squares equation:
\tilde{U} = [ \Phi^T \Phi + \Omega^2 ]^{-1} \Phi^T U     (43)
Thus we can exploit the underlying physical model to enforce
certain geometric constraints in a least squares solu-
tion. The strain energy measure allows us to incorporate
some prior knowledge about how stretchy the shape is, how
much it resists compression, etc. Using this extra knowl-
edge, we can infer what "reasonable" displacements would
be at unmatched feature points.
Since the modal matching algorithm computes the
strength for each matched feature, we would also like to
utilize these match-strengths directly in alignment. This is
achieved by including a diagonal weighting matrix:
\tilde{U} = [ \Phi^T W \Phi + \Omega^2 ]^{-1} \Phi^T W U     (44)
The diagonal entries of W are inversely proportional to
the affinity measure for each feature match. The entries
for unmatched features are set to zero.
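For concreteness, the following Python sketch illustrates this strain-regularized,
match-weighted recovery of modal amplitudes. It assumes the reconstructed forms of
Eqs. 41-44 given above; the names (`Phi`, `omega_sq`, `U`, `w`) are illustrative and
not notation taken from the original implementation.

```python
import numpy as np

def recover_modal_amplitudes(Phi, omega_sq, U, w):
    """Strain-regularized, weighted least-squares recovery of modal amplitudes.

    Phi      : (n, p) matrix of eigenmodes (columns are modes).
    omega_sq : (p,) squared vibration frequencies (zero for rigid-body modes).
    U        : (n,) nodal displacements, e.g. u_i = x2_i - x1_i for matched
               nodes and 0 for unmatched ones.
    w        : (n,) non-negative weights; large for strong matches,
               zero for unmatched features.
    """
    W = np.diag(w)
    Omega2 = np.diag(omega_sq)
    # Normal equations of the weighted, strain-penalized least squares
    # (cf. the reconstructed Eqs. 43-44): [Phi^T W Phi + Omega^2] U~ = Phi^T W U
    A = Phi.T @ W @ Phi + Omega2
    b = Phi.T @ (w * U)
    return np.linalg.solve(A, b)

def strain_energy(u_tilde, omega_sq):
    """Modal strain energy (cf. the reconstructed Eq. 41)."""
    return float(np.sum(omega_sq * u_tilde**2))
```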
B. Dynamic Solution: Morphing
So far, we have described methods for finding the modal
displacements that directly deform and align two feature
sets. It is also possible to solve the alignment problem by
physical simulation, in which the finite element equations
are integrated over time until equilibrium is achieved. In
this case, we solve for the deformations at each time step
via the dynamic equation (Eq. 12). In so doing, we compute
the intermediate deformations in a manner consistent
with the material properties that were built into the finite
element model. The intermediate deformations can also be
used for physically-based morphing.
When solving the dynamic equation, we use features of
one image to exert forces that pull on the features of the
other image. The dynamic loads R(t) at the finite element
nodes are therefore proportional to the distance between
matched features:
r_i(t) = k [ x_{2,i} - (x_{1,i} + u_i(t)) ]     (45)
where k is an overall stiffness constant and u_i(t) is the
nodal displacement at the previous time step. These loads
simulate "ratchet springs," which are successively tightened
until the surface matches the data [10].
The modal dynamic equilibrium equation can be written
as a system of 2m independent equations of the form:
\ddot{\tilde{u}}_i(t) + \tilde{c}_i \dot{\tilde{u}}_i(t) + \omega_i^2 \tilde{u}_i(t) = \tilde{r}_i(t)     (46)
where the \tilde{r}_i(t) are components of the transformed load
vector \tilde{R}(t) = \Phi^T R(t).
These independent equilibrium
equations can be solved via an iterative numerical integration
procedure (e.g., Newmark method [2]). The system is
integrated forward in time until the change in load energy
goes below a threshold. The loads r i (t) are updated at
each time step by evaluating Equation 45.
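A minimal sketch of this dynamic (morphing) solution is given below. It integrates the
decoupled modal equations with simple semi-implicit Euler steps rather than the Newmark
method, and it assumes the reconstructed forms of Eqs. 45-46 above; the function and
parameter names (`ratchet` stiffness `k`, `damping`, `dt`) are assumptions for
illustration only.

```python
import numpy as np

def morph_by_modal_dynamics(Phi, omega_sq, x1, x2, matched,
                            k=1.0, damping=1.0, dt=0.05,
                            max_steps=2000, tol=1e-6):
    """Integrate decoupled modal equations until the load energy settles.

    Phi, omega_sq : modes and squared frequencies of the first shape's model.
    x1, x2        : (n,) stacked nodal coordinates of the two shapes.
    matched       : boolean mask, True where a correspondence exists.
    Returns the history of nodal displacements (one entry per step),
    which can be used as in-between shapes for morphing.
    """
    p = Phi.shape[1]
    u_t = np.zeros(p)          # modal amplitudes
    v_t = np.zeros(p)          # modal velocities
    history, prev_energy = [], np.inf
    for _ in range(max_steps):
        U = Phi @ u_t                                  # nodal displacements
        R = np.where(matched, k * (x2 - (x1 + U)), 0)  # "ratchet spring" loads (cf. Eq. 45)
        R_tilde = Phi.T @ R                            # transformed loads
        acc = R_tilde - damping * v_t - omega_sq * u_t # decoupled dynamics (cf. Eq. 46)
        v_t += dt * acc
        u_t += dt * v_t
        history.append(Phi @ u_t)
        energy = float(R @ R)
        if abs(prev_energy - energy) < tol:
            break
        prev_energy = energy
    return history
```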
C. Coping with Large Rotations
If the rotation needed to align the two sets of points is
potentially large, then it is necessary to perform an initial
alignment step before recovering the modal deforma-
tions. Orientation, position, and (if desired) scale can be
recovered in closed-form via quaternion-based algorithms
described by Horn [12] or by Wang and Jepson [43]. 1
Using only a few of the strongest feature correspondences
(recall that strong matches have relatively small values in
the affinity matrix Z) the rigid body modes can be solved
for directly. The resulting additional alignment parameters
are:
a position vector, a unit quaternion q defining the orientation,
a scale factor s, and the centroids c_1 and c_2 of the two objects.
Since this initial orientation calculation is based on only the
strongest matches, these are usually a very good estimate
of the rigid body parameters.
1 While all the examples reported here are two-dimensional, it was decided
that for generality, a 3-D orientation recovery method would be
employed. For 2-D orientation recovery problems, simply set z coordinates
to zero.
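As an illustration of this initial alignment step, the following sketch recovers
rotation, translation, and scale in closed form using a quaternion-based construction
in the spirit of Horn [12]. It is a generic implementation under the stated
assumptions, not code from the paper, and the helper names are hypothetical.

```python
import numpy as np

def absolute_orientation(p1, p2):
    """Closed-form similarity alignment of matched 3-D point sets.

    p1, p2 : (n, 3) arrays of corresponding points (strongest matches).
    Returns (R, t, s, c1, c2) such that p2 is approximated by
    s * (p1 - c1) @ R.T + c2; the rotation comes from the unit quaternion
    given by the principal eigenvector of the 4x4 matrix N.
    """
    c1, c2 = p1.mean(axis=0), p2.mean(axis=0)
    a, b = p1 - c1, p2 - c2
    S = a.T @ b                                   # 3x3 cross-covariance
    # Symmetric 4x4 matrix whose principal eigenvector is the quaternion.
    tr = np.trace(S)
    D = S - S.T
    delta = np.array([D[1, 2], D[2, 0], D[0, 1]])
    N = np.zeros((4, 4))
    N[0, 0] = tr
    N[0, 1:] = delta
    N[1:, 0] = delta
    N[1:, 1:] = S + S.T - tr * np.eye(3)
    w, V = np.linalg.eigh(N)
    qw, qx, qy, qz = V[:, -1]                     # unit quaternion (w, x, y, z)
    R = np.array([
        [1 - 2*(qy*qy + qz*qz), 2*(qx*qy - qz*qw),     2*(qx*qz + qy*qw)],
        [2*(qx*qy + qz*qw),     1 - 2*(qx*qx + qz*qz), 2*(qy*qz - qx*qw)],
        [2*(qx*qz - qy*qw),     2*(qy*qz + qx*qw),     1 - 2*(qx*qx + qy*qy)],
    ])
    s = np.sum(b * (a @ R.T)) / np.sum(a * a)     # least-squares scale
    t = c2 - s * R @ c1
    return R, t, s, c1, c2
```

For 2-D problems, as noted in the footnote above, the z coordinates are simply set to zero.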
The objects can now be further aligned by recovering
the modal deformations ~
U as described previously. As be-
fore, we compute virtual loads that deform the features in
the first image towards their corresponding positions in the
second image. Since we have introduced an additional ro-
tation, translation, and scale, Equation 39 will be modified
so as to measure distances between features in the correct
coordinate frame: the matched points x_{2,i} are inverse
transformed by the recovered scale, translation, and the
rotation matrix R computed from the unit quaternion q
before the displacements are taken.
Through the initial alignment step, we have essentially
reduced virtual forces between corresponding points; the
spring equation accounts for this force reduction by inverse
transforming the matched points x_{2,i} into the finite
element model's local coordinate frame. The modal amplitudes
~
U are then solved for via a matrix multiply (Eq. 40) or by
solving the dynamic system (Eq. 12).
D. Comparing Objects
Once the mode amplitudes have been recovered, we can
compute the strain energy incurred by these deformations
by plugging into Equation 41. This strain energy can then
be used as a similarity metric. As will be seen in the next
section, we may also want to compare the strain in a subset
of modes deemed important in measuring similarity, or the
strain for each mode separately. The strain associated with
the i-th mode is simply the corresponding term of (41), \omega_i^2 \tilde{u}_i^2.
Since each mode's strain energy is scaled by its frequency
of vibration, there is an inherent penalty for deformations
that occur in the higher-frequency modes. In our experi-
ments, we have used strain energy for most of our object
comparisons, since it has a convenient physical meaning;
however, we suspect that (in general) it will be necessary
to weigh higher-frequency modes less heavily, since these
modes typically only describe high-frequency shape variations
and are more susceptible to noise.
Instead of looking at the strain energy needed to align
the two shapes, it may be desirable to directly compare
mode amplitudes needed to align a third, prototype object
with each of the two objects. In this case, we first compute
two modal descriptions, \tilde{U}_1 and \tilde{U}_2, and then use our
favorite distance metric for measuring the distance between
the two modal descriptions.
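The comparison step can be summarized by a short sketch like the following. It reuses
the strain-energy form reconstructed in Eq. 41 and treats the choice of distance metric
between modal descriptions as an open parameter; the names are illustrative.

```python
import numpy as np

def modal_similarity(u_tilde, omega_sq, skip_rigid=3, keep=None):
    """Strain-energy dissimilarity between two aligned shapes.

    u_tilde   : modal amplitudes that deform one shape onto the other.
    omega_sq  : squared modal frequencies.
    skip_rigid: number of leading rigid-body modes to ignore (3 in 2-D),
                making the measure translation/rotation independent.
    keep      : optional number of low-order nonrigid modes to retain,
                discarding noisy high-frequency modes.
    """
    u = np.asarray(u_tilde, dtype=float).copy()
    u[:skip_rigid] = 0.0
    if keep is not None:
        u[skip_rigid + keep:] = 0.0
    return float(np.sum(omega_sq * u**2))

def compare_via_prototype(u1, u2):
    """Distance between two objects described as deformations of a prototype."""
    return float(np.linalg.norm(np.asarray(u1) - np.asarray(u2)))
```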
VIII. Recognition Experiments
A. Alignment and Description
Figure
demonstrates how we can align a prototype
shape with other shapes, and how to use this computed
strain energy as a similarity metric. As input, we are
given the correspondences computed for the various airplane
silhouettes shown in Figure 8. Our task is to align
and describe the three different target airplanes (shown in
Fig. 10. Describing planes in terms of a prototype. The graphs (amplitude
versus mode number) show the 36 mode amplitudes used to align the prototype
with each target shape: (a) airplane 1 and aligned prototype, little strain
energy; (b) airplane 2 and aligned prototype, large strain energy in the
low-order modes; (c) airplane 3 and aligned prototype, large strain energy in
many modes. (a) shows that similar shapes can be aligned with little
deformation; (b) shows that viewpoint changes produce mostly low-frequency
deformations, and (c) shows that to align different shapes requires both low
and high frequency deformations.
gray) in terms of modal deformations of a prototype airplane
(shown in black). In each case, there were approximately
150 contour points used, and correspondences were
computed using the first 36 eigenmodes. On the order of
50 strongest corresponding features were used as input to
Equation 43. The modal strain energy was computed using
Equation 41.
The graphs in Figure 10 show the values for the 36 recovered
modal amplitudes needed to align or warp the prototype
airplane with each of the target airplanes. These
mode amplitudes are essentially a recipe for how to build
each of the three target airplanes in terms of deformations
from the prototype.
Figure
10(a) shows an airplane that is similar to the
prototype and is viewed from a viewpoint that results in a
similar image geometry. As a consequence the two planes
can be accurately aligned with little deformation, as indicated
by the graph of mode amplitudes required to warp
the prototype to the target shape.
Figure
10(b) depicts an airplane that is from the same
class of airplanes as the prototype, but viewed from a very
different angle. In this case, the graph of mode amplitudes
shows a sizable strain in the first few modes. This
makes sense, since generally the first six to nine deformation
modes account for affine-like deformations that are
similar to the deformations produced by changes in viewpoint.
The final example, Figure 10(c), is very different from
the prototype airplane, and is viewed from a different view-
point. In this case, the recovered mode deformations are
large in both the low and higher-frequency modes.
This figure illustrates how the distribution of strain energy
in the various modes can be used to judge the similarity
of different shapes, and to determine if differences are
likely due primarily to changes in viewpoint. Figure 10(a)
shows that similar shapes can be aligned with little defor-
mation; (b) shows that viewpoint changes produce mostly
low-frequency deformations, and (c) shows that to align different
shapes generally requires deformations of both low
and high frequency.
B. Determining Relationships Between Objects
By looking more closely at the mode strains, we can
pin-point which modes are predominant in describing an
object. Figure 11 shows what we mean by this. As before,
we can describe one object's silhouette features in terms of
deformations from a prototype. In this case, we want to
compare different hand tools. The prototype is a wrench,
and the two target objects are a bent wrench and hammer.
Silhouettes were extracted from the images, and thinned
down to approximately 80 points per contour. Using the
strongest matched contour points, we then recovered the
first 22 modal deformations that warp the prototype onto
the other tools. A rotation, translation, and scale invariant
alignment stage was employed as detailed in Section V.C.
The strain energy attributed to each modal deformation
is shown in the graph at the bottom of the figure. As can
be seen from the graph, the energy needed to align the prototype
with a similar object (the bent wrench) was mostly
isolated in two modes: modes 6 and 8. In contrast, the
strain energy needed to align the wrench with the hammer
is much greater and spread across the graph.
Figure
12 shows the result of aligning the prototype
with the two other tools using only the two most dominant
modes. The top row shows alignment with the bent
wrench using just the sixth mode (a shear), and then just
the eighth mode (a simple bend). Taken together, these
two modes do a very good job of describing the deformation
needed to align the two wrenches. In contrast, aligning
the wrench with the hammer (bottom row of Figure 12)
cannot be described simply in terms of a few deformations
of the wrench.
By observing that there is a simple physical deformation
that aligns the prototype wrench and the bent wrench, we
can conclude that they are probably closely related in category
and functionality. In contrast, the fact that there is
no simple physical relationship between the hammer and
the wrench indicates that they are likely to be different
types of object, and may have quite different functionality.
Fig. 11. Describing a bent wrench and a hammer in terms of modal deformations
from a prototype wrench. Silhouettes were extracted from the images, and then
the strongest corresponding contour points were found. Using these matched
contour points, the first 28 modal deformations that warp the prototype's
contour points onto the other tools were then recovered and the resulting
strain energy computed. A graph of the modal strain (strain energy versus
mode number, for the bent wrench and for the hammer) attributed to each modal
deformation is shown at the bottom of the figure.
Fig. 12. Using the two modes with largest strain energy to deform the
prototype wrench to two other tools (top row: mode 6, mode 8, modes 6 and 8;
bottom row: mode 11, mode 23, modes 11 and 23). The figure demonstrates how
the top two highest-strain modal deformations contribute to the alignment of
a prototype wrench to the bent wrench and to a hammer of Figure 11.
Fig. 13. Using modal strain energy to compare a prototype wrench with
different hand tools (strain energies, left to right and top to bottom:
prototype, 0.8, 2.1, 3.2, 3.9, 4.8, 5.1, 8.2, 23.1, 24.2, 28.1, 28.8). As in
Figure 11, silhouettes were first extracted from each tool image, and then the
strongest corresponding contour points were found. Mode amplitudes for the
first 22 modes were recovered and used to warp the prototype onto the other
tools. The modal strain energy that results from deforming the prototype to
each tool is shown below each image in this figure. As can be seen, strain
energy provides a good measure of similarity.
C. Recognition of Objects and Categories
In the next example (Figures 13 and 14) we will use
modal strain energy to compare three different prototype
tools: a wrench, hammer, and crescent wrench. As before,
silhouettes were first extracted and thinned from each tool
image, and then the strongest corresponding contour points
were found. Mode amplitudes for the first 22 modes were
recovered and used to warp each prototype onto the other
tools. The modal strain energy that results from deforming
the prototype to each tool is shown below each image.
Total CPU time per trial (match, align, and compare) averaged
11 seconds on an HP 735 workstation.
Figure
13 depicts the use of modal strain energy in comparing
a prototype wrench with thirteen other hand tools.
As this figure shows, the shapes most similar to the wrench
prototype are those other two-ended wrenches with approximately
straight handles. Next most similar are closed-ended
and bent wrenches, and most dissimilar are hammers
and single-ended wrenches. Note that the matching
is orientation and scale invariant (modulo limits imposed
by pixel resolution).
Figure
14 continues this example using as prototypes
the hammer and a single-ended wrench. Again, the modal
strain energy that results from deforming the prototype to
each tool is shown below each image.
When the hammer prototype is used, the most similar
shapes found are three other images of the same hammer,
taken with different viewpoints and illumination. The next
most similar shapes are a variety of other hammers. The
least similar shapes are a set of wrenches.
For the single-ended wrench prototype, the most similar
shapes are a series of single-ended wrenches. The next most
similar is a straight-handled double-ended wrench, and the
Fig. 14. Using modal strain energy to compare a crescent wrench with
different hand tools (strain energies: prototype, 0.58, 1.0, 1.4, 1.6, 1.9,
2.1, 2.4, 13.0, 15.3, 60.8, 98.7), and a prototype hammer with different hand
tools (strain energies: prototype, 1.3, 1.4, 1.9, 3.5, 5.1, 5.7, 18.2). Strain
energies were computed as in Figure 13. The modal strain energy that results
from deforming the prototype to each tool is shown below each image.
least similar are a series of hammers and a bent, double-ended
wrench.
The fact that the similarity measure produced by the
system corresponds to functionally-similar shapes is im-
portant. It allows us to recognize the most similar wrench
or hammer from among a group of tools, even if there is no
tool that is an exact match. Moreover, if for some reason
the most-similar tool can't be used, we can then find the
next-most-similar tool, and the next, and so on. We can
find (in order of similarity) all the tools that are likely to
be from the same category.
IX. Conclusion
The advantages afforded by our method stem from the
use of the finite element technique of Galerkin surface approximation
to avoid sampling problems and to incorporate
outside information such as feature connectivity and dis-
tinctiveness. This formulation has allowed us to develop
an information-preserving shape matrix that models the
distribution of "virtual mass" within the data. This shape
matrix is closely related to the proximity matrix formulation
[30;32;33] and preserves its desirable properties, e.g.,
rotation invariance. In addition, the combination of finite
element techniques and a mass matrix formulation have allowed
us to avoid setting initial parameters, and to handle
much larger deformations.
Moreover, it is important to emphasize that the transformation
to modal space not only allows for automatically
establishing correspondence between clouds of feature
points; the same modes (and the underlying FEM model)
can then be used to describe the deformations that take
the features from one position to the other. The amount
of deformation required to align the two feature clouds can
be used for shape comparison and description, and to warp
the original images for alignment and sensor fusion. The
power of this method lies primarily in its ability to unify
the correspondence and comparison tasks within one representation.
Finally, we note that the descriptions computed are
canonical, and vary smoothly even for very large defor-
mations. This allows them to be used directly for object
recognition as illustrated by the airplane and hand-tool
examples in the previous section. Because the deformation
comparisons are physically-based, we can determine
whether or not two shapes are related by a simple physical
deformation. This has allowed us to identify shapes that
appear to be members of the same category.
Acknowledgment
Thanks are given to Joe Born, Irfan Essa, and John Martin
for their help and encouragement, and to Ronen Basri
for providing the edge images of Saabs and Volkswagens.
--R
Computer Vision
Finite Element Procedures in Engineering Analysis.
Learning flexible models from image sequences.
Feature based image metamor- phosis
A framework for spatiotemporal control in the tracking of visual contours.
Principal warps: Thin-plate splines and the decomposition of deformations
Tracking points on deformable objects.
Trainable method of parametric shape description.
Measurement of non-rigid motion using contour shape descriptors
Geometrical aspects of interpreting images as a three-dimensional scene
Snakes: Active contour models.
Application of the Karhunen-Loeve procedure for the charachterization of human faces
Learning and Recognition of 3D Objects from Appearance.
Perceptual organization and representation of natural form.
Automatic extraction of deformable part models.
Computational Complexity Versus Virtual Worlds.
Recovery of non-rigid motion and structure
A theory of networks for approximation and learning.
Radial basis functions for multivariate
Codon constraints on closed 2D shapes.
Natural shape detection based on principle components analysis.
A modal framework for correspondence and recognition.
Modal Matching for Correspondence and Recognition.
Object recognition and categorization using modal matching.
An algorithm for associating the features of two images.
A physically-based approach to 2-D shape blending
Towards a vision-based motion framework
Parametrically deformable contour models.
The Computation of Visible Surface Representations.
Dynamic 3-D models with local and global deformations: Deformable su- perquadrics
Constraints on deformable models: Recovering 3D shape and non-rigid motion
Machine learning and human interface for the cmu navlab.
Eigenfaces for recognition.
Recognition by linear combinations of models.
An eigendecomposition approach to weighted graph matching problems.
An algorithm for pattern description on the level of relative proximity.
A new closed-form solution of absolute orientation
Digital Image Warping.
Jane's World Aircraft Recognition Handbook
--TR
--CTR
Jinhai Cai , Zhi-Qiang Liu, Hidden Markov Models with Spectral Features for 2D Shape Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.12, p.1454-1458, December 2001
Terry Caelli , Serhiy Kosinov, An Eigenspace Projection Clustering Method for Inexact Graph Matching, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.4, p.515-519, April 2004
Nicolae Duta , Anil K. Jain , Marie-Pierre Dubuisson-Jolly, Automatic Construction of 2D Shape Models, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.5, p.433-446, May 2001
Faisal Bashir , Ashfaq Khokhar , Dan Schonfeld, A hybrid system for affine-invariant trajectory retrieval, Proceedings of the 6th ACM SIGMM international workshop on Multimedia information retrieval, October 15-16, 2004, New York, NY, USA
Gun Park , Kyoung Mu Lee , Sang Uk Lee , Jin Hak Lee, Recognition of partially occluded objects using probabilistic ARG (attributed relational graph)-based matching, Computer Vision and Image Understanding, v.90 n.3, p.217-241, June
Raquel Ramos Pinho , Joo Manuel R. S. Tavares, Morphing of image represented objects using a physical methodology, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Ali Shokoufandeh , Diego Macrini , Sven Dickinson , Kaleem Siddiqi , Steven W. Zucker, Indexing Hierarchical Structures Using Graph Spectra, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1125-1140, July 2005
Vladan Papic , Hrvoje Dujmic, A curve matching algorithm for dynamic image sequences, Proceedings of the 5th WSEAS International Conference on Signal Processing, Computational Geometry & Artificial Vision, p.107-111, September 15-17, 2005, Malta
Marco Carcassoni , Edwin R. Hancock, Correspondence Matching with Modal Clusters, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.12, p.1609-1615, December
E. Sharon , D. Mumford, 2D-Shape Analysis Using Conformal Mapping, International Journal of Computer Vision, v.70 n.1, p.55-75, October 2006
Hong Fang Wang , Edwin R. Hancock, Correspondence matching using kernel principal components analysis and label consistency constraints, Pattern Recognition, v.39 n.6, p.1012-1025, June, 2006
Giancarlo Iannizzotto , Antonio Puliafito , Lorenzo Vita, Using the Median Distance to Compare Object Shapes in Content-BasedImage Retrieval, Multimedia Tools and Applications, v.8 n.2, p.197-217, March 1999
Shan Li , Moon-Chuen Lee , Donald Adjeroh, Effective invariant features for shape-based image retrieval: Research Articles, Journal of the American Society for Information Science and Technology, v.56 n.7, p.729-740, May 2005
Andrew Hill , Chris J. Taylor , Alan D. Brett, A Framework for Automatic Landmark Identification Using a New Method of Nonrigid Correspondence, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.3, p.241-251, March 2000
Davi Geiger , Tyng-Luh Liu , Robert V. Kohn, Representation and Self-Similarity of Shapes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.1, p.86-99, January
Boaz J. Super, Fast retrieval of isolated visual shapes, Computer Vision and Image Understanding, v.85 n.1, p.1-21, January 2002
Frederic Bangas , Marc Jaeger , Dominique Michelucci , M. Roelens, The ellipsoidal skeleton in medical applications, Proceedings of the sixth ACM symposium on Solid modeling and applications, p.30-38, May 2001, Ann Arbor, Michigan, United States
Haili Chui , Anand Rangarajan, A new point matching algorithm for non-rigid registration, Computer Vision and Image Understanding, v.89 n.2-3, p.114-141, February
Alberto S. Aguado , Eugenia Montiel , Ed Zaluska, Modeling generalized cylinders via Fourier morphing, ACM Transactions on Graphics (TOG), v.18 n.4, p.293-315, Oct. 1999
Yoram Gdalyahu , Daphna Weinshall, Flexible syntactic matching of curves and its application to automatic hierarchal classification of silhouettes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.12, p.1313-1328, December 1999
Thomas B. Sebastian , Benjamin B. Kimia, Curves vs. skeletons in object recognition, Signal Processing, v.85 n.2, p.247-263, February 2005
Elisabetta Delponte , Francesco Isgr , Francesca Odone , Alessandro Verri, SVD-matching using SIFT features, Graphical Models, v.68 n.5, p.415-431, September 2006
Varun Jain , Hao Zhang, A spectral approach to shape-based retrieval of articulated 3D models, Computer-Aided Design, v.39 n.5, p.398-407, May, 2007
Baback Moghaddam, Principal Manifolds and Probabilistic Subspaces for Visual Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.6, p.780-788, June 2002
Niklas Peinecke , Franz-Erich Wolter , Martin Reuter, Laplace spectra as fingerprints for image recognition, Computer-Aided Design, v.39 n.6, p.460-476, June, 2007
Ali Shokoufandeh , Lars Bretzner , Diego Macrini , M. Fatih Demirci , Clas Jnsson , Sven Dickinson, The representation and matching of categorical shape, Computer Vision and Image Understanding, v.103 n.2, p.139-154, August 2006
Yakov Keselman , Sven Dickinson, Generic Model Abstraction from Examples, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.7, p.1141-1156, July 2005
Stan Sclaroff , John Isidoro, Active blobs: region-based, deformable appearance models, Computer Vision and Image Understanding, v.89 n.2-3, p.197-225, February
S. Belongie , J. Malik , J. Puzicha, Shape Matching and Object Recognition Using Shape Contexts, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.4, p.509-522, April 2002
Kaleem Siddiqi , Ali Shokoufandeh , Sven J. Dickinson , Steven W. Zucker, Shock Graphs and Shape Matching, International Journal of Computer Vision, v.35 n.1, p.13-32, Nov. 1999
Robert Osada , Thomas Funkhouser , Bernard Chazelle , David Dobkin, Shape distributions, ACM Transactions on Graphics (TOG), v.21 n.4, p.807-832, October 2002
Pengcheng Shi , Albert J. Sinusas , R. Todd Constable , James S. Duncan, Volumetric Deformation Analysis Using Mechanics-Based Data Fusion: Applications in Cardiac Motion Recovery, International Journal of Computer Vision, v.35 n.1, p.87-107, Nov. 1999
Thomas Funkhouser , Patrick Min , Michael Kazhdan , Joyce Chen , Alex Halderman , David Dobkin , David Jacobs, A search engine for 3D models, ACM Transactions on Graphics (TOG), v.22 n.1, p.83-105, January | object recognition;vibration modes;finite element methods;correspondence;modal analysis;deformation;shape invariants;eigenmodes;shape description |
628688 | On Discontinuity-Adaptive Smoothness Priors in Computer Vision. | AbstractA variety of analytic and probabilistic models in connection to Markov random fields (MRFs) have been proposed in the last decade for solving low level vision problems involving discontinuities. This paper presents a systematic study of these models and defines a general discontinuity adaptive (DA) MRF model. By analyzing the Euler equation associated with the energy minimization, it shows that the fundamental difference between different models lies in the behavior of interaction between neighboring points, which is determined by the a priori smoothness constraint encoded into the energy function. An important necessary condition is derived for the interaction to be adaptive to discontinuities to avoid oversmoothing. This forms the basis on which a class of adaptive interaction functions (AIFs) is defined. The DA model is defined in terms of the Euler equation constrained by this class of AIFs. Its solution is C1 continuous and allows arbitrarily large but bounded slopes in dealing with discontinuities. Because of the continuous nature, it is stable to changes in parameters and data, a good property for regularizing ill-posed problems. Experimental results are shown. | Introduction
Smoothness is a generic assumption underlying a
wide range of physical phenomena. It characterizes the
coherence and homogeneity of matter within a scope of
space (or an interval of time). It is one of the most common
assumptions in computer vision models, in particular,
those formulated in terms of Markov random fields (MRFs)
[1], [2], [3] and regularization [4]. Its applications are seen
widely in image restoration, surface reconstruction, optical
flow and motion, shape from X, texture, edge detection,
region segmentation, visual integration and so on.
The assumption of the uniform smoothness implies the
smoothness everywhere. However, improper imposition of
it can lead to undesirable, oversmoothed, solutions. This
occurs when the uniform smoothness is violated, for ex-
ample, at discontinuities where abrupt changes occur. It
is necessary to take care of discontinuities when using
smoothness priors. Therefore, how to apply the smoothness
constraint while preserving discontinuities has been one
of the most active research areas in low level vision (see, e.g.
[5], [6], [1], [7], [3], [8], [9], [10], [11], [12], [13], [14], [15],
[16], [17], [18]).
This paper presents a systematic study on smoothness
priors involving discontinuities. The results are based on
an analysis of the Euler equation associated with the energy
minimization in MRF and regularization models. Through
S. Z. Li is currently with the School of Electrical and Electronic
Engineering, Nanyang Technological University, Singapore 2263. E-
mail: [email protected] .
the analysis, it is identified that the fundamental difference
among different models for dealing with discontinuities lies
in their ways of controlling the interaction between neighboring
points. Thereby, an important necessary condition
is derived for any regularizers or MRF prior potential functions
to be able to deal with discontinuities.
Based on these findings, a so-called discontinuity adaptive
(DA) smoothness model is defined in terms of the Euler
equation constrained by a class of adaptive interaction
functions (AIFs). The DA solution is C 1 continuous, allowing
arbitrarily large but bounded slopes. Because of
the continuous nature, it is stable to changes in parameters
and data. This is a good property for regularizing
ill-posed problems. The results provide principles for the
selection of a priori clique potential functions in stochastic
MRF models and regularizers in deterministic regularization
models. It is also shown that the DA model includes
as special instances most of the existing models, such as the
line process (LP) model [1], [3], weak string and membrane
[10], approximations of the LP model [19], [20], minimal description
length [13], biased anisotropic diffusion [16], and
mean field theory approximation [17].
The study of discontinuities is most sensibly carried out
in terms of analytical properties, such as derivatives. For
this reason, analytical regularization, a special class of
MRF models, is used as the platform for it. If we consider
that regularization contains three parts [21]: the data, the
class of solution functions and the regularizer, the present
work addresses mainly the regularizer part. In Section II,
regularization models are reviewed in connection to dis-
continuities. In Section III, a necessary condition for the
discontinuity adaptivity is made explicit; based on this,
the DA model is defined and compared with other models.
In Section IV, an algorithm for finding the DA solution is
presented, some related issues are discussed. Experimental
results are shown in Section V. Finally, conclusions are
drawn.
II. Smoothness, Regularization and
Discontinuities
In MRF vision modeling, the smoothness assumption
can be encoded into an energy via one of the two routes:
analytic and probabilistic. In the analytic route, the encoding
is done in the regularization framework [4], [22].
From the regularization viewpoint, a problem is said to be
"ill-posed" if it fails to satisfy one or more of the following
criteria: the solution exists, is unique and depends continuously
on the data. Additional, a priori, assumptions have
to be imposed on the solution to convert an ill-posed problem
into a well-posed one. An important assumption of
this kind is the smoothness assumption [23]. It is incorporated
into the energy function whereby the cost of the solution
is defined.
From the probabilistic viewpoint, a regularized solution
corresponds to the maximum a posteriori (MAP) estimate
of an MRF [1], [24]. Here, the prior constraints are encoded
into the a priori MRF probability distribution. The MAP
solution is obtained by maximizing the posterior probability
or equivalently minimizing the corresponding energy.
The MRF model is more general than the regularization
model in (1) that it can encode prior constraints other than
the smoothness and (2) that it allows arbitrary neighborhood
systems other than the nearest ones. However, the
analytic regularization model provides a convenient platform
for the study of smoothness priors because of close
relationships between the smoothness and the analytical
continuity.
A. Regularization and Discontinuities
Consider the problem of restoring a signal f from the
data d(x) = f(x) + e(x), where e(x) denotes the noise. The regularization
formulation defines the solution f to be the global
minimum of an energy function E(f) = U(f | d).
The energy is the sum of two terms,
U(f | d) = U(f) + U(d | f)     (1)
The closeness term, U(d | f), measures the cost caused by
the discrepancy between the solution f and the data d,
U(d | f) = \int_a^b \lambda(x) [f(x) - d(x)]^2 dx     (2)
where \lambda(x) is a weighting function and a and b are the
bounds of the integral. The smoothness term, U(f ), measures
the cost caused by the irregularities of the solution f ,
the irregularities being measured by the derivative magnitudes
jf (n) (x)j. With identical independent additive Gaussian
noise, U(f j d), U(d j f) and U(f) correspond to the
energies in the posterior, the likelihood the prior Gibbs
distributions of an MRF, respectively [24].
The smoothness term U(f ), also called a regularizer, is
the object of study in this work. It penalizes the irregularities
according to the a priori smoothness constraint
encoded in it. It is generally defined as
U(f) = \sum_{n=1}^{N} \lambda_n U_n(f) = \sum_{n=1}^{N} \lambda_n \int_a^b g(f^{(n)}(x)) dx     (3)
where U_n(f) is the n-th order regularizer, N is the highest
order to be considered and \lambda_n \geq 0 is a weighting factor.
A potential function g(f (n) (x)) is the penalty against the
irregularity in f (n\Gamma1) (x) and corresponds to prior clique
potentials in MRF models. Regularizers differ in the definition
of Un (f ), more specifically in the selection of g.
A.1 Standard Regularization
In the standard regularization [23], [4], the potential
function takes the pure quadratic form
g_q(f^{(n)}(x)) = [f^{(n)}(x)]^2     (4)
With g_q, the more irregular f^{(n-1)}(x) is at x, the larger
|f^{(n)}|, and consequently the larger the potential g(f^{(n)}) contributed
to U_n(f). The standard quadratic regularizer can
have a more general form
U_n(f; w_n) = \int_a^b w_n(x) [f^{(n)}(x)]^2 dx     (5)
where wn (x) are the pre-specified non-negative continuous
functions [23]. It may also be generalized to multi-dimensional
cases and to include cross derivative terms.
The quadratic regularizer imposes the smoothness constraint
everywhere. It determines the constant interaction
between neighboring points and leads to smoothing strength
proportional to jf (n) j, as will be shown in the next section.
The homogeneous or isotropic application of the smoothness
constraint inevitably leads to oversmoothing at discontinuities
at which the derivative is infinite.
If the function wn (x) can be pre-specified in such a way
that wn at x where f (n) (x) is infinite, then the
oversmoothing can be avoided. In this way, wn (x) act as
continuity-controllers [6]. It is further suggested that wn (x)
may be discontinuous and not pre-specified [9]. For exam-
ple, by regarding wn (x) as unknown functions, one could
solve these unknowns using variational methods. But how
well wn (x) can thus be derived remains unclear. The introduction
of line processes [1], [3] or weak continuity constraints
[10] provides a solution to this problem.
A.2 Line Process Model and Its Approximations
The LP model assumes piecewise smoothness whereby
the smoothness constraint is switched off at points where
the magnitude of the signal derivative exceeds certain
threshold. It is defined on a lattice rather than on a continuous
domain. Quantize the continuous interval [a, b] into m
uniformly spaced points x_1, ..., x_m so that f_i = f(x_i),
i = 1, ..., m. Introduce a set of binary line
process variables l_i \in {0, 1} into the smoothness term. If
the interaction between neighboring points also takes on a value in {0, 1}, then l_i
is related to it by l_i = 1 - h_i, where h_i denotes that interaction. The on-state, l_i = 1, of the line
process variable indicates that a discontinuity is detected
between neighboring points x_{i-1} and x_i; the off-state, l_i = 0,
indicates that the signal between the two points is continuous.
Each turn-on of a line process variable is penalized
by a quantity \lambda\alpha. These give the LP regularizer
U(f, l) = \lambda \sum_i [f_i - f_{i-1}]^2 (1 - l_i) + \lambda\alpha \sum_i l_i     (6)
The energy for the LP model is
U(f, l | d) = \sum_i [f_i - d_i]^2 + \lambda \sum_i [f_i - f_{i-1}]^2 (1 - l_i) + \lambda\alpha \sum_i l_i     (7)
This is the weak string model and its extension to 2D the
weak membrane model [10]. Eq.(7) corresponds to the energy
in the posterior distribution of the surface field f and
the line process field l, the distribution being of the Gibbs
form P(f, l | d) = \frac{1}{Z} e^{-U(f, l | d)}     (8)
where Z is a normalizing constant called the partition function
The line process variables are determined as follows: If
[f_i - f_{i-1}]^2 < \alpha, then it is cheaper to pay the price [f_i - f_{i-1}]^2
and keep l_i = 0; otherwise it is more economical to
turn on the variable l_i = 1 to insert a discontinuity with the
cost of \alpha. This is an interpretation of the LP model based
on the concept of the weak continuity constraint introduced
by Blake [5] for edge labeling. An earlier idea of weak
constraints can be found in Hinton's thesis work [25]. In
the LP model, the interaction is piecewise constant (1 or 0)
and the smoothing strength at i is either proportional to
|f_i - f_{i-1}| or zero; see the next section. The concept of
discontinuities can be extended to model-based recognition
of overlapping objects [26], [27]. There, the relational bond
between any two features in the scene should be broken if
the features are ascribed to two different objects.
Finding f 2 IR m and l 2 f0; 1g m such that U(f; l j d)
is minimized is a mixture of real and combinatorial op-
timization. Algorithms for this can be classified into two cate-
gories: stochastic [1], [3] and deterministic [19], [20], [10],
[17]. Some annealing techniques are often combined into
them to obtain global solutions.
In stochastic approaches, f and l are updated according
to some probability distribution parameterized by a
temperature parameter. For example, Geman and Geman
propose to use simulated annealing with the Gibbs sampler
[1] to find the global MAP solution. Marroquin [3] minimizes
the energy by a stochastic update in l together with
a deterministic update in f using gradient descent.
Deterministic approaches often use some classical gradient
based methods. Before these can be applied, the
combinatorial minimization problem has to be converted
into one of real minimization. By eliminating the line pro-
cess, Blake and Zisserman [10] convert the previous minimization
problem into one which minimizes the following
function containing only real variables
E(f | d) = \sum_i [f_i - d_i]^2 + \sum_i g_\alpha(f_i - f_{i-1})     (9)
where the truncated quadratic potential function
g_\alpha(\eta) = \lambda \min(\eta^2, \alpha)     (10)
shall be referred to as the line process potential function.
Blake and Zisserman introduce a parameter p into g_\alpha(\eta) to
control the convexity of E, obtaining g_\alpha^{(p)}(\eta). The parameter
p varies from 1 to 0, which corresponds to the variation
from a convex approximation of the function to its original
form.
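As a concrete illustration, the sketch below evaluates the eliminated-line-process
(weak string) energy and recovers the implied line process by thresholding. It relies
on the reconstructed forms of Eqs. 9-10 given above, and the function names are
illustrative.

```python
import numpy as np

def weak_string_energy(f, d, lam, alpha):
    """Weak string energy with the line process eliminated (cf. Eqs. 9-10)."""
    eta = np.diff(f)                          # first differences f_i - f_{i-1}
    g = lam * np.minimum(eta**2, alpha)       # truncated quadratic potential
    return float(np.sum((f - d)**2) + np.sum(g))

def implied_line_process(f, alpha):
    """l_i = 1 wherever it is cheaper to insert a discontinuity."""
    return (np.diff(f)**2 >= alpha).astype(int)
```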
Koch et al. [19] and Yuille [20] perform the conversion
using the Hopfield approach [28]. Continuous variables - l i
in the range [0; 1] are introduced to replace the binary line
process variables l i in f0; 1g. Each - l i is related to an internal
variable by a sigmoid function
with - ? as the parameter whereby lim -!0
. The
energy with this treatment is
It is shown that at stationary points where
there are v hence the approximated
line process variables [20]
This gives the effective potential function as
As the temperature decreases toward zero, - l i approaches
Geiger and Girosi [17] approximate the line process using
mean field theory. They introduce a parameter \beta into (8),
giving an approximated posterior probability
Using the saddle point approximation method, they derive
mean field equations which yield the approximated line process
variables which are identical to (13). The solution is
found in the limit when \beta \to \infty.
Earlier work by the author proposes a continuous adaptive regularizer model.
There, the smoothness constraint is applied without the binary
switch-off of the LP model. Its effect is decreased as the
derivative magnitude becomes larger and is completely off
only at the true discontinuities where the derivative is infi-
nite. This is an earlier form of the DA model in this work.
B. Other Regularization Models
Grimson and Pavlidis [7] propose an approach in which
the degree of interaction between pixels across edges is adjusted
in order to detect discontinuities. Lee and Pavlidis
[11] investigate a class of smoothing splines which are piece-wise
polynomials. Errors of fit are measured after each
successive regularization and used to determine whether
discontinuities should be inserted. This process iterates
until convergence is reached. Besl et al. [29] propose a
smoothing window operator to prevent smoothing across
discontinuities based on robust statistics. Liu and Harris
[30] develop, based on a previous work [31], a computational
network in which surface reconstruction, discontinuity
detection and estimation of first and second derivatives
are performed cooperatively.
Mumford and Shah [8] define an energy on a continuous
domain,
E(f, \{a_i\}, k) = \sum_{i=0}^{k} \int_{a_i}^{a_{i+1}} \{ [f(x) - d(x)]^2 + \lambda [f'(x)]^2 \} dx + \alpha k     (16)
where - and ff are constants, k is the number of discontinuities
and the sequence a = a
indicates the locations of discontinuities. The minimal
solution (f^*, \{a_i\}^*, k^*) is found by minimizing over each
value of the integer k, every sequence fa k g, and every
function f(x) continuously differentiable on each interval
(a_i, a_{i+1}). The minimization over k is a hard problem.
Using the minimal description length principle, Leclerc
[13] presents the following function for restoration of piece-wise
constant image f from noisy data d,
E(f | d) = \sum_i [f_i - d_i]^2 + \lambda \sum_i [1 - \delta(f_i - f_{i-1})]     (17)
where \delta(\cdot) \in \{0, 1\} is the Kronecker delta function. To
minimize the function, he approximates the delta function
with the exponential function parameterized by \tau,
\delta(f_i - f_{i-1}) \approx e^{-(f_i - f_{i-1})^2 / \tau}     (18)
and approaches the solution by continuation in \tau toward 0.
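The continuation idea can be sketched as follows: the non-differentiable delta-counting
term is replaced by the exponential approximation and the solution is tracked while the
parameter is lowered. This is an illustrative gradient-descent sketch under the
reconstructed forms of Eqs. 17-18 above; the step size and schedule are assumptions.

```python
import numpy as np

def leclerc_restore(d, lam=1.0, taus=(4.0, 2.0, 1.0, 0.5, 0.25),
                    step=0.1, iters=200):
    """Piecewise-constant restoration by continuation in tau (cf. Eqs. 17-18)."""
    f = d.astype(float).copy()
    for tau in taus:                       # continuation: tau decreases toward 0
        for _ in range(iters):
            eta = np.diff(f)               # f_i - f_{i-1}
            # Gradient of 1 - exp(-eta^2/tau) with respect to eta
            g = (2.0 * eta / tau) * np.exp(-eta**2 / tau)
            grad = 2.0 * (f - d)
            grad[1:] += lam * g            # d/df_i of the (i-1, i) term
            grad[:-1] -= lam * g           # d/df_{i-1} of the same term
            f -= step * grad
    return f
```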
III. The Discontinuity Adaptive MRF Model
By analyzing the smoothing mechanism in terms of the
Euler equation, it will be clear that the major difference between
different models lies in their way of controlling the
interaction between neighboring points and adjusting the
smoothing strength. The DA model is defined based on
the principle that wherever a discontinuity occurs, the interaction
should diminish.
A. Defining the DA Model
We focus on the models which involve only the first order
derivative and consider the general string model
U(f | d) = \int_a^b u(f | d) dx     (20)
where u(f | d) = \lambda(x) [f(x) - d(x)]^2 + g(f'(x)) is the integrand.
Solutions f minimizing U(f j d) must satisfy the following
associated Euler-Lagrange differential equation or simply
the Euler equation [32]
\frac{\partial u}{\partial f} - \frac{d}{dx} \frac{\partial u}{\partial f'} = 0     (21)
with the boundary conditions f(a) = f_a and f(b) = f_b     (22)
where f_a and f_b are prescribed constants. In the following
discussion of solutions to the differential equation, the
following assumptions are made: Both \lambda(x) and d(x) are
continuous and f(x) is continuously differentiable 1 . Writing
the Euler equation out yields
2 \lambda(x) [f(x) - d(x)] - \frac{d}{dx} g'(f'(x)) = 0     (23)
A potential function g is usually chosen to be (a) even,
such that g(\eta) = g(-\eta), and (b) such that the derivative of g can be
expressed in the following form
g'(\eta) = 2 \eta h(\eta)     (24)
where h is called an interaction function. Obviously, h
thus defined is also even. With these assumptions, the
Euler equation can be expressed as
\lambda(x) [f(x) - d(x)] - \frac{d}{dx} [ f'(x) h(f'(x)) ] = 0     (25)
The magnitude |g'(f'(x))| = 2 |f'(x) h(f'(x))| relates to the strength
with which a regularizer performs smoothing; and h(f'(x))
determines the interaction between neighboring pixels.
An important necessary condition for any regularization
model to be adaptive to discontinuities is
\lim_{|\eta| \to \infty} |g'(\eta)| = \lim_{|\eta| \to \infty} 2 |\eta h(\eta)| = C     (26)
where C \in [0, \infty) is a constant. The above condition with
C = 0 prohibits smoothing at discontinuities where |f'(x)| \to \infty,
whereas C > 0 allows limited (bounded)
smoothing. In any case, however, the interaction h(\eta) must
be small for large |\eta| and approach 0 as |\eta| goes to \infty.
This is an important guideline for selecting g and h for the
purpose of the adaptation.
Definition 1. An adaptive interaction function (AIF) h_\gamma
parameterized by \gamma (> 0) is a function that satisfies:
(i) h_\gamma \in C^1; (ii) h_\gamma(\eta) = h_\gamma(-\eta); (iii) h_\gamma(\eta) > 0;
(iv) h_\gamma'(\eta) < 0 for \eta > 0; (v) \lim_{|\eta| \to \infty} |\eta h_\gamma(\eta)| = C \in [0, \infty).     (27)
The class of AIFs, denoted by IHI_\gamma, is defined as the collection
of all such h_\gamma.
The continuity requirement (i) guarantees the twice differentiability
of the integrand u(f j d) in (20) with respect
1 In this work, the continuity of \lambda(x) and d(x) and the differentiability of
f(x) are assumed for the variational problems defined on continuous
domains [32], [33]. However, they are not necessary for discrete problems
where [a; b] is quantized into discrete points. For example, in
the discrete case, \lambda(x_i) is allowed to take a value in {1, 0}, indicating
whether datum d(x i ) is available or not.
TABLE I
Four choices of AIFs, the corresponding APFs, and bands (columns: AIF, APF, Band).
to f 0 , a condition for the solution f to exist [32]. How-
ever, this can be relaxed to h fl 2 C 0 for discrete problems.
The evenness of (ii) is usually assumed for spatially unbiased
smoothing. The positive definiteness of (iii) keeps
the interaction positive such that the sign of \eta h_\gamma(\eta) is
not altered by h_\gamma(\eta). The monotony of (iv) leads to
decreasing interaction as the magnitude of the derivative
increases. The bounded asymptote property of (v) provides
the adaptive discontinuity control as stated earlier. Other
properties follow, for example h_\gamma attains its maximum at \eta = 0 and
\lim_{|\eta| \to \infty} h_\gamma(\eta) = 0, the last meaning zero interaction at dis-
continuities. The above definition characterizes the properties
AIFs should possess rather than instantiates some
particular functions. Therefore, the following definition of
the DA model is rather broad.
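To make the conditions in (27) concrete, the sketch below numerically probes them for
two candidate interaction functions of the kind discussed later in this section,
h(\eta) = exp(-\eta^2/\gamma) and h(\eta) = 1/(1 + \eta^2/\gamma). These particular forms
are given for illustration only and are not claimed to reproduce the exact entries of
Table I.

```python
import numpy as np

def check_aif(h, gamma, eta_max=20.0, n=20001):
    """Numerically probe the AIF conditions (27) on a finite range.

    Returns the evenness, positivity and monotony checks and an estimate of
    the asymptote C = lim |eta * h(eta)| taken at eta_max (should stay bounded).
    """
    eta = np.linspace(-eta_max, eta_max, n)
    v = h(eta, gamma)
    pos_side = v[eta > 0]
    return {
        "even": bool(np.allclose(v, v[::-1])),                 # (ii)
        "positive": bool(np.all(v > 0)),                       # (iii)
        "decreasing": bool(np.all(np.diff(pos_side) <= 0)),    # (iv), for eta > 0
        "C_estimate": float(abs(eta[-1] * v[-1])),             # (v)
    }

# Two illustrative candidates (not necessarily the exact entries of Table I):
h_exp = lambda eta, g: np.exp(-eta**2 / g)        # interaction vanishes at discontinuities
h_rat = lambda eta, g: 1.0 / (1.0 + eta**2 / g)   # decays more slowly, C = 0 as well

print(check_aif(h_exp, gamma=1.0))
print(check_aif(h_rat, gamma=1.0))
```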
Definition 2. The DA solution f is defined by the Euler
equation (25) constrained by an AIF h_\gamma \in IHI_\gamma satisfying (27).
The DA solution is C 1 continuous 2 . Therefore the DA
solution and its derivative never have discontinuities in
them. The DA overcomes oversmoothing by allowing its
solution to be steep, but C 1 continuous, at point x where
data d(x) is steep. With every possible data configuration
d, every f 2 C 1 is possible.
There are two reasons for defining the DA in terms of the
constrained Euler equation: First, it captures the essence of
the DA problem; the DA model should be defined based on
the h fl therein. Second, some satisfying h fl may not have
their corresponding g fl (and hence the energy) in closed-
form, where g fl is defined below.
Definition 3. The adaptive potential function (APF) corresponding
to an h_\gamma \in IHI_\gamma is defined by
g_\gamma(\eta) = \int_0^{\eta} 2 \xi h_\gamma(\xi) d\xi     (28)
The string model (20) with such a g_\gamma is called an adaptive
string. The following are some properties of g_\gamma. Ba-
sically, g_\gamma is one order higher than h_\gamma in continuity; it
2 See [33] for a comprehensive discussion about the continuity of
solutions of the class of problems to which the DA belongs.
Fig. 1. The qualitative shapes of the four DA functions, (1)-(4).
is even, g_\gamma(\eta) = g_\gamma(-\eta), and its derivative function is odd,
g_\gamma'(\eta) = -g_\gamma'(-\eta); however, it is not necessary for g_\gamma(\infty)
to be bounded. Furthermore, g_\gamma is strictly monotonically
increasing as |\eta| increases because g_\gamma'(\eta) = 2\eta h_\gamma(\eta) > 0 for \eta > 0, which
means larger |\eta|
leads to larger penalty g_\gamma(\eta). It conforms to the original
spirit of the standard quadratic regularizers determined by
g_q. The line process potential function g_\alpha does not have
such a property: Its penalty is fixed and does not increase
as |\eta| increases beyond \sqrt{\alpha}. The idea that all large values of
|\eta| are equally penalized is questionable [14].
In practice, it is not always necessary to know the explicit
definition of g fl . The most important factors are the
Euler equation and the constraining function h fl . Nonethe-
less, knowing g fl is helpful for analyzing the convexity of
E(f ).
For a given g_\gamma(\eta), there exists a region of \eta within which
the smoothing strength |g_\gamma'(\eta)| increases monotonically
as |\eta| increases and the function g_\gamma is convex:
B_\gamma = (b_L, b_H)     (29)
The region B_\gamma is referred to as the band. The lower and
upper bounds b_L, b_H correspond to the two extrema of
g_\gamma'(\eta), which can be obtained by solving g_\gamma''(\eta) = 0;
we have b_L = -b_H when g is even. When b_L = -\infty and b_H = +\infty,
g_\gamma(\eta) is strictly convex. For an h_\gamma defined
with C > 0, the bounds are b_L = -\infty and b_H = +\infty.
Table
I instantiates four possible choices of AIFs, the
corresponding APFs and the bands. Fig.1 shows their qualitative
shapes (a trivial constant may be added to g fl (j)).
Fig.2 gives a graphical comparison of the g(j)'s for the
quadratic, the LP model and the first three DA models
listed in Table I and their derivatives, g'(\eta).
The fourth AIF, h_{4\gamma}, allows bounded but
non-zero smoothing at discontinuities: \lim_{|\eta| \to \infty} |\eta h_{4\gamma}(\eta)| = C > 0.
It is interesting because g_{4\gamma}''(\eta) > 0 for
all \eta (except at most isolated points) and leads to strictly convex mini-
mization. In fact, a positive number C in (27) leads to a
convex AIF and hence energy function 3 . The convex subset
of models for discontinuity adaptivity and M estimation is
3 This is due to the following theorem: If g(\cdot) is convex on IR, then a
real-valued energy function of the form (20) is
convex w.r.t. f for all f \in C^1 and fixed \lambda, d \in C^1.
Fig. 2. Comparison of different potential functions g(\eta) (top) and
their first derivatives g'(\eta) (bottom), for the quadratic, the line-process,
and the DA models.
appealing because they have some inherent advantages over
nonconvex models, both in stability and computational efficiency.
Fig.2 also helps us visualize how the DA performs
smoothing. Like the quadratic g_q, the DA allows
the smoothing strength |g_\gamma'(\eta)| to increase monotonically
as |\eta| increases within the band B_\gamma. Outside the band, the
smoothing decreases as |\eta| increases and becomes zero as
|\eta| \to \infty. This differs from the quadratic regularizer which
allows boundless smoothing when |\eta| \to \infty. Also unlike the
LP model which shuts down smoothing abruptly just beyond
its band B_\alpha = (-\sqrt{\alpha}, \sqrt{\alpha}), the DA decreases smoothing
continuously towards zero.
B. Relations with Previous Models
The first three instantiated models behave in a similar
way to the quadratic prior model when \eta^2/\gamma is small, as
noticed by [15]. This can be understood by
looking at the power series expansion of g_\gamma(\eta),
g_\gamma(\eta) = c_1 \eta^2 + c_2 \eta^4/\gamma + \cdots, where the c_i
are constants with c_1 > 0 (the
expansion can also involve a trivial additive constant c_0).
Thus in this situation of sufficiently small \eta^2/\gamma, the adaptive
model inherits the convexity of the quadratic model.
The interaction function h also well explains the differences
between various regularizers. For the quadratic
regularizer, the interaction is constant everywhere, h(\eta) = 1,
and the smoothing strength is proportional to |\eta|. This
is why the quadratic regularizer leads to oversmoothing at
discontinuities where \eta is infinite. In the LP model, the
interaction is piecewise constant, h_\alpha(\eta) \in \{1, 0\}.
Obviously, it inhibits oversmoothing by switching off
smoothing when |\eta| exceeds \sqrt{\alpha} in a binary manner.
In the LP approximations using the Hopfield approach
[19], [20] and mean field theory [17], the line process variables
are approximated by (13). This approximation effectively
results in the following interaction function
As the temperature - decreases toward zero, the above
approaches h ff =2, that is,
Obviously, h_{\alpha,T}(\eta) with nonzero
T is a member of the AIF family, i.e. h_{\alpha,T}(\eta) \in IHI_\gamma, and
therefore the approximated LP models are instances of the
DA model.
It is interesting to note an observation made by Geiger
and Girosi: "sometimes a finite \beta solution may be more desirable
or robust" ([17], pages 406-407), where \beta = 1/T. They
further suggest that there is "an optimal (finite) temperature
(\beta)" for the solution. An algorithm is presented in
[34] for estimating an optimal \beta. The LP approximation
with finite \beta < +\infty or nonzero T > 0 is more an instance
of the DA than the LP model which they aimed to approx-
imate. It will be shown in Section III-D that the DA model
is indeed more stable than the LP model.
Anisotropic diffusion [35] is a scale-space method for
edge-preserving smoothing. Unlike fixed coefficients in the
traditional isotropic scale-space filtering [36], anisotropic
diffusion coefficients are spatially varying according to the
gradient information. A so-called biased anisotropic diffusion
[16] model is obtained if anisotropic diffusion is combined
with a closeness term. Two choices of APFs are used
in those anisotropic diffusion models: g 1fl and g 2fl .
Shulman and Herve [14] propose to use the following Huber robust error penalty function [37] as the adaptive potential: g_β(η) = η² for |η| ≤ β and g_β(η) = β² + 2β(|η| − β) for |η| > β, where β plays a role similar to α in g_α. The above is a convex function and has the first derivative g'_β(η) = 2η for |η| ≤ β and g'_β(η) = 2β sgn(η) for other η. Comparing g'_β(η) with (24), we find that the corresponding AIF is h_β(η) = min(1, β/|η|). This function allows bounded but nonzero smoothing at discontinuities. The same function has also been applied by [38] to curve fitting. A comparative study of the DA model and robust statistics can be found in [39], [27].
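To make the comparison concrete, the short Python sketch below evaluates the interaction h(η) and the induced smoothing strength |η h(η)| for two representative choices: the exponential APF g(η) = −γ e^{−η²/γ} (whose AIF is e^{−η²/γ}) and a Huber-type AIF min(1, β/|η|). The specific forms and constants are assumptions made for illustration; Table I itself is not reproduced here.

    import numpy as np

    def h_exponential(eta, gamma):
        # AIF of the exponential APF g(eta) = -gamma * exp(-eta**2 / gamma)
        return np.exp(-eta ** 2 / gamma)

    def h_huber(eta, beta):
        # Huber-type AIF: full interaction inside |eta| <= beta,
        # decaying like beta/|eta| outside the band
        return np.minimum(1.0, beta / np.maximum(np.abs(eta), 1e-12))

    eta = np.linspace(-10.0, 10.0, 401)
    for name, h in (("exponential", h_exponential(eta, gamma=4.0)),
                    ("huber", h_huber(eta, beta=2.0))):
        smoothing = np.abs(eta * h)   # proportional to |g'(eta)|
        print(name, "smoothing at eta = 10:", round(float(smoothing[-1]), 3))

The exponential interaction drives the smoothing strength back toward zero at large |η|, while the Huber-type interaction lets it level off at a bounded non-zero value, which is exactly the distinction drawn in the text.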
The approximation (18) of Leclerc's minimal length model [13] is in effect the same as the DA with APF 1. It may be one of the best cost functions for piecewise constant restoration; for more general piecewise continuous restoration, one needs to use (18) with a nonzero smoothing parameter, which is a DA instance. Regarding the continuity property of domains, Mumford and Shah's model [8] and Terzopoulos' continuity-controlled regularization model [9] can also be defined on continuous domains, as the DA model is.
C. Discrete Data and 2D Cases
When the data d is available only at a discrete set of points, d_i = d(x_i), i = 1, …, m, the Euler equation for the adaptive string becomes

Σ_{i=1}^{m} [f(x_i) − d_i] δ(x − x_i) − λ (d/dx)[f'(x) h(f'(x))] = 0

where δ(·) is the Dirac delta function. Integrating the above equation once yields

λ f'(x) h(f'(x)) = Σ_{i: x_i ≤ x} [f(x_i) − d_i] + c

where c is a constant. From the above, the solution f is determined, between consecutive data points, by f'(x) h(f'(x)) = const, with appropriate boundary conditions at x_1, …, x_m. Obviously, f in this case is piecewise C¹ continuous; more exactly, it is composed of consecutively joined line segments.
The adaptive string model can be extended to 2D and to higher-order equivalents. The 2D adaptive membrane satisfies

(f − d) − λ { ∂/∂x [f_x h(f_x)] + ∂/∂y [f_y h(f_y)] } = 0

corresponding to the Euler equation (25), where the data d = d(x, y) and the solution f = f(x, y) are images. Extending the DA to second order, one obtains the following for the adaptive rod in 1D

(f − d) + λ (d²/dx²)[f'' h(f'')] = 0

and for the adaptive plate in 2D

(f − d) + λ { ∂²/∂x² [f_xx h(f_xx)] + 2 ∂²/∂x∂y [f_xy h(f_xy)] + ∂²/∂y² [f_yy h(f_yy)] } = 0
D. Solution Stability
The DA solution depends continuously on its parameters and on the data, whereas the LP solution does not. An informal analysis follows. Consider an LP solution f_α obtained with a given α. The solution is a local equilibrium satisfying the corresponding Euler equation, in which h_α(f'_α(x)) takes one of the two values 1 or 0, depending on α and the configuration of f_α. When |f'_α(x)|² is close to α for some x, a small change Δα may flip h_α over from one state to the other. This is due to the binary non-linearity of h_α. The flip-over leads to a significantly different solution. This can be expressed as

lim_{Δα→0} [f_{α+Δα}(x) − f_α(x)] ≠ 0(x)

where 0(x) is a function which is constantly zero in the domain [a, b]. The variation δf_α with respect to Δα may not be zero for some f_α and α, which causes instability. However, the DA solution, denoted f_γ, is stable:

lim_{Δγ→0} [f_{γ+Δγ}(x) − f_γ(x)] = 0(x)

where f_γ denotes the DA solution. A conclusion on the stability due to changes in the parameter λ can be drawn similarly.
The same is also true with respect to the data. Given α and λ fixed, the solution depends on the data, f_α = f_α[d]. Assume a small variation δd in the data d. The solution f[d] must change accordingly to reach a new equilibrium f[d + δd] satisfying the Euler equation. However, there always exist possibilities that h_α(f'_α(x)) may flip over for some x when |f'_α(x)|² is near α, resulting in an abrupt change in the LP solution f_α. This can be represented by

lim_{δd→0} [f_α[d + δd](x) − f_α[d](x)] ≠ 0(x)

That is, the variation δf_α with respect to δd may not be zero for some f_α and d. However, the DA model is stable to such changes, i.e.

lim_{δd→0} [f_γ[d + δd](x) − f_γ[d](x)] = 0(x)

because of its continuous nature. From this analysis, it can be concluded that the DA better regularizes ill-posed problems than the LP.
IV. Computation of DA Solutions
A. Solving the Euler Equation
The Euler equation (25) can be treated as a boundary value problem. It can also be solved by minimizing the corresponding energy (19), because a minimum of the energy is (sufficiently) a solution of the equation (with h given, the explicit form of the energy need not be known in order to minimize it; see below). The energy minimization approach is chosen here.
Because both the closeness term and the adaptive potential are bounded below, so is the energy. This means that a minimal solution, and hence a solution to the Euler equation, exists. If h_γ is chosen such that the corresponding energy E(f) is non-convex with respect to f, the minimization is subject to local minima. See [10] for an analysis of convexity in string and membrane models. The following presents a discrete method for finding a local minimum.
Sample the integral interval [a, b] at m uniformly spaced points x_1 = a, …, x_m = b. For clarity and without loss of generality, let us assume that the bounds a and b are re-scaled in such a way that the point spacing is one unit⁴,
⁴ More advanced numerical methods using varying spacing, such as [40], may be advantageous in obtaining more accurate solutions.
i.e., x_{i+1} − x_i = 1. Approximate the first derivative f'_i by the first-order backward difference f'_i = f_i − f_{i−1}. Then the energy (19) is approximated by

E(f) = Σ_i (f_i − d_i)² + λ Σ_i g(f_i − f_{i−1})

Using the gradient-descent method, we obtain the following updating rule

f_i^(t+1) = f_i^(t) − μ ∂E/∂f_i = f_i^(t) − μ { 2(f_i^(t) − d_i) + 2λ Σ_{i'∈N_i} (f_i^(t) − f_{i'}^(t)) h_γ(f_i^(t) − f_{i'}^(t)) }        (49)

where N_i = {i−1, i+1} consists of the neighbors of i and μ is the step size. The solution f^(t) takes prescribed values at the boundary points i = 1 and i = m to meet the boundary condition (22), and these values may be estimated from the data d_i near the boundaries. With an initial f^(0), the solution is obtained in the limit f* = lim_{t→∞} f^(t).
Equation (49) helps us see more of how the DA works. The smoothing at i is due to the term 2λ Σ_{i'∈N_i} (f_i − f_{i'}) h_γ(f_i − f_{i'}), with N_i = {i−1, i+1}. The contributions to smoothing come from the two neighboring points. That from site i' is proportional to the product of the two factors, f_i − f_{i'} and h_γ(f_i − f_{i'}). This is why we relate |g'(η)| to the strength of smoothing performed by regularizers. On the other hand, h(η) acts as an adaptive weighting function that controls the smoothing due to the difference η = f_i − f_{i'}; therefore we regard it as the interaction. Two more remarks are in order. First, the contributions from the two sides, i' = i−1 and i' = i+1, are treated separately, or non-symmetrically. Second, the sum of the contributions to smoothing is zero if the three points are aligned, i.e. f_i − f_{i−1} = f_{i+1} − f_i, and h(η) is even, since in this situation the two contributions cancel.
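A minimal Python sketch of the 1D updating rule just discussed, assuming the exponential AIF h_γ(η) = e^{−η²/γ}; the names mu, lam and gamma stand for the step size μ, the regularization weight λ and the band parameter γ, and the toy signal is only for illustration.

    import numpy as np

    def da_string_step(f, d, lam, gamma, mu):
        # One gradient-descent update of E(f) = sum (f_i - d_i)^2
        #   + lam * sum g(f_i - f_{i-1}), with h(eta) = exp(-eta^2/gamma).
        grad = 2.0 * (f - d)
        diff = np.diff(f)                       # f_i - f_{i-1}
        w = diff * np.exp(-diff ** 2 / gamma)   # eta * h(eta) per neighbor pair
        grad[1:] += 2.0 * lam * w               # contribution from the left neighbor
        grad[:-1] -= 2.0 * lam * w              # contribution from the right neighbor
        f_new = f - mu * grad
        f_new[0], f_new[-1] = d[0], d[-1]       # crude boundary handling
        return f_new

    rng = np.random.default_rng(0)
    d = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * rng.standard_normal(100)
    f = d.copy()
    for _ in range(500):
        f = da_string_step(f, d, lam=2.0, gamma=0.1, mu=0.1)
    print("largest remaining jump:", float(np.abs(np.diff(f)).max()))

Because η² at the true step greatly exceeds γ, the interaction there is effectively switched off and the discontinuity survives the smoothing.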
The updating rule for the adaptive membrane on 2D can be derived in the same way. In the 2D case (40), there is an additional term in the y dimension. This leads to the following energy

E(f) = Σ_{i,j} (f_{i,j} − d_{i,j})² + λ Σ_{i,j} [ g(f_{i,j} − f_{i−1,j}) + g(f_{i,j} − f_{i,j−1}) ]

Inserting the gradient ∂E/∂f_{i,j} into the descent step leads to the following updating rule

f_{i,j}^(t+1) = f_{i,j}^(t) − μ { 2(f_{i,j}^(t) − d_{i,j}) + 2λ Σ_{(i',j')∈N_{i,j}} (f_{i,j}^(t) − f_{i',j'}^(t)) h_γ(f_{i,j}^(t) − f_{i',j'}^(t)) }        (51)

where N_{i,j} is the set of the four neighboring points⁵ of (i, j). The updating on the 2D grid can be performed on the white and black sites of a checkerboard alternately to accelerate convergence.
⁵ With the 4-neighborhood system, the model considers derivatives in the horizontal and vertical directions. With the 8-neighborhood system, the regularizer also includes the diagonal derivatives, weighted by 1/√2.
Fig. 3. A GNC algorithm for finding the DA solution: choose an initially convex γ^(0); repeat { update f using Eq. (49); set γ^(t+1) ← max(γ_target, κ γ^(t)) } until f converges.
There are three parameters in (49) which have to be determined for the DA model: the step size μ, the band parameter γ and the regularization parameter λ. The parameter μ is related to the convergence of the relaxation algorithm: there is an upper bound on μ for the system to be stable, and there also exists an optimal value for μ [10]. Optimal choices of λ for quadratic regularization may be made using cross-validation [41], [42]. In [43], a least squares method [44] is presented for estimating MRF clique potentials for an LP model. Automated selection of the λ and γ parameters for the DA model is an unsolved problem; they are currently chosen in an ad hoc way.
B. Non-Convex Minimization
The DA model with a non-convex APF leads to non-convex dynamic systems, and direct minimization using gradient descent only guarantees finding a local minimum. A GNC-like algorithm can be constructed for approximating the global solution.
Because a convex g guarantees a convex energy function E(f), it is useful to study the convexity of E by analyzing the convexity of g_γ. The expansion (30) illustrates that when η²/γ is sufficiently small, the DA model behaves in a similar way to the quadratic regularizer and hence is convex. An analysis in [10] shows that it is sufficient to guarantee convexity if γ is chosen large enough to satisfy γ ≥ c v, where c is some real number (see the analysis in [10] for determining c) and v is defined below. The exact value of c needed for convexity depends on g and λ; it can safely be assumed that a suitable finite c exists in any case.
Therefore, an initial γ^(0) can be chosen for which the energy is convex. This is equivalent to choosing a γ^(0) such that all the initial differences f_i^(0) − f_{i'}^(0) fall within the band B_{γ^(0)}, where B_γ is the band introduced earlier in this section. For APFs 1, 2 and 3, γ^(0) must be larger than 2v, 3v and v, respectively, where v = max_{i, i'∈N_i} (f_i^(0) − f_{i'}^(0))².
The graduation from an initially convex approximation of g_γ to its target form can be implemented by continuation in γ, from a large value down to the target value. A GNC-like algorithm using this heuristic is outlined in Fig. 3. Given d, λ and a target value γ_target for γ, the algorithm aims to construct a sequence {γ^(t)} with γ^(∞) = γ_target, and thus a sequence {f^(t)} that approaches the global minimum f* for which E(f*) = min_f E(f). In the algorithm, ε is a constant for judging convergence, and κ is a factor for decreasing γ, with 0 < κ < 1. The choice of κ controls the balance between the quality of the solution and the computational time. In principle, γ^(t) should vary continuously to keep good track of the global minimum. In discrete computation, we choose 0.9 ≤ κ ≤ 0.99. A more rapid decrease of γ^(t) (with smaller κ) is likely to lead the system to an unfavorable local minimum. Witkin et al. [45] present a more sophisticated scheme for decreasing γ, relating the step size to the change in energy through fixed constants. This seems reasonable.
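A sketch of the continuation loop of Fig. 3 in the same Python setting, reusing da_string_step from the previous listing; the convex starting value 3·max(Δf)² and the other constants are illustrative choices, not values prescribed by the paper.

    import numpy as np

    def da_gnc(d, lam, gamma_target, mu=0.1, kappa=0.95, eps=1e-4):
        # Graduated non-convexity: start with gamma large enough for a convex
        # energy, relax f, and shrink gamma geometrically toward its target.
        f = d.astype(float).copy()
        gamma = max(gamma_target, 3.0 * float(np.max(np.diff(f) ** 2)))
        while True:
            f_prev = f
            f = da_string_step(f, d, lam, gamma, mu)
            if gamma <= gamma_target and np.max(np.abs(f - f_prev)) < eps:
                return f
            gamma = max(gamma_target, kappa * gamma)

A κ close to 1 tracks the global minimum more faithfully at the cost of extra iterations, mirroring the trade-off described above.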
C. Analog Network
The computation can be performed using an analog network. Let f_i be the potential of neural cell i. Let C_i = 1/(2μ) be the membrane capacitance and R_i the membrane resistance. Let T_{i,i'} ∝ h_γ(f_i − f_{i'}) be the conductance, or synaptic efficacy, between neurons i and i'. If the exponential g_{1γ} is used, then T_{i,i'} ∝ e^{−(f_i − f_{i'})²/γ}. Let d_i be the external current input to i, with the current absent where the data is missing. Now (49) can be written as

C_i ∂f_i/∂t = −f_i/R_i + Σ_{i'∈N_i} T_{i,i'} (f_{i'} − f_i) + d_i        (55)

The above is the dynamic equation at neuron i of the network. The diagram of the network circuit is shown in Fig. 4. The synaptic current from i to i' is

I_{i,i'} = T_{i,i'} (f_{i'} − f_i)

If the exponential g_{1γ} is used, then I_{i,i'} ∝ (f_{i'} − f_i) e^{−(f_i − f_{i'})²/γ}. A plot of the current I_{i,i'} versus the potential difference f_{i'} − f_i was shown at the bottom of Fig. 2. The voltage-controlled nonlinear synaptic conductance T_{i,i'}, characterized by the h function defined in (27), realizes the adaptive continuity control; the corresponding nonlinear current I_{i,i'} realizes the adaptive smoothing. The current I_{i,i'} diminishes asymptotically to zero as the potential difference between neurons i and i' reaches far beyond the band B_γ.
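The network dynamics can be imitated numerically with forward-Euler integration; in the sketch below the conductances follow the exponential interaction, capacitance and resistance are set to unity, and missing data simply inject no current. These simplifications are assumptions made for illustration, not the circuit values of the paper.

    import numpy as np

    def analog_da_network(d, observed, lam=2.0, gamma=0.1, dt=0.05, steps=4000):
        # Euler integration of  df_i/dt = -(f_i - d_i)*observed_i
        #   + lam * sum_{i'} h(f_i - f_{i'}) * (f_{i'} - f_i)
        f = np.where(observed, d, 0.0).astype(float)
        for _ in range(steps):
            diff = np.diff(f)                              # f_{i+1} - f_i
            cur = lam * diff * np.exp(-diff ** 2 / gamma)  # synaptic currents
            dfdt = -(f - d) * observed
            dfdt[:-1] += cur                               # current from the right neighbor
            dfdt[1:] -= cur                                # current from the left neighbor
            f += dt * dfdt
        return f

    rng = np.random.default_rng(1)
    ideal = np.concatenate([np.zeros(40), np.ones(40)])
    observed = rng.random(80) > 0.5                        # 50% missing data
    d = ideal + 0.05 * rng.standard_normal(80)
    print(np.round(analog_da_network(d, observed)[35:45], 2))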
Fig. 7 shows the behavior of the analog network under component defects such as manufacturing inadequacy, quality changes, etc. The defects are simulated by adding ±25% evenly distributed random noise to R, C, and T in (55). The data d is shown in triangles with a 50% missing rate; the locations of the missing data are indicated by triangles at the bottom. The noise in the data is white Gaussian.
Fig. 4. Schematic diagram of the analog network circuit for the DA model.
The interaction function is chosen to be h_{2γ}. Solutions obtained with simulated component defects are shown in dashed lines, in comparison with those obtained without such noise, shown in thicker solid lines. The ideal signal is shown in thinner solid lines. As can be seen, there is only a little difference between the solutions obtained with and without such noise. This demonstrates not only the stability of the network circuit but also the error-tolerance property of the DA model.
V. Experiments
Two experimental results are presented in the following 6 .
The first is the reconstruction of a real image of size 256 × 256 (Fig. 5). Here, APF 1 (g_{1γ}) is used and the parameters are chosen empirically. The result shows that the reconstructed image is much cleaner, with discontinuities well preserved.
The second experiment is the detection of step and roof edges from a simulated noisy pyramid image of size 128 × 128 (Fig. 6). The detection process runs in three stages: 1) regularizing the input image and computing images of first derivatives in the two directions from the regularized image using finite differences; 2) regularizing the derivative images; and 3) detecting steps and roofs by thresholding the regularized derivative images. APF 2 (g_{2γ}) is used and the parameters are chosen empirically, with different values for the first and second stages of the regularization.
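A schematic version of the three-stage pipeline in Python; da_regularize stands for any 2D DA implementation (e.g. the checkerboard update above), and scoring roofs through second derivatives of the regularized gradients, as well as the thresholds themselves, are assumptions for illustration, since the text only states that the regularized derivative images are thresholded.

    import numpy as np

    def detect_step_and_roof_edges(image, da_regularize, step_thresh, roof_thresh):
        # Stage 1: regularize the input and compute first derivatives.
        smooth = da_regularize(image)
        fy, fx = np.gradient(smooth)
        # Stage 2: regularize the derivative images.
        fx_s, fy_s = da_regularize(fx), da_regularize(fy)
        # Stage 3: threshold the regularized derivatives (steps) and the
        # derivatives of the regularized derivatives (roofs).
        step_edges = np.hypot(fx_s, fy_s) > step_thresh
        _, fxx = np.gradient(fx_s)
        fyy, _ = np.gradient(fy_s)
        roof_edges = np.hypot(fxx, fyy) > roof_thresh
        return step_edges, roof_edges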
Edges in the horizontal and vertical directions are detected best, while those in the diagonal directions are not recovered as well. This is because only derivatives along the two axis directions are considered in the DA discussed so far; changes in the diagonal directions are largely ignored. Regularizers using the 8-neighborhood system (see the footnote for Eq. (51)) should help improve the detection of diagonal changes.
Results in Fig. 7 show the behavior of the analog DA network under component defects such as manufacturing inadequacy, quality changes, etc. The defects are simulated by adding ±25% evenly distributed random noise to R, C, and T in (55). The data d is shown in triangles with a 50% missing rate; the locations of the missing data are indicated by triangles at the bottom. The noise in the data is white Gaussian. Solutions obtained with simulated component defects are shown in dashed lines, in comparison with those obtained without such noise, shown in thicker solid lines. The ideal signal is shown in thinner solid lines. As can be seen, there is only a little difference between the solutions obtained with and without such noise. This demonstrates not only the stability of the network circuit but also the error-tolerance property of the DA model.
⁶ More results can be found in Chapter 3 of [18].
Fig. 5. 3D plots of an image (left) and its reconstruction using DA (right). The plots are drawn after sampling the images at every 4 pixels in both directions, down to a size of 64 × 64.
Fig. 6. Step and roof edges (right) detected from a pyramid image
(left). Step edges are shown in dots and roof edges in crosses.
VI. Conclusion
Through an analysis of the associated Euler equation,
a necessary condition is made explicit for MRF or regularization
models to be adaptive to discontinuities. On
this basis, the DA model is defined by the Euler equation
constrained by the class of adaptive interaction functions
(AIFs). The definition provides principles for choosing
deterministic regularizers and MRF clique potential functions. It also includes many existing models as special instances.
Fig. 7. Stability of the DA solution under disturbances in parameters.
The DA model has its solution in C¹ and adaptively overrides the smoothness assumption where that assumption is not valid, without the switching on and off of discontinuities used in the LP model. The DA solution never contains true discontinuities; the DA model "preserves" discontinuities by allowing the solution to have arbitrarily large but bounded slopes, whereas the LP model "preserves true discontinuities" by switching between small and unbounded (or large) slopes.
Owing to its continuous properties, the DA model possesses
some theoretical advantages over the LP model. Unlike
the LP model, it is stable to changes in parameters and
in the data. Therefore it is better than the LP model in
solving ill-posed problems. In addition, it is able to deal
with problems on a continuous domain. Furthermore, it is
better suited for analog VLSI implementation.
Acknowledgments
The author is grateful to Yihong Huang, Eric Sung, Wei
Yun Yau and Han Wang for their helpful comments.
--R
""
""
Probabilistic Solution of Inverse Problems
""
""
""
""
""
""
""
""
""
""
""
""
""
Towards 3D Vision from Range Images: An Optimisation framework and Parallel Distributed Networks
""
""
""
""
Solutions of Ill-posed Prob- lems
""
Relaxation and Its Role in Vision
""
Markov Random Field Modeling in Computer Vision
""
""
""
""
Methods of Mathematical Physics
""
""
""
Robust Statistics
""
""
""
""
""
""
""
""
--TR
--CTR
Jian-Feng Cai , Raymond H. Chan , Carmine Fiore, Minimization of a Detail-Preserving Regularization Functional for Impulse Noise Removal, Journal of Mathematical Imaging and Vision, v.29 n.1, p.79-91, September 2007
A. Tonazzini , L. Bedini, Monte Carlo Markov chain techniques for unsupervised MRF-based image denoising, Pattern Recognition Letters, v.24 n.1-3, p.55-64, January
Michele Ceccarelli, A Finite Markov Random Field approach to fast edge-preserving image recovery, Image and Vision Computing, v.25 n.6, p.792-804, June, 2007
Stan Z. Li , Han Wang , William Y. C. Soh, Robust Estimation of Rotation Angles from Image Sequences Usingthe Annealing M-Estimator, Journal of Mathematical Imaging and Vision, v.8 n.2, p.181-192, March, 1998
David W. Jacobs , Daphna Weinshall , Yoram Gdalyahu, Classification with Nonmetric Distances: Image Retrieval and Class Representation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.6, p.583-600, June 2000 | energy functions;regularization;markov random fields;computer vision;minimization;euler equation;discontinuities |
628710 | Algebraic Functions For Recognition. | AbstractIn the general case, a trilinear relationship between three perspective views is shown to exist. The trilinearity result is shown to be of much practical use in visual recognition by alignmentyielding a direct reprojection method that cuts through the computations of camera transformation, scene structure and epipolar geometry. Moreover, the direct method is linear and sets a new lower theoretical bound on the minimal number of points that are required for a linear solution for the task of reprojection. The proof of the central result may be of further interest as it demonstrates certain regularities across homographies of the plane and introduces new view invariants. Experiments on simulated and real image data were conducted, including a comparative analysis with epipolar intersection and the linear combination methods, with results indicating a greater degree of robustness in practice and a higher level of performance in reprojection tasks. | Introduction
We establish a general result about algebraic connections
across three perspective views of a 3D scene and demonstrate
its application to visual recognition via alignment.
We show that, in general, any three perspective views of a
scene satisfy a pair of trilinear functions of image coordi-
nates. In the limiting case, when all three views are ortho-
graphic, these functions become linear and reduce to the
form discovered by [38]. Using the trilinear result one can
manipulate views of an object (such as generate novel views
from two model views) without recovering scene structure
(metric or non-metric), camera transformation, or even the epipolar geometry. Moreover, the trilinear functions can be
recovered by linear methods with a minimal configuration
of seven points. The latter is shown to be new lower bound
on the minimal configuration that is required for a general
linear solution to the problem of re-projecting a 3D scene
onto an arbitrary novel view given corresponding points
across two reference views. Previous solutions rely on recovering
the epipolar geometry which, in turn, requires a
minimal configuration of eight points for a linear solution.
The central results in this paper are contained in Theorems 1, 2 and 3. The first theorem states that the variety of views ψ of a fixed 3D object obtained by an uncalibrated pin-hole camera satisfies a relation of the sort F(ψ, ψ₁, ψ₂) = 0, where ψ₁ and ψ₂ are two arbitrary views of the object and F has a special trilinear form. The coefficients of F can be recovered linearly without first establishing the epipolar geometry, the 3D structure of the object, or the camera motion.
A. Shashua is with the Artificial Intelligence Laboratory and the Center for Biological Computational Learning, Massachusetts Institute of Technology, Cambridge, MA 02139.
The auxiliary Lemmas required for the
proof of Theorem 1 may be of interest on their own as
they establish certain regularities across projective transformations
of the plane and introduce new view invariants
(Lemma 4).
Theorem 2 addresses the problem of recovering the co-efficients
of the trilinear functions in the most economical
way. It is shown that among all possible trilinear functions
across three views, there exists at most four linearly
independent such functions. As a consequence, the coefficients
of these functions can be recovered linearly from
seven corresponding points across three views.
Theorem 3 is an obvious corollary of Theorem 1 but contains
a significant practical aspect. It is shown that if the
views are obtained by parallel projection, then F
reduces to a special bilinear form - or, equivalently, that
any perspective view / can be obtained by a rational linear
function of two orthographic views. The reduction to a
bilinear form implies that simpler recognition schemes are
possible if the two reference views (model views) stored in
memory are orthographic.
These results may have several applications (discussed in
Section VI), but the one emphasized throughout this paper
is for the task of recognition of 3D objects via alignment.
The alignment approach for recognition ([37, 16], and references
therein) is based on the result that the equivalence
class of views of an object (ignoring self occlusions) undergoing
3D rigid, affine or projective transformations can be
captured by storing a 3D model of the object, or simply
by storing at least two arbitrary "model" views of the object
- assuming that the correspondence problem between
the model views can somehow be solved (cf. [27, 5, 33]).
During recognition a small number of corresponding points
between the novel input view and the model views of a
particular candidate object are sufficient to "re-project"
the model onto the novel viewing position. Recognition is
achieved if the re-projected image is successfully matched
against the input image. We refer to the problem of predicting
a novel view from a set of model views using a
limited number of corresponding points, as the problem of
re-projection.
The problem of re-projection can in principal be dealt
with via 3D reconstruction of shape and camera mo-
tion. This includes classical structure from motion methods
for recovering rigid camera motion parameters and
metric shape [36, 18, 35, 14, 15], and more recent methods
for recovering non-metric structure, i.e., assuming
the objects undergo 3D affine or projective transforma-
tions, or equivalently, that the cameras are uncalibrated
[17, 25, 39, 10, 13, 30]. The classic approaches for perspective
views are known to be unstable under errors in image
measurements, narrow field of view, and internal camera
calibration [3, 9, 12], and therefore, are unlikely to be of
practical use for purposes of re-projection. The non-metric
approaches, as a general concept, have not been fully tested
on real images, but the methods proposed so far rely on recovering
first the epipolar geometry - a process that is also
known to be unstable in the presence of noise.
It is also known that the epipolar geometry alone is sufficient
to achieve re-projection by means of intersecting
lines [24, 6, 8, 26, 23, 11] using at least eight corresponding
points across the three views. This, however,
is possible only if the centers of the three cameras are non-collinear
- which can lead to numerical instability unless
the centers are far from collinear - and any object point
on the tri-focal plane cannot be re-projected as well. Fur-
thermore, as with the non-metric reconstruction methods,
obtaining the epipolar geometry is at best a sensitive process
even when dozens of corresponding points are used
and with the state of the art methods (see Section V for
more details and comparative analysis with simulated and
real images).
For purposes of stability, therefore, it is worthwhile exploring
more direct tools for achieving re-projection. For
instance, instead of reconstruction of shape and invariants
we would like to establish a direct connection between
views expressed as a functions of image coordinates alone
- which we call "algebraic functions of views". Such a result
was established in the orthographic case by [38]. There
it was shown that any three orthographic views of an object
satisfy a linear function of the corresponding image coordinates
- this we will show here is simply a limiting case
of larger set of algebraic functions, that in general have a
trilinear form. With these functions one can manipulate
views of an object, such as create new views, without the
need to recover shape or camera geometry as an intermediate
step - all what is needed is to appropriately combine
the image coordinates of two reference views. Also, with
these functions, the epipolar geometries are intertwined,
leading not only to absence of singularities, and a lower
bound on the minimal configuration of points, but as we
shall see in the experimental section to more accurate performance
in the presence of errors in image measurements.
Part of this work (Theorem 1 only) was presented in concise
form in [31].
II. Notations
We consider object space to be the three-dimensional projective space P³, and image space to be the two-dimensional projective space P². Let Φ ⊂ P³ be a set of points standing for a 3D object, and let ψ_i denote arbitrary views, indexed by i, of Φ. Given two cameras with centers located at O, O' ∈ P³, respectively, the epipoles are defined to be at the intersection of the line OO' with the two image planes. Because the image plane is finite, we can assign, without loss of generality, the value 1 as the third homogeneous coordinate to every observed image point. That is, if (x, y) are the observed image coordinates of some point (with respect to some arbitrary origin, say the geometric center of the image), then p = (x, y, 1) denotes the homogeneous coordinates of the image point. Note that this convention ignores special views in which a point of Φ is at infinity in those views; these singular cases are not modeled here.
Since we will be working with at most three views at a time, we denote the relevant epipoles as follows: let v ∈ ψ₁ and v' ∈ ψ₂ be the corresponding epipoles between views ψ₁, ψ₂, and let v̄ ∈ ψ₁ and v'' ∈ ψ₃ be the corresponding epipoles between views ψ₁, ψ₃. Corresponding image points across the three views will be denoted by p = (x, y, 1), p' = (x', y', 1) and p'' = (x'', y'', 1). The term "image coordinates" will denote the non-homogeneous coordinate representation of P², e.g., (x, y), (x', y'), (x'', y'') for the three corresponding points.
Planes will be denoted by π_i, indexed by i, and simply by π if only one plane is discussed. All planes are assumed to be arbitrary and distinct from one another. The symbol ≅ denotes equality up to a scale, GL_n stands for the group of n × n matrices, and PGL_n is the group defined up to a scale.
III. The Trilinear Form
The central results of this paper are presented in the
following two theorems. The remaining of the section is
devoted to the proof of this result and its implications.
Theorem 1 (Trilinearity) Let ψ₁, ψ₂, ψ₃ be three arbitrary perspective views of some object, modeled by a set of points in 3D. The image coordinates (x, y) ∈ ψ₁, (x', y') ∈ ψ₂ and (x'', y'') ∈ ψ₃ of three corresponding points across the three views satisfy a pair of trilinear equations of the following form:

x''(α₁x + α₂y + α₃) + x''x'(α₄x + α₅y + α₆) + x'(α₇x + α₈y + α₉) + α₁₀x + α₁₁y + α₁₂ = 0,

and

y''(β₁x + β₂y + β₃) + y''x'(β₄x + β₅y + β₆) + x'(β₇x + β₈y + β₉) + β₁₀x + β₁₁y + β₁₂ = 0,

where the coefficients α_j, β_j, j = 1, …, 12, are fixed for all points, are uniquely defined up to an overall scale, and α_j = β_j for j = 1, …, 6.
The following auxiliary propositions are used as part of the
proof.
Lemma 1 Let A ∈ PGL₃ be the projective mapping (homography) ψ₁ → ψ₂ due to some plane π. Let A be scaled to satisfy p'₀ ≅ Ap₀ + v', where p₀ ∈ ψ₁ and p'₀ ∈ ψ₂ are corresponding points coming from an arbitrary point P₀ ∉ π. Then, for any corresponding pair p ∈ ψ₁ and p' ∈ ψ₂ coming from an arbitrary point P ∈ P³, we have p' ≅ Ap + kv'. The coefficient k is independent of ψ₂, i.e., it is invariant to the choice of the second view.
The lemma, its proof and its theoretical and practical implications are discussed in detail in [28, 32]. Note that the particular case where the homography A is affine and the epipole v' is on the line at infinity corresponds to the construction of affine structure from two orthographic views
[17]. In a nutshell, a representation R 0 of P 3 (tetrad of
coordinates) can always be chosen such that an arbitrary
plane - is the plane at infinity. Then, a general uncalibrated
camera motion generates representations R which
can be shown to be related to R 0 by an element of the
affine group. Thus, the scalar k is an affine invariant within
a projective framework, and is called a relative affine invariant
. A ratio of two such invariants, each corresponding
to a different reference plane, is a projective invariant [32].
For our purposes, there is no need to discuss the methods
for recovering k - all we need is to use the existence of
a relative affine invariant k associated with some arbitrary
reference plane - which, in turn, gives rise to a homography
A.
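In practice, once A and v' are fixed (with the scaling of Lemma 1), the invariant k for a corresponding pair can be obtained by solving the small linear system s p' = A p + k v'. The following Python sketch is a straightforward way to do this; it is an illustration, not a procedure taken from the paper.

    import numpy as np

    def relative_affine_k(A, v_prime, p, p_prime):
        # Solve  s * p' = A p + k * v'  for (s, k) in the least-squares sense;
        # p, p' and v' are homogeneous 3-vectors, A is a 3x3 homography.
        M = np.column_stack([p_prime, -v_prime])
        (s, k), *_ = np.linalg.lstsq(M, A @ p, rcond=None)
        return k

    # tiny synthetic check with an identity homography and a chosen epipole
    A = np.eye(3)
    v_prime = np.array([1.0, 0.0, 1.0])
    p = np.array([0.2, 0.1, 1.0])
    p_prime = A @ p + 0.7 * v_prime
    p_prime /= p_prime[2]                      # re-normalize; k is unaffected
    print(round(relative_affine_k(A, v_prime, p, p_prime), 6))   # ~0.7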
Definition Homographies A_i ∈ PGL₃ from ψ₁ → ψ_i, due to the same plane π, are said to be scale-compatible if they are scaled to satisfy Lemma 1, i.e., for any point P ∈ P³ projecting onto p ∈ ψ₁ and p_i ∈ ψ_i, there exists a scalar k that satisfies p_i ≅ A_i p + k v_i for every view ψ_i, where v_i ∈ ψ_i is the epipole with ψ₁ (scaled arbitrarily).
Lemma 2 Let A, A' ∈ PGL₃ be two homographies of ψ₁ → ψ₂ due to planes π₁, π₂, respectively. Then there exists a scalar s that satisfies the equation A − sA' = [αv', βv', γv'], for some coefficients α, β, γ (i.e., A − sA' is a matrix whose columns are multiples of v').
Proof: Let q ∈ ψ₁ be any point in the first view. There exists a scalar s_q that satisfies v' ≅ Aq − s_q A'q, because Aq, A'q and v' all lie on the epipolar line of q. Let H = A − s_q A'. We have Av ≅ v' and A'v ≅ v', as shown in [29], for any homography ψ₁ → ψ₂ due to any plane; therefore Hv ≅ v' as well. The mapping of two distinct points q, v onto the same point v' could happen only if H is the homography due to the meridian plane (coplanar with the projection center O); thus Hp ≅ v' for every p ∈ ψ₁, and s_q is a fixed scalar s. The latter, in turn, implies that H is a matrix whose columns are multiples of v'.
Lemma 3 (Auxiliary for Lemma
Let A; A 0 2 PGL 3 be homographies from / 1 7! / 2 due
to distinct planes - respectively, and
be homographies from / 1 7! / 3 due to -
where Cv - v.
Proof: Let
are homographies
from
are homographies from
A
we have
. Note that the only difference
between A 1 and B 1 is due to the different location of
the epipoles v; - v, which is compensated by C (Cv - v).
3 be the homography from / 1 to - 2 , and
the homography from - 2 to - 1 . Then with
proper scaling of E 1 and E 2 we have
and with proper scaling of C we have,
Lemma 4 (Auxiliary - Uniqueness) For scale-compatible homographies, the scalars s, α, β, γ of Lemma 2 are invariants indexed by π₁, π₂. That is, given an arbitrary third view ψ₃, let B, B' be the homographies from ψ₁ → ψ₃ due to π₁, π₂, respectively, let B be scale-compatible with A, and let B' be scale-compatible with A'. Then B − sB' = [αv'', βv'', γv''].
Proof: We show first that s is invariant, i.e., that
sB 0 is a matrix whose columns are multiples of v 00 . From
Lemma 2, and Lemma 3 there exists a matrix H, whose
columns are multiples of v 0 , a matrix T that satisfies A
AT , and a scalar s such that I \Gamma
multiplying both sides by BC, and then pre-multiplying
by C \Gamma1 we obtain
From Lemma 3, we have . The matrix
A \Gamma1 H has columns which are multiples of v (because
whose columns are multiple
of -
v, and BCA \Gamma1 H is a matrix whose columns are multiples
of v 00 . Pre-multiplying BCA \Gamma1 H by C \Gamma1 does not
change its form because every column of BCA
simply a linear combination of the columns of BCA \Gamma1 H.
As a result, is a matrix whose columns are multiples
of v 00 .
. Since the homographies
are scale compatible, we have from Lemma 1
the existence of invariants k; k 0 associated with an arbitrary
is due to - 1 , and k 0 is due to
Then from Lemma 2 we have
is arbitrary, this could happen
only if the coefficients of the multiples of v 0 in H and
the coefficients of the multiples of v 00 in -
H, coincide.
Proof of Theorem: Lemma 1 provides the existence part of the theorem, as follows. Since Lemma 1 holds for any plane, choose a plane π₁ and let A, B be the scale-compatible homographies ψ₁ → ψ₂ and ψ₁ → ψ₃ due to π₁. Then, for every point p ∈ ψ₁, with corresponding points p' ∈ ψ₂ and p'' ∈ ψ₃, there exists a scalar k that satisfies p' ≅ Ap + kv' and p'' ≅ Bp + kv''. We can isolate k from both relations and obtain:

k = (x' a₃·p − a₁·p) / (v'₁ − x' v'₃) = (y' a₃·p − a₂·p) / (v'₂ − y' v'₃)        (1)

k = (x'' b₃·p − b₁·p) / (v''₁ − x'' v''₃) = (y'' b₃·p − b₂·p) / (v''₂ − y'' v''₃)        (2)

where a₁, a₂, a₃ and b₁, b₂, b₃ are the row vectors of A and B, and v' = (v'₁, v'₂, v'₃), v'' = (v''₁, v''₂, v''₃). Because of the invariance of k we can equate terms of (1) with terms of (2) and obtain trilinear functions of image coordinates across the three views. For example, by equating the first terms of the two equations, we obtain:

x''(v''₃ a₁ − v'₁ b₃)·p + x''x'(v'₃ b₃ − v''₃ a₃)·p + x'(v''₁ a₃ − v'₃ b₁)·p + (v'₁ b₁ − v''₁ a₁)·p = 0        (3)

In a similar fashion, after equating the first term of (1) with the second term of (2), we obtain:

y''(v''₃ a₁ − v'₁ b₃)·p + y''x'(v'₃ b₃ − v''₃ a₃)·p + x'(v''₂ a₃ − v'₃ b₂)·p + (v'₁ b₂ − v''₂ a₁)·p = 0        (4)

Both equations are of the desired form, with the first six coefficients identical across both equations.
The question of uniqueness arises because Lemma 1 holds for any plane. If we choose a different plane, say π₂, with homographies A', B', then we must show that the new homographies give rise to the same coefficients (up to an overall scale). The parenthesized terms in (3) and (4) have the general form v''_j a_i − v'_i b_j, for some i and j. Thus, we need to show that there exists a scalar s that satisfies (v''_j a'_i − v'_i b'_j) = s (v''_j a_i − v'_i b_j), where a'_i, b'_j are the rows of A', B'. This, however, follows directly from Lemmas 2 and 4.
The direct implication of the theorem is that one can generate a novel view (ψ₃) by simply combining two model views (ψ₁, ψ₂). The coefficients α_j and β_j of the combination can be recovered together as the solution of a linear system of 17 equations given nine corresponding points across the three views (more than nine points can be used for a least-squares solution).
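A Python sketch of this procedure: each corresponding triplet contributes two rows to a linear system in the 18 coefficients (α₁…α₁₂ and β₇…β₁₂, with β_j = α_j for j ≤ 6), the system is solved up to scale by SVD, and the recovered coefficients are then used to predict (x'', y'') for further points. The layout follows the equations of Theorem 1 as stated above; it is a schematic implementation rather than the author's code.

    import numpy as np

    def trilinear_rows(p, p1, p2):
        # Rows for one triplet (x,y), (x',y'), (x'',y'') in the unknowns
        # c = (alpha_1..alpha_12, beta_7..beta_12).
        (x, y), (x1, _), (x2, y2) = p, p1, p2
        m = np.array([x, y, 1.0])
        r1 = np.concatenate([x2 * m, x2 * x1 * m, x1 * m, m, np.zeros(6)])
        r2 = np.concatenate([y2 * m, y2 * x1 * m, np.zeros(6), x1 * m, m])
        return r1, r2

    def solve_trilinear(pts, pts1, pts2):
        rows = []
        for triplet in zip(pts, pts1, pts2):
            rows.extend(trilinear_rows(*triplet))
        _, _, Vt = np.linalg.svd(np.asarray(rows))
        return Vt[-1]                      # coefficients, defined up to scale

    def reproject(c, p, p1):
        a, b = c[:12], c[12:]
        (x, y), (x1, _) = p, p1
        m = np.array([x, y, 1.0])
        denom = a[0:3] @ m + x1 * (a[3:6] @ m)
        x2 = -(x1 * (a[6:9] @ m) + a[9:12] @ m) / denom
        y2 = -(x1 * (b[0:3] @ m) + b[3:6] @ m) / denom
        return x2, y2

Nine or more triplets give enough rows for a (least-squares) solution; with only seven points, the additional forms of Theorem 2 supply the remaining equations.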
In the next theorem we obtain the lower bound on the
number of points required for solving for the coefficients of
the trilinear functions. The existence part of the proof of
Theorem 1 indicates that there exist nine trilinear functions
of that type, with coefficients having the general form
a i . Thus, we have at most 27 distinct coefficients
(up to a uniform scale), and thus, if more than two of
the nine trilinear functions are linearly independent, we
may solve for the coefficients using less than nine points.
The next theorem shows that at most four of the trilinear
functions are linearly independent and consequently seven
points are sufficient to solve for the coefficients.
Theorem 2 There exist nine distinct trilinear forms of
the type described in Theorem 1, of which at most four are
linearly independent. The coefficients of the four trilinear
forms can be recovered linearly with seven corresponding
points across the three views.
Proof: The existence of nine trilinear forms follows directly from (1) and (2); they arise from the different ways of equating the terms of (1) and (2), and the first two are (3) and (4). For a given triplet p, p', p'', the first four functions on the list produce a 4 × 27 matrix. The rank of the matrix is four because it contains four orthogonal columns, hence these functions are linearly independent. Since we have 27 coefficients, and each triplet p, p', p'' contributes four linear equations, seven corresponding points across the three views provide a sufficient number of equations for a linear solution for the coefficients (given that the system is determined up to a common scale, seven points produce two extra equations, which can be used for consistency checking or for obtaining a least-squares solution).
The remaining trilinear forms are linearly spanned by
the first four, as follows:
where the numbers in parenthesis represent the equation
numbers of the various trilinear functions.
Taken together, both theorems provide a constructive
means for solving for the positions x 00 ; y 00 in a novel view
given the correspondences across two model views.
This process of generating a novel view can be easily accomplished
without the need to explicitly recover structure,
camera transformation, or even just the epipolar geometry
- and requires fewer corresponding points than any other
known alternative.
The solution for x 00 ; y 00 is unique without constraints on
the allowed camera transformations. There are, however,
certain camera configurations that require a different set of
four trilinear functions from the one suggested in the proof
of Theorem 2. For example, the set of equations (5), (6),(9)
and (10) are also linearly independent. Thus, for example,
in case v'₃ and v''₃ vanish simultaneously, i.e., v'₃ = v''₃ = 0, then that set should be used instead. Similarly, equations (3), (4), (9) and (10) are linearly independent and should be used in case v'₁ and v''₁ vanish simultaneously. Similar situations arise with other vanishing epipole coordinates and can be dealt with by choosing the appropriate basis of four functions from the
six discussed above. Note that we have not addressed the
problem of singular configurations of seven points. For ex-
ample, its clear that if the seven points are coplanar, then
their correspondences across the three views could not possibly
yield a unique solution to the problem of recovering
the coefficients. The matter of singular surfaces has been
studied for the eight-point case necessary for recovering the
epipolar geometry [19, 14, 22]. The same matter concerning
the results presented in this paper is an open problem.
Moving away from the need to recover the epipolar geometry
carries distinct and significant advantages. To get a
better idea of these advantages, we consider briefly the process
of re-projection using epipolar geometry. The epipolar
intersection method can be described succinctly (see
[11]) as follows. Let F₁₃ and F₂₃ be the matrices ("fundamental" matrices in recent terminology [10]) that satisfy p''ᵀ F₁₃ p = 0 and p''ᵀ F₂₃ p' = 0. Then, by the incidence of p'' with its two epipolar lines, we have

p'' ≅ (F₁₃ p) × (F₂₃ p')        (12)
Therefore, eight corresponding points across the three
views are sufficient for a linear solution of the two fundamental
matrices, and then all other object points can be re-projected
onto the third view. Equation (12) is also a trilinear
form, but not of the type introduced in Theorem 1. The
differences include (i) epipolar intersection requires the correspondences
coming from eight points, rather than seven,
(ii) the position of p 00 is solved by a line intersection process
which is singular in the case the three camera centers are
collinear; in the trilinearity result the components of p 00
are solved separately and the situation of three collinear
cameras is admissible, (iii) the epipolar intersection process
is decomposable, i.e., only two views are used at a
time; whereas the epipolar geometries in the trilinearity
result are intertwined and are not recoverable separately.
The latter implies a better numerically behaved method in
the presence of noise as well, and as will be shown later, the
performance, even using the minimal number of required
points, far exceeds the performance of epipolar intersection
using many more points. In other words, by avoiding the
need to recover the epipolar geometry we obtain a significant
practical advantage as well, since the epipolar geometry
is the most error-sensitive component when working
with perspective views.
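For comparison, once the two fundamental matrices are available, the epipolar-intersection re-projection of Equation (12) is a single cross product; a minimal Python sketch (F13 and F23 are assumed to have been estimated beforehand, e.g. with the eight-point algorithm):

    import numpy as np

    def reproject_epipolar(F13, F23, p, p1):
        # p'' lies on both epipolar lines F13 p and F23 p', so it is their
        # intersection, i.e. their cross product in homogeneous coordinates.
        p2 = np.cross(F13 @ p, F23 @ p1)
        return p2[:2] / p2[2]     # breaks down when the two lines coincide

The failure case flagged in the comment is precisely the collinear-camera-centers degeneracy discussed above, which the trilinear method avoids.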
The connection between the general result of trilinear functions of views and the "linear combination of views" result [38] for orthographic views can easily be seen by setting A and B to be affine in P², and v'₃ = v''₃ = 0. For example, (3) reduces to

−v'₁ x'' + v''₁ x' + (v'₁ b₁ − v''₁ a₁)·p = 0

which is of the form

α₁ x'' + α₂ x' + α₃ x + α₄ y + α₅ = 0.

As in the perspective case, each point contributes four equations, but here there is no advantage in using all four of them to recover the coefficients; therefore we may use only two out of the four equations, and require four corresponding points to recover the coefficients. Thus, in the case where all three views are orthographic, x'' (respectively y'') is expressed as a linear combination of the image coordinates of the two other views, as discovered by [38].
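In this fully orthographic limit, re-projection reduces to two small least-squares fits; a Python sketch in which both x'' and y'' are predicted from (x, y, x', 1), one of several equivalent choices of basis:

    import numpy as np

    def fit_linear_combination(pts, pts1, pts2):
        # Needs >= 4 corresponding points; all three views assumed orthographic.
        M = np.array([[x, y, x1, 1.0] for (x, y), (x1, _) in zip(pts, pts1)])
        targets = np.asarray(pts2, dtype=float)       # columns: x'', y''
        coef, *_ = np.linalg.lstsq(M, targets, rcond=None)
        return coef                                   # 4 x 2 coefficient matrix

    def predict_linear_combination(coef, p, p1):
        (x, y), (x1, _) = p, p1
        return np.array([x, y, x1, 1.0]) @ coef       # predicted (x'', y'')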
IV. The Bilinear Form
Consider the case for which the two reference (model)
views of an object are taken orthographically (using a tele
lens would provide a reasonable approximation), but during
recognition any perspective view of the object is al-
lowed. It can easily be shown that the three views are then
connected via bilinear functions (instead of trilinear):
Theorem 3 (Bilinearity) Within the conditions of Theorem 1, in case the views ψ₁ and ψ₂ are obtained by parallel projection, the pair of trilinear forms of Theorem 1 reduces to the following pair of bilinear equations:

x''(α₁x + α₂y + α₃) + α₄ x''x' + α₅x' + α₆x + α₇y + α₈ = 0,

and

y''(β₁x + β₂y + β₃) + β₄ y''x' + β₅x' + β₆x + β₇y + β₈ = 0,

where α_j = β_j for j = 1, …, 4.
Proof: Under these conditions we have from Lemma 1 that A is affine in P² and v'₃ = 0, and (3) reduces to

x''(v''₃ a₁ − v'₁ b₃)·p − v''₃ x''x' + v''₁ x' + (v'₁ b₁ − v''₁ a₁)·p = 0.

Similarly, (4) reduces to

y''(v''₃ a₁ − v'₁ b₃)·p − v''₃ y''x' + v''₂ x' + (v'₁ b₂ − v''₂ a₁)·p = 0.

Both equations are of the desired form, with the first four coefficients identical across both equations.
The remaining trilinear forms undergo a similar reduction, and Theorem 2 still holds, i.e., we still have four linearly independent bilinear forms. Consequently, we have 21 coefficients up to a common scale (instead of 27) and four equations per point; thus five corresponding points (instead of seven) are sufficient for a linear solution.
A bilinear function of three views has two advantages
over the general trilinear function. First, as mentioned
above, only five corresponding points (instead of seven)
across three views are required for solving for the coeffi-
cients. Second, the lower the degree of the algebraic func-
tion, the less sensitive the solution may be in the presence
of errors in measuring correspondences. In other words,
it is likely (though not necessary) that the higher order
terms, such as the term x 00 x 0 x in Equation 3, will have a
higher contribution to the overall error sensitivity of the
system.
Compared to the case when all views are assumed ortho-
graphic, this case is much less of an approximation. Since
the model views are taken only once, it is not unreasonable
to require that they be taken in a special way, namely, with
a tele lens (assuming we are dealing with object recogni-
tion, rather than scene recognition). If this requirement is
satisfied, then the recognition task is general since we allow
any perspective view to be taken during the recognition
process.
V. Experimental Data
The experiments described in this section were done in
order to evaluate the practical aspect of using the trilinear
result for re-projection compared to using epipolar intersection
and the linear combination result of [38] (the latter
we have shown is simply a limiting case of the trilinear
result).
The epipolar intersection method was implemented as
described in Section III by recovering first the fundamental
matrices. Although eight corresponding points are sufficient
for a linear solution, in practice one would use more
than eight points for recovering the fundamental matrices
in a linear or non-linear squares method. Since linear least
squares methods are still sensitive to image noise, we used
the implementation of a non-linear method described in
[20] which was kindly provided by T. Luong and L. Quan
(these were two implementations of the method proposed
in [20] - in each case, the implementation that provided
the better results was adopted).
The first experiment is with simulation data showing
that even when the epipolar geometry is recovered accu-
rately, it is still significantly better to use the trilinear result
which avoids the process of line intersection. The second
experiment is done on a real set of images, comparing
the performance of the various methods and the number of
corresponding points that are needed in practice to achieve
reasonable re-projection results.
A. Computer Simulations
We used an object of 46 points placed randomly with z coordinates between 100 and 120 units, and x, y coordinates ranging randomly between −125 and +125. The focal length was 50 units and the first view was obtained by the perspective projection (fx/z, fy/z). The second view (ψ₂) was generated by a rotation around the point (0, 0, 100) with axis (0.14, 0.7, 0.7) and an angle of 0.3 radians. The third view (ψ₃) was generated by a rotation around the axis (0, 1, 0) with the same translation and angle. Various amounts of random
noise was applied to all points that were to be re-projected
onto a third view, but not to the eight or seven points
that were used for recovering the parameters (fundamental
matrices, or trilinear coefficients). The noise was random,
added separately to each coordinate and with varying levels
from 0.5 to 2.5 pixel error. We have done 1000 trials as fol-
lows: 20 random objects were created, and for each degree
of error the simulation was ran 10 times per object. We
collected the maximal re-projection error (in pixels) and
the average re-projection error (averaged of all the points
that were re-projected). These numbers were collected separately
for each degree of error by averaging over all trials
(200 of them) and recording the standard deviation as well.
Since no error were added to the eight or seven points that
were used to determine the epipolar geometry and the tri-linear
coefficients, we simply solved the associated linear
systems of equations required to obtain the fundamental
matrices or the trilinear coefficients.
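The synthetic setup can be reproduced roughly as follows in Python; the rotation about a point is built with Rodrigues' formula, and the uniform per-coordinate noise model is an assumption consistent with, but not dictated by, the description above.

    import numpy as np

    def rodrigues(axis, angle):
        k = np.asarray(axis, dtype=float)
        k = k / np.linalg.norm(k)
        K = np.array([[0, -k[2], k[1]], [k[2], 0, -k[0]], [-k[1], k[0], 0]])
        return np.eye(3) + np.sin(angle) * K + (1.0 - np.cos(angle)) * (K @ K)

    def project(points, R=np.eye(3), center=np.zeros(3), f=50.0):
        P = (points - center) @ R.T + center        # rotate the object about 'center'
        return f * P[:, :2] / P[:, 2:3]             # perspective projection (fx/z, fy/z)

    rng = np.random.default_rng(0)
    obj = np.column_stack([rng.uniform(-125, 125, (46, 2)),
                           rng.uniform(100, 120, 46)])
    view1 = project(obj)
    view2 = project(obj, rodrigues([0.14, 0.7, 0.7], 0.3), center=np.array([0.0, 0.0, 100.0]))
    view3 = project(obj, rodrigues([0.0, 1.0, 0.0], 0.3), center=np.array([0.0, 0.0, 100.0]))
    noisy_view3 = view3 + rng.uniform(-1.0, 1.0, view3.shape)   # one of the tested noise levels

Fundamental matrices or trilinear coefficients are then estimated from the noise-free points, and the remaining points are re-projected to collect the error statistics.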
The results are shown in Figure 1. The graph on the left
shows the performance of both algorithms for each level of
image noise by measuring the maximal re-projection error.
We see that under all noise levels, the trilinear method is
significantly better and also has a smaller standard devi-
ation. Similarly for the average re-projection error shown
in the graph on the right.
This difference in performance is expected, as the tri-linear
method takes all three views together, rather than
every pair separately, and thus avoids line intersections.
B. Experiments On Real Images
Figure
shows three views of the object we selected for
the experiment. The object is a sports shoe with added texture
to facilitate the correspondence process. This object
was chosen because of its complexity, i.e., it has a shape of
a natural object and cannot easily be described parametrically
(as a collection of planes or algebraic surfaces). Note
that the situation depicted here is challenging because the
re-projected view is not in-between the two model views,
i.e., one should expect a larger sensitivity to image noise
than in-between situations. A set of 34 points were manually
selected on one of the frames, / 1 , and their correspondences
were automatically obtained along all other frames
used in this experiment. The correspondence process is
based on an implementation of a coarse-to-fine optical-
flow algorithm described in [7]. To achieve accurate correspondences
across distant views, intermediate in-between
frames were taken and the displacements across consecutive
frames were added. The overall displacement field was
then used to push ("warp") the first frame towards the target
frame and thus create a synthetic image. Optical-flow
was applied again between the synthetic frame and the target
frame and the resulting displacement was added to the
overall displacement obtained earlier. This process provides
a dense displacement field which is then sampled to
obtain the correspondences of the 34 points initially chosen
in the first frame. The results of this process are shown in
Figure
2 by displaying squares centered around the computed
locations of the corresponding points. One can see
that the correspondences obtained in this manner are rea-
sonable, and in most cases to sub-pixel accuracy. One can
readily automate further this process by selecting points
in the first frame for which the Hessian matrix of spatial
derivatives is well conditioned - similar to the confidence
values suggested in the implementations of [4, 7, 34] -
however, the intention here was not so much as to build a
complete system but to test the performance of the trilinear
re-projection method and compare it to the performance of
epipolar intersection and the linear combination methods.
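The correspondence chaining can be sketched as follows; optical_flow and warp are placeholders for any coarse-to-fine flow estimator and backward-warping routine (they are not functions from the paper), and the simple pixelwise addition of consecutive fields mirrors the description in the text.

    import numpy as np

    def chain_flow(frames, optical_flow, warp):
        # 1) add displacements across consecutive frames,
        # 2) warp the first frame by the accumulated field into a synthetic image,
        # 3) re-estimate flow between the synthetic image and the final target
        #    and add the residual to the accumulated field.
        total = np.zeros(frames[0].shape + (2,))
        for prev, nxt in zip(frames[:-1], frames[1:]):
            total += optical_flow(prev, nxt)
        synthetic = warp(frames[0], total)
        total += optical_flow(synthetic, frames[-1])
        return total

    def sample_correspondences(points, total_flow):
        # Read the dense displacement field at the selected point locations.
        return [(x + total_flow[int(round(y)), int(round(x)), 0],
                 y + total_flow[int(round(y)), int(round(x)), 1]) for x, y in points]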
The trilinear method requires at least seven corresponding
points across the three views (we need 26 equation,
and seven points provide 28 equations), whereas epipolar
intersection can be done (in principle) with eight points.
Fig. 1. Comparing the performance of the epipolar intersection method (the dotted line) and the trilinear functions method (dashed line) in the
presence of image noise. The graph on the left shows the maximal re-projection error averaged over 200 trials per noise level (bars represent standard
deviation). Graph on the right displays the average re-projection error averaged over all re-projected points averaged over the 200 trials per noise
level.
Fig. 2. Top Row: Two model views, / 1 on the left and / 2 on the right (image size are 256 \Theta 240). The overlayed squares illustrate the corresponding
points (34 points). Bottom Row: Third view / 3 . Note that / 3 is not in-between / 1 and / 2 , making the re-projection problem more challenging (i.e.,
performance is more sensitive to image noise than in-between situations).
Fig. 3. Re-projection onto / 3 using the trilinear result. The re-projected points are marked as crosses, therefore should be at the center of the squares
for accurate re-projection. On the left, the minimal number of points were used for recovering the trilinear coefficients (seven points); the average
pixel error between the true an estimated locations is 0.98, and the maximal error is 3.3. On the right 10 points were used in a least squares fit;
average error is 0.44 and maximal error is 1.44.
Fig. 4. Results of re-projection using intersection of epipolar lines. In the lefthand display the ground plane points were used for recovering the
fundamental matrix (see text), and in the righthand display the fundamental matrices were recovered from the implementation of [20] using all 34
points across the three views. Maximum displacement error in the lefthand display is 25.7 pixels and average error is 7.7 pixels. Maximal error in the
righthand display is 43.4 pixels and average error is 9.58 pixels.
The question we are about to address is what is the number
of points that are required in practice (due to errors in
correspondence, lens distortions and other effects that are
not adequately modeled by the pin-hole camera model) to
achieve reasonable performance?
The trilinear result was first applied with the minimal
number of points (seven) for solving for the coefficients,
and then applied with 8,9, and 10 points using a linear
least-squares solution (note that in general, better solutions
may be obtained by using SVD or Jacobi methods instead
of linear least-squares, but that was not attempted here).
The results are shown in Figure 3. Seven points provide a
re-projection with maximal error of 3.3 pixels and average
error of 0.98 pixels. The solution using 10 points provided an improvement, with a maximal error of 1.44 and an average error of 0.44 pixels. The performance using eight and nine points was reasonably in between the performances above. Using more points did not improve the results significantly; for example, when all 34 points were used the maximal error went down to 1.14 pixels while the average error remained essentially unchanged.
Next the epipolar intersection method was applied. We
used two methods for recovering the fundamental matrices.
One method is by using the implementation of [20], and the
other is by taking advantage that four of the corresponding
points are coming from a plane (the ground plane). In the
former case, much more than eight points were required
in order to achieve reasonable results. For example, when
using all the 34 points, the maximal error was 43.4 pixels
and the average error was 9.58 pixels. In the latter case,
we recovered first the homography B due to the ground
plane and then the epipole v 00 using two additional points
(those on the film cartridges). It is then known (see [28, 21, 32]) that F₁₃ ≅ [v'']ₓ B, where [v'']ₓ is the anti-symmetric (cross-product) matrix of v''. A similar procedure was used to recover
F 23 . Therefore, only six points were used for re-projection,
Fig. 5. Results of re-projection using the linear combination of views method proposed by [38] (applicable to parallel projection). Top Row: In the
lefthand display the linear coefficients were recovered from four corresponding points; maximal error is 56.7 pixels and average error is 20.3 pixels. In
the righthand display the coefficients were recovered using 10 points in a linear least squares fashion; maximal error is 24.3 pixels and average error
is 6.8 pixels. Bottom Row: The coefficients were recovered using all 34 points across the three views. Maximal error is 29.4 pixels and average error
is 5.03 pixels.
but nevertheless, the results were slightly better: maximal
error of 25.7 pixels and average error of 7.7 pixels. Figure 4
shows these results.
Finally, we tested the performance of re-projection using
the linear combination method. Since the linear combination
method holds only for orthographic views, we are actually
testing the orthographic assumption under a perspective
situation, or in other words, whether the higher (bilin-
and trilinear) order terms of the trilinear equations are
significant or not. The linear combination method requires
at least four corresponding points across the three views.
We applied the method with four, 10 (for comparison with
the trilinear case shown in Figure 3), and all 34 points
(the latter two using linear least squares). The results are
displayed in Figure 5. The performance in all cases are significantly
poorer than when using the trilinear functions,
but better than the epipolar intersection method.
VI. Discussion
We have seen that any view of a fixed 3D object can be
expressed as a trilinear function with two reference views
in the general case, or as a bilinear function when the reference
views are created by means of parallel projection.
These functions provide alternative, much simpler, means
for manipulating views of a scene than other methods.
Moreover, they require fewer corresponding points in the-
ory, and much fewer in practice. Experimental results show
that the trilinear functions are also useful in practice yielding
performance that is significantly better than epipolar
intersection or the linear combination method (although
we emphasize that the linear combination was tested just
to provide a base-line for comparison, i.e., to verify that
the extra bilinear and trilinear terms indeed contribute to
better performance).
In general two views admit a "fundamental" matrix (cf.
[10]) representing the epipolar geometry between the two
views, and whose elements are subject to a cubic constraint
(rank of the matrix is 2). The trilinearity results (Theorems
imply, first, that three views admit a "fundamental"
tensor with 27 distinct elements. Second, the robustness
of the re-projection results may indicate that the elements
of this tensor, are either independent, or constrained by a
second order polynomial. In other words, in the two-view
case, the elements of the fundamental matrix lie on third-degree
hypersurface in P 9 . In the error-free case, a linear
solution, ignoring the cubic constraint, is valid. However,
in the presence of errors, the linear solution does not guarantee
that the point in P 9 (representing the solution) will
lie on the hypersurface, hence a non-admissible solution is
obtained. The non-linear solutions proposed by [20], do not
address this problem directly, but in practice yield better
behaved solutions than the linear ones. Since the trilinear
case yields good performance in practice using linear meth-
ods, we arrive to the conjecture above. The notion of the
"fundamental" tensor, its properties, relation to the geometry
of three views, and applications to 3D reconstruction
from multiple views, constitutes (in my mind) an important
future direction.
The application that was emphasized throughout the paper
is visual recognition via alignment. Reasonable performance
was obtained with the minimal number of required
points (seven) with the novel view (/ 3 ) - which may be
too many if the image to model matching is done by trying
all possible combinations of point matches. The existence
of bilinear functions in the special case where the model
is orthographic, but the novel view is perspective, is more
encouraging from the standpoint of counting points. Here
we have the result that only five corresponding points are
required to obtain recognition of perspective views (pro-
vided we can satisfy the requirement that the model is or-
thographic). We have not experimented with bilinear functions
to see how many points would be needed in practice,
but plan to do that in the future. Because of their simplic-
ity, one may speculate that these algebraic functions will
find uses in tasks other than visual recognition - some of
those are discussed below.
There may exist other applications where simplicity is of
major importance, whereas the number of points is less of
a concern. Consider for example, the application of model-based
compression. With the trilinear functions we need
17 parameters to represent a view as a function of two reference
views in full correspondence (recall, 27 coefficients
were used in order to reduce the number of corresponding
points from nine to seven). Assume both the sender
and the receiver have the two reference views and apply
the same algorithm for obtaining correspondences between
the two views. To send a third view (ignoring problems
of self occlusions that may be dealt with separately) the
sender can solve for the 17 parameters using many points,
but eventually send only the 17 parameters. The receiver
then simply combines the two reference views in a "trilinear way" given the received parameters. This is clearly a
domain where the number of points is not a major concern,
whereas simplicity, and robustness (as shown above) due to
the short-cut in the computations, is of great importance.
Related to image coding, an approach of image decomposition
into "layers" was recently proposed by [1, 2]. In this
approach, a sequence of views is divided up into regions,
the motion of each of which is described approximately by a 2D
affine transformation. The sender sends the first image followed
only by the six affine parameters for each region for
each subsequent frame. The use of algebraic functions of
views can potentially make this approach more powerful
because instead of dividing up the scene into planes one
can attempt to divide the scene into objects, each carries
the 17 parameters describing its displacement onto the subsequent
frame.
Another area of application may be in computer graph-
ics. Re-projection techniques provide a short-cut for image
rendering. Given two fully rendered views of some 3D ob-
ject, other views (again ignoring self-occlusions) can be rendered
by simply "combining" the reference views. Again,
the number of corresponding points is less of a concern
here.
Acknowledgments
I acknowledge ONR grants N00014-92-J-1879 and
N00014-93-1-0385, NSF grant ASC-9217041, and ARPA
grant N00014-91-J-4038 as a source of funding for the Artificial
Intelligence laboratory and for the Center for Biological
Computational Learning. Also acknowledged is the
McDonnell-Pew postdoctoral fellowship that has been my
direct source of funding for the duration of this work. I
thank T. Luong and L. Quan for providing their implementation
for recovering fundamental matrices and epipoles.
Thanks to N. Navab and A. Azarbayejani for assistance in
capturing the image sequence (equipment courtesy of MIT
Media Laboratory).
--R
Layered representations for image coding.
Layered representation for motion analysis.
Inherent ambiguities in recovering 3D motion and structure from a noisy flow field.
A unified perspective on computational techniques for the measurement of visual motion.
Contour matching using local affine transformations.
Invariant linear methods in photogrammetry and model-matching
Hierarchical motion-based frame rate conversion
Affine and projective structure from motion.
Robustness of correspondence based structure from motion.
What can be seen in three dimensions with an uncalibrated stereo rig?
What can two images tell us about a third one?
Why stereo vision is not always about 3D reconstruction.
Stereo from uncalibrated cameras.
Relative orientation.
Relative orientation revisited.
Recognizing solid objects by alignment with an image.
Affine structure from motion.
A computer algorithm for reconstructing a scene from two projections.
The reconstruction of a scene from two projections - configurations that defeat the 8-point algorithm.
On determining the fundamental matrix: Analysis of different methods and experimental results.
Canonical representations for the geometries of multiple projective views.
The projective geometry of ambiguous surfaces.
Correspondence and affine shape from two orthographic views: Motion and Recognition.
Geometry and Photometry in 3D visual recognition.
Illumination and view position in 3D visual recognition.
On geometric and algebraic aspects of 3D affine and projective structures from perspective 2D views.
Projective re-construction from two perspective/orthographic views and for visual recognition
Projective structure from uncalibrated images: structure from motion and recognition.
Trilinearity in visual recognition by alignment.
Relative affine structure: Theory and application to 3d reconstruction from perspective views.
The quadric reference surface: Applications in registering views of complex 3d objects.
Factoring image sequences into shape and motion.
Uniqueness and estimation of three-dimensional motion parameters of rigid objects with curved surfaces.
The Interpretation of Visual Motion.
Aligning pictorial descriptions: an approach to object recognition.
Recognition by linear combination of models.
Model based invariants for 3-D vision
--TR
--CTR
D. Gregory Arnold , Kirk Sturtz , Vince Velten , N. Nandhakumar, Dominant-Subspace Invariants, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.7, p.649-662, July 2000
Shai Avidan , Amnon Shashua, Threading Fundamental Matrices, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.1, p.73-77, January 2001
D. Ortn , J. M. M. Montiel, Indoor robot motion based on monocular images, Robotica, v.19 n.3, p.331-342, May 2001
Gideon P. Stein , Amnon Shashua, On Degeneracy of Linear Reconstruction From Three Views: Linear Line Complex and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.3, p.244-251, March 1999
Harpreet S. Sawhney , Yanlin Guo , Rakesh Kumar, Independent Motion Detection in 3D Scenes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.10, p.1191-1199, October 2000
Atsushi Marugame , Jiro Katto , Mutsumi Ohta, Structure Recovery with Multiple Cameras from Scaled Orthographic and Perspective Views, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.7, p.628-633, July 1999
Jianbo Su , Ronald Chung , Liang Jin, Homography-based partitioning of curved surface for stereo correspondence establishment, Pattern Recognition Letters, v.28 n.12, p.1459-1471, September, 2007
Long Quan, Two-Way Ambiguity in 2D Projective Reconstruction from Three Uncalibrated 1D Images, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.2, p.212-216, February 2001
Kalle strm , Magnus Oskarsson, Solutions and Ambiguities of the Structure and Motion Problem for 1DRetinal Vision, Journal of Mathematical Imaging and Vision, v.12 n.2, p.121-135, April 2000
Ronald Chung , Hau-San Wong, Polyhedral Object Localization in an Image by Referencing to a Single Model View, International Journal of Computer Vision, v.51 n.2, p.139-163, February
Akihiro Sugimoto, A Linear Algorithm for Computing the Homography from Conics in Correspondence, Journal of Mathematical Imaging and Vision, v.13 n.2, p.115-130, Oct. 2000
S. Avidan , T. Evgeniou , A. Shashua , T. Poggio, Image-based view synthesis by combining trilinear tensors and learning techniques, Proceedings of the ACM symposium on Virtual reality software and technology, p.103-110, September 1997, Lausanne, Switzerland
Richard I. Hartley, Lines and Points in Three Views and the Trifocal Tensor, International Journal of Computer Vision, v.22 n.2, p.125-140, March 1997
Nassir Navab , Mirko Appel, Canonical Representation and Multi-View Geometry of Cylinders, International Journal of Computer Vision, v.70 n.2, p.133-149, November 2006
Olivier Faugeras , Long Quan , Peter Strum, Self-Calibration of a 1D Projective Camera and Its Application to the Self-Calibration of a 2D Projective Camera, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.10, p.1179-1185, October 2000
Anders Heyden, Reduced Multilinear Constraints: Theory and Experiments, International Journal of Computer Vision, v.30 n.1, p.5-26, Oct. 1998
Stefan Carlsson , Daphna Weinshall, Dual Computation of Projective Shape and Camera Positions from Multiple Images, International Journal of Computer Vision, v.27 n.3, p.227-241, May 1, 1998
Long Quan , Bill Triggs , Bernard Mourrain, Some Results on Minimal Euclidean Reconstruction from Four Points, Journal of Mathematical Imaging and Vision, v.24 n.3, p.341-348, May 2006
Gideon P. Stein , Amnon Shashua, Model-Based Brightness Constraints: On Direct Estimation of Structure and Motion, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.22 n.9, p.992-1015, September 2000
Cristian Sminchisescu , Dimitris Metaxas , Sven Dickinson, Incremental Model-Based Estimation Using Geometric Constraints, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.5, p.727-738, May 2005
Amnon Shashua , Nassir Navab, Relative Affine Structure: Canonical Model for 3D From 2D Geometry and Applications, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.9, p.873-883, September 1996
Shai Avidan , Amnon Shashua, Novel View Synthesis by Cascading Trilinear Tensors, IEEE Transactions on Visualization and Computer Graphics, v.4 n.4, p.293-306, October 1998
John Oliensis, A Multi-Frame Structure-from-Motion Algorithm under Perspective Projection, International Journal of Computer Vision, v.34 n.2-3, p.163-192, Nov. 1999
Amnon Shashua, On Photometric Issues in 3D Visual Recognition from aSingle 2D Image, International Journal of Computer Vision, v.21 n.1-2, p.99-122, January. 1997
Hayman , Torfi Thrhallsson , David Murray, Tracking While Zooming Using Affine Transfer and Multifocal Tensors, International Journal of Computer Vision, v.51 n.1, p.37-62, January | visual recognition;algebraic and geometric invariants;reprojection;projective geometry;alignment |
628731 | Person Identification Using Multiple Cues. | AbstractThis paper presents a person identification system based on acoustic and visual features. The system is organized as a set of non-homogeneous classifiers whose outputs are integrated after a normalization step. In particular, two classifiers based on acoustic features and three based on visual ones provide data for an integration module whose performance is evaluated. A novel technique for the integration of multiple classifiers at an hybrid rank/measurement level is introduced using HyperBF networks. Two different methods for the rejection of an unknown person are introduced. The performance of the integrated system is shown to be superior to that of the acoustic and visual subsystems. The resulting identification system can be used to log personal access and, with minor modifications, as an identity verification system. | Introduction
The identification of a person interacting with
computers represents an important task for automatic
systems in the area of information retrieval,
automatic banking, control of access to security ar-
eas, buildings and so on. The need for a reliable
identification of interacting users is obvious. At the
same time it is well known that the security of such
systems is too often violated in every day life. The
possibility to integrate multiple identification cues,
such as password, identification card, voice, face,
fingerprints and the like will, in principle, enhance
the security of a system to be used by a selected set
of people.
This paper describes in detail the theoretical
foundations and design methodologies of a person
recognition system that is part of MAIA, the integrated
AI project under development at IRST [26].
Previous works about speaker recognition [30],
[16] have proposed methods for classifying and combining
acoustic features and for normalizing [27],
[22] the various classifier scores. In particular, score
normalization is a fundamental step when a system
is required to confirm or reject the identity given
by the user (user verification): in this case, in fact,
the identity is accepted or rejected according to a
comparison with a preestimated threshold. Since
the integration of voice and images in an identification
system is a new concept, new methods for both
classifier normalization and integration were inves-
tigated. Effective ways for rejecting an unknown
person by considering score and rank information
and for comparing images with improved similarity
measures are proposed. A simple method for
adapting the acoustic models of the speakers to a
real operating environment also was developed.
The speaker and face recognition systems are decomposed
into two and three single feature classifiers
respectively. The resulting five classifiers produce
non-homogeneous lists of scores that are combined
using two different approaches. In the first
approach, the scores are normalized through a robust
estimate of the location and scale parameters
of the corresponding distributions. The normalized
scores are then combined using a weighted geometric
average and the final identification is accepted
or rejected according to the output of a linear clas-
sifier, based on score and rank information derived
from the available classifiers. Within the second
approach, the problem of combining the normalized
outputs of multiple classifiers and of accept-
ing/rejecting the resulting identification is considered
a learning task. A mapping from the scores
and ranks of the classifiers into the interval (0; 1)
is approximated using an HyperBF network. A
final threshold is then introduced based on cross-
validation. System performance is evaluated and
discussed for both strategies. Because of the novelty
of the problem, standard data-bases for system
training and test are not yet available. For
this reason, the experiments reported in this paper
are based on data collected at IRST. A system implementation
operating in real-time is available and
was tested on a variety of IRST researchers and vis-
itors. The joint use of acoustic and visual features
proved effective in increasing system performance
and reliability.
The system described here represents an improvement
over a recently patented identification
system based on voice and face recognition [6], [9].
The two systems differ in many ways: in the latter
the speaker and face recognition systems are not
further decomposed into classifiers, the score normalization
does not rely on robust statistical techniques
and, finally, the rejection problem is not addressed
The next sections will introduce the speaker and
face recognition systems. The first approach to the
integration of classifiers and the linear accept/reject
rule for the final system identification are then dis-
cussed. Finally, the novel rank/measurement level
integration strategy using a HyperBF network is introduced, with a detailed report on system performance.
2. Speaker recognition
The voice signal contains two types of informa-
tion: individual and phonetic. They have mutual
effects and are difficult to separate; this represents
one of the main problems in the development of
automatic speaker and speech recognition systems.
The consequence is that speaker recognition systems
perform better on speech segments having specific
phonetic contents while speech recognition systems
provide higher accuracy when tuned on the
voice of a particular speaker. Usually the acoustic
parameters for a speech/speaker recognizer are derived
by applying a bank of band-pass filters to adjacent
short time windows of the input signal. The
energy outputs of the filters, for various frames, provide
a good domain representation. Figure 1 gives
an example of such an analysis. The speech waveforms
correspond to utterances of the Italian digit 4
(/kwat:ro/) by two different speakers. The energy
outputs of a 24 triangular band-pass filter bank are
represented below the speech waveforms (darker regions
correspond to higher energy values).
Fig. 1. Acoustic analysis of two utterances of the digit 4
(/kwat:ro/) by two different speakers.
In the past years several methods and systems
for speaker identification [13], [16] were proposed
that perform more or less efficiently depending on
the text the user is required to utter (in general,
systems can be distinguished into text dependent or
text independent), the length of the input utterance,
the number of people in the reference database and,
finally, the time interval between test and training
recordings.
For security applications, it is desirable that the
user utter a different sentence during each inter-
action. The content of the utterance can then be
verified to ensure that the system is not cheated by
prerecorded messages. For this work, a text independent
speaker recognition system based on Vector
Quantization (VQ) [28] was built. While it cannot
yet verify the content of the utterance, it can
be modified (using supervised clustering or other
techniques) to obtain this result.
A block diagram of the system is depicted in
Figure
2. In the system, each reference speaker is
represented by means of two sets of vectors (code-
books) that describe his/her acoustic characteris-
tics. During identification, two sets of acoustic features
(static and dynamic), derived from the short
time spectral analysis of the input speech signal,
are classified by evaluating their distances from the
prototype vectors contained in the speaker code-book
couples. In this way, two lists of scores are
sent to the integration module. In the following
both the spectral analysis and vector quantization
techniques will be described in more detail (see also
[21] and a reference book such as [23]).
Since the power spectrum of the speech signal decreases as frequency increases, a preemphasis filter that enhances the higher frequencies is applied to the sampled input signal. The transfer function of the filter is a first-order high-pass of the form H(z) = 1 − a·z^{-1}.
The preemphasized signal, x(n), 1 ≤ n ≤ N, is subdivided into frames y_t(n), 1 ≤ t ≤ T, having length L. Each frame is obtained by multiplying x(n) by a Hamming window h_t(n):

y_t(n) = x((t − 1)S + n) h(n),  1 ≤ n ≤ L    (1)

h(n) = 0.54 − 0.46 cos(2π(n − 1)/(L − 1)),  1 ≤ n ≤ L    (2)
Fig. 2. The speaker recognition system based on Vector
Quantization.
In the equations above, L represents the length, in samples, of the Hamming window and S is the analysis step (also expressed in samples). For the system, L and S were chosen to correspond to 20 ms and 10 ms respectively. The signal is multiplied by a Hamming window (raised cosine) to minimize the sidelobe effects on the spectrum of the resulting sequence y_t(n).
The acoustic analysis of each frame is performed
as follows:
1. the power spectrum of the sequence y t (n) is
evaluated;
2. a bank of Q triangular band-pass filters, spaced according to a logarithmic scale (Mel scale), is applied to the power spectrum and the energy outputs s_tq, 1 ≤ q ≤ Q, from each filter are evaluated;
3. the Mel Frequency Cepstrum Coefficients (MFCCs) are computed from the filterbank outputs by means of a cosine transform; the MFCCs are arranged into a vector which is called static, since it refers to a single speech frame;
4. to account for the transitional information
contained in the speech signal a linear fit
is applied to the components of 7 adjacent
MFCCs; the resulting regression coefficients
are arranged into a vector that is called dynamic
5. a binary variable is finally evaluated that allows
marking the frame as speech or background
noise; this parameter is computed by means of
the algorithm described in [12].
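As an illustration of the front end described above, the following sketch computes static MFCC-like vectors from 20 ms Hamming-windowed frames taken every 10 ms, using 24 Mel-spaced triangular filters, a cosine transform of the log filter energies and 8 cepstral coefficients, plus regression-based dynamic coefficients over 7 adjacent frames. The preemphasis coefficient, FFT size and exact filter design below are assumptions, not the system's actual settings.

import numpy as np

def mel(f):          # Hz -> Mel (standard formula, assumed)
    return 2595.0 * np.log10(1.0 + f / 700.0)

def mel_inv(m):
    return 700.0 * (10.0 ** (m / 2595.0) - 1.0)

def mel_filterbank(n_filters, n_fft, fs):
    # Triangular filters equally spaced on the Mel scale.
    edges = mel_inv(np.linspace(mel(0), mel(fs / 2), n_filters + 2))
    bins = np.floor((n_fft + 1) * edges / fs).astype(int)
    fb = np.zeros((n_filters, n_fft // 2 + 1))
    for q in range(1, n_filters + 1):
        l, c, r = bins[q - 1], bins[q], bins[q + 1]
        fb[q - 1, l:c] = (np.arange(l, c) - l) / max(c - l, 1)
        fb[q - 1, c:r] = (r - np.arange(c, r)) / max(r - c, 1)
    return fb

def mfcc_frames(x, fs, frame_ms=20, step_ms=10, n_filters=24, n_ceps=8, pre=0.95):
    x = np.append(x[0], x[1:] - pre * x[:-1])          # preemphasis (coefficient assumed)
    L = int(fs * frame_ms / 1000); S = int(fs * step_ms / 1000)
    n_fft = 512
    win = np.hamming(L)
    fb = mel_filterbank(n_filters, n_fft, fs)
    ceps = []
    for start in range(0, len(x) - L + 1, S):
        frame = x[start:start + L] * win
        power = np.abs(np.fft.rfft(frame, n_fft)) ** 2
        energies = np.log(fb @ power + 1e-10)
        # cosine transform of the log filter energies -> cepstral coefficients
        k = np.arange(1, n_ceps + 1)[:, None]
        q = np.arange(n_filters)[None, :]
        ceps.append(np.cos(np.pi * k * (q + 0.5) / n_filters) @ energies)
    return np.array(ceps)                              # static vectors, one per frame

def dynamic_coeffs(static, width=7):
    # linear-regression (delta) coefficients over `width` adjacent frames
    half = width // 2
    t = np.arange(-half, half + 1)
    out = np.zeros_like(static)
    for i in range(half, len(static) - half):
        out[i] = (static[i - half:i + half + 1] * t[:, None]).sum(0) / (t ** 2).sum()
    return out

if __name__ == "__main__":
    fs = 16000
    x = np.random.randn(fs)                            # one second of noise as a stand-in signal
    s = mfcc_frames(x, fs)
    d = dynamic_coeffs(s)
    print(s.shape, d.shape)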
The Mel scale is motivated by auditory analysis
of sounds. The inverse Fourier transform of the
log-spectrum (cepstrum) provides parameters that
improves performance at both speech and speaker
recognition [23], [29]. Furthermore, the Euclidean
distance between two cepstral vectors represents
a good measure for comparing the corresponding
speech spectra. The static and dynamic 8-
dimensional vectors related to windows marked as
background noise are not considered during both
system training and testing. As previously said, VQ is used to design the static and dynamic codebooks of a given reference speaker, say the i-th one.
Starting from a set of training vectors (static or dynamic) Θ_i = {θ_ik, 1 ≤ k ≤ K_i}, derived from a certain number of utterances, the objective is to find a new set Ψ_i = {ψ_im, 1 ≤ m ≤ M} that represents well the acoustic characteristics of the given speaker. To do this a clustering algorithm, similar to that described in [21], is applied to the Θ_i set. The algorithm makes use of an iterative procedure that allows determination of the codebook centroids, Ψ_i, by minimizing their average distance from the training vectors:

D(Θ_i, Ψ_i) = (1/K_i) Σ_{k=1..K_i} min_{1≤m≤M} d(θ_ik, ψ_im)

The distance d(θ_ik, ψ_im) is defined as follows:

d(θ_ik, ψ_im) = (θ_ik − ψ_im)^t W^{-1} (θ_ik − ψ_im)

In the equation above, t denotes transpose and W is the covariance matrix of the training vectors. The matrix W is estimated from the training data of all the speakers in the reference database. This matrix was found to be approximately diagonal, so that only the diagonal elements are used to evaluate distances.
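A minimal sketch of the codebook design, assuming a k-means-style iteration with the diagonally weighted distance defined above; the actual clustering algorithm of [21] (initialization, splitting schedule) is not reproduced, and here W is estimated from the speaker's own data rather than from the whole reference population.

import numpy as np

def weighted_dist(a, b, inv_var):
    # squared distance weighted by the inverse of the (diagonal) covariance W
    d = a - b
    return np.sum(d * d * inv_var, axis=-1)

def train_codebook(vectors, size=64, iters=20, seed=0):
    """vectors: (K, dim) training vectors of one speaker; returns (size, dim) centroids."""
    rng = np.random.default_rng(seed)
    inv_var = 1.0 / (vectors.var(axis=0) + 1e-8)       # simplification: W from this speaker only
    centroids = vectors[rng.choice(len(vectors), size, replace=False)]
    for _ in range(iters):
        # assign each vector to its nearest centroid
        d = weighted_dist(vectors[:, None, :], centroids[None, :, :], inv_var)
        labels = d.argmin(axis=1)
        # update centroids as cell means (keep old centroid for empty cells)
        for m in range(size):
            cell = vectors[labels == m]
            if len(cell):
                centroids[m] = cell.mean(axis=0)
    return centroids, inv_var

if __name__ == "__main__":
    train = np.random.randn(500, 8)                    # stand-in static vectors
    cb, inv_var = train_codebook(train, size=16)
    print(cb.shape)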
In the recognition phase the distances, D Si ; DDi ,
between the static and dynamic vector sequences,
derived from the input signal, and the corresponding
speaker codebooks are evaluated and sent to the
integration module.
If X = {x_t, 1 ≤ t ≤ T} is the static (or dynamic) input sequence and Ψ_i is the i-th static (or dynamic) codebook, then the total static (or dynamic) distance will be:

D_i = (1/T) Σ_{t=1..T} min_{1≤m≤M} d(x_t, ψ_im),  1 ≤ i ≤ I

where I is the total number of speakers in the reference database.
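Scoring an input sequence against all reference codebooks then reduces to averaging, over the frames, the minimum weighted distance to each codebook; a self-contained sketch:

import numpy as np

def sequence_distance(frames, codebook, inv_var):
    """Average over frames of the minimum weighted distance to the codebook centroids."""
    d = np.sum((frames[:, None, :] - codebook[None, :, :]) ** 2 * inv_var, axis=-1)
    return d.min(axis=1).mean()

def score_all_speakers(frames, codebooks, inv_var):
    # returns one distance per reference speaker (lower = more similar)
    return np.array([sequence_distance(frames, cb, inv_var) for cb in codebooks])

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    codebooks = [rng.normal(size=(16, 8)) for _ in range(5)]   # 5 reference speakers
    frames = rng.normal(size=(120, 8))                          # input static vectors
    print(score_all_speakers(frames, codebooks, np.ones(8)))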
To train the system, 200 isolated utterances of
the Italian digits (from 0 to 9) were collected for
each reference user. The recordings were realized by
means of a Digital Audio Tape (DAT): the signal on
the DAT tape, sampled at 48 kHz, was downsampled, manually end-pointed, and stored
on a computer disk. The speech training material
was analyzed and clustered as previously described.
As demonstrated in [28], system performance depends on both input utterance length and codebook size; preliminary experiments have suggested that the speaker to be identified should utter a string of at least 7 digits in a continuous way and in whatever order. In the reported experiments the number of digits was kept equal to 7 and the codebook size was kept fixed, since higher values did not improve recognition accuracy. Furthermore, if input
signal duration is too short, the system requires
the user to repeat the digit string.
To evaluate integrated system performance (see
section 4.1) the reference users interacted 3 times
with the system during 3 different sessions. The
test sessions were carried out in an office environment
using an ARIEL board as acquisition channel.
Furthermore the test phase was performed about
five months after the training recordings. Due
to both the different background noise and acquisition
conditions between training and test, the codebooks
must be adapted.
Adaptation means designing a new codebook,
starting from a given one, that better resembles the
acoustic characteristics of both the operating environment
and the acquisition channel. Adaptation
should also take into account variations in time of
the speaker's voice (intraspeaker variations). Adaptation
requires the use of few utterances to modify
the codebook as it is not necessary to design it from
scratch (this would require at least 30-40 seconds
of speech). In our case, the adaptation vectors are
derived from the digit strings uttered by the users
during a single test session. The dynamic codebooks
were not adapted since they represent temporal
variations of the speech spectra and therefore
they are less sensitive to both intraspeaker voice
variability and acquisition channel variations.
The adaptation process of the i th codebook, C i
can be summarized as follows:
1. the mean vectors of the adaptation vectors and of the given codebook, respectively, are evaluated;
2. the difference vector Δ_i between the two mean vectors is computed;
3. the vectors of C_i are shifted by a quantity equal to Δ_i, obtaining a new set C'_i; in this way C'_i is placed in the region of the adaptation vectors;
4. the adaptation vectors are clustered using the set C'_i as initial estimate of the centroids; therefore a new set of centroids O_i = {o_im} and the corresponding cell occupancies n_im are evaluated;
5. the adapted codebook Ψ_i is obtained according to the following equation:

ψ_im = c'_im + δ_im (o_im − c'_im)    (7)

In the equation above, the parameter δ_im determines the fraction of the deviation vector (o_im − c'_im) that has to be summed to the initial centroid c'_im. Eqn. 7 is a simple method to modify the centroids of a codebook according to the number of data available for their estimates. δ_im can be zero when the utterance used for adaptation does not contain sounds whose spectra are related to the m-th centroid.
For the system, α was chosen equal to 0.1. The two shifts applied by the adaptation procedure can be interpreted as follows:
1. Δ_i, the major shift, accounts for environment and channel variations with respect to training;
2. δ_im, the minor shift, accounts for intra-speaker voice variations in time.
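A sketch of the adaptation procedure; since the exact form of δ_im in Eqn. 7 is not reproduced above, the occupancy-based fraction used below (involving the parameter α) is only an assumed stand-in.

import numpy as np

def adapt_codebook(codebook, adapt_vectors, alpha=0.1, iters=5):
    # steps 1-3: shift the codebook towards the region of the adaptation data
    delta = adapt_vectors.mean(axis=0) - codebook.mean(axis=0)
    shifted = codebook + delta
    # step 4: re-cluster the adaptation vectors using the shifted codebook as initial centroids
    new_centroids = shifted.copy()
    for _ in range(iters):
        d = np.sum((adapt_vectors[:, None, :] - new_centroids[None, :, :]) ** 2, axis=-1)
        labels = d.argmin(axis=1)
        occupancy = np.bincount(labels, minlength=len(new_centroids))
        for m in range(len(new_centroids)):
            if occupancy[m]:
                new_centroids[m] = adapt_vectors[labels == m].mean(axis=0)
    # step 5: blend shifted centroids with the re-estimated ones; the fraction grows with the
    # cell occupancy n_im and is zero for unused cells (assumed form, not Eqn. 7)
    frac = occupancy / (occupancy + alpha * len(adapt_vectors))
    return shifted + frac[:, None] * (new_centroids - shifted)

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    cb = rng.normal(size=(16, 8))
    adapted = adapt_codebook(cb, rng.normal(loc=0.5, size=(200, 8)))
    print(adapted.shape)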
3. Face recognition
Person identification through face recognition is
the most familiar among the possible identification
strategies. Several automatic or semiautomatic systems
were realized since the early seventies - albeit
with varying degree of success. Different techniques
were proposed, ranging from the geometrical description
of salient facial features to the expansion
of a digitized image of the face on an appropriate
Fig. 3. The highlighted regions represent the templates used
for identification.
basis of images (see [8] for references). The strategy
used by the described system is essentially based
on the comparison, at the pixel level, of selected regions
of the face [8]. A set of regions, respectively
encompassing the eyes, nose, and mouth of the user
to be identified are compared with the corresponding
regions stored in the database for each reference
user (see Figure 3). The images should represent a
frontal view of the user face without marked ex-
pressions. As will be clear from the detailed de-
scription, these constraints could be relaxed at the
cost of storing a higher number of images per user
in the database. The fundamental steps of the face
recognition process are the following:
1. acquisition of a frontal view of the user
2. geometrical normalization of the digitized image
3. intensity normalization of the image;
4. comparison with the images stored in the
database.
The image of the user face is acquired with a CCD
camera and digitized with a frame grabber.
To compare the resulting image with those stored
in the database, it is necessary to register the im-
age: it has to be translated, scaled, and rotated so
that the coordinates of a set of reference points take
corresponding standard values. As frontal views are
considered, the centers of the pupils represent a natural
set of control points that can be located with
good accuracy. Eyes can be found through the following
steps:
1. locate the (approximate) symmetry axis of the face;
2. locate the left/right eye by using an eye template
for which the location of the pupil is
known; if the confidence of the eye location is
not sufficiently high, declare failure (the identification
system will use only acoustic information);
3. achieve translation, scale and rotation invariance
by fixing the origin of the coordinate system
at the midpoint of the interocular segment
and the interocular distance and the direction
of the eye-to-eye axis at predefined values.
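Step 3 amounts to a similarity transform computed from the two pupil positions; the sketch below maps them onto assumed canonical coordinates separated by 28 pixels on a horizontal axis (the canonical positions themselves and the output frame are assumptions).

import numpy as np

def similarity_from_eyes(left_eye, right_eye, dst_left=(50.0, 60.0), dst_right=(78.0, 60.0)):
    """2x3 affine matrix mapping detected pupil centers onto canonical positions
    (28-pixel interocular distance, horizontal eye-to-eye axis)."""
    src = np.array(right_eye) - np.array(left_eye)
    dst = np.array(dst_right) - np.array(dst_left)
    # rotation + scale aligning the interocular segments
    ang = np.arctan2(dst[1], dst[0]) - np.arctan2(src[1], src[0])
    scale = np.hypot(*dst) / np.hypot(*src)
    c, s = scale * np.cos(ang), scale * np.sin(ang)
    A = np.array([[c, -s], [s, c]])
    t = np.array(dst_left) - A @ np.array(left_eye)
    return np.hstack([A, t[:, None]])                  # apply as A @ [x, y] + t

if __name__ == "__main__":
    M = similarity_from_eyes((120.0, 140.0), (180.0, 150.0))
    # the left pupil should land on the canonical left-eye position
    print(M[:, :2] @ np.array([120.0, 140.0]) + M[:, 2])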
Under the assumption that the user face is approximately
vertical in the digitized image, a good estimate
of the coordinate S of the symmetry axis is
given by an analysis of the vertical projection P_V of the filtered image I ∗ K_V, where ∗ represents convolution, I the image, K_V the convolution kernel [−1, 0, 1]^t, and P_V the vertical projection whose index i runs over the columns of the image. The face can then be split vertically into
two, slightly overlapping parts containing the left
and right eye respectively. The illumination under
which the image is taken can impair the template
matching process used to locate the eye. To minimize
this effect a filter, N(I), is applied to image I:

N(I) = I / L    (9)
L = I ∗ K_G(σ)    (10)

where K_G(σ) is a Gaussian kernel whose σ is related to the expected interocular distance Δ_ee. The arithmetic
operations act on the values of corresponding
pixels. The process mapping I into N reduces the
influence of ambient lighting while keeping the necessary
image details. This is mainly due to the removal
of linear intensity gradients that are mapped
to the constant value 1. Extensive experiments, using
ray-tracing and texture-mapping techniques to
generate synthetic images under a wide range of
lighting directions have shown that the local contrast
operator of eqn. (9) exhibits a lower illumination
sensitivity than other operators such as the
laplacian, the gradient magnitude or direction [5]
and that there is an optimal value of the parameter
σ (approximately equal to the iris radius).
The same filter is applied to the eye templates.
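A sketch of the local contrast operator as reconstructed in eqns. (9)(10), i.e. the ratio of the image to its Gaussian-smoothed version; the relation between σ and the interocular distance used below is an assumption consistent with the iris-radius indication.

import numpy as np
from scipy.ndimage import gaussian_filter

def local_contrast(image, interocular=28.0):
    """N(I) = I / (I * K_G(sigma)); linear intensity ramps are mapped close to 1."""
    sigma = interocular / 8.0          # roughly the iris radius (assumed)
    smooth = gaussian_filter(image.astype(float), sigma)
    return image / (smooth + 1e-6)

if __name__ == "__main__":
    y, x = np.mgrid[0:64, 0:64]
    ramp = 50.0 + 0.5 * x + 0.2 * y     # a linear illumination gradient
    n = local_contrast(ramp)
    print(float(n.min()), float(n.max()))   # both stay close to 1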
The template matching process is based on the
algorithm of hierarchical correlation proposed by
Burt [11]. Its final result is a map of correlation
values: the center of gravity of the pixels with maximum
value representing the location of the eye.
Once the two eyes have been located, the confidence
of the localization is expressed by a coefficient, CE ,
that measures the symmetry of the eye positions
with respect to the symmetry axis, the horizontal
alignment and the scale relative to that of the eye
templates (eqn. 11), where C_l and C_r represent the (maximum) correlation value for the left/right eye, s the interocular distance expressed as a multiple of the interocular distance of the eyes used as templates, Δθ represents the angle of the interocular axis with respect to the horizontal axis, while σ_θ and σ_s represent tolerances on the deviations from the prototype scale and orientation.
The first factor in the RHS of eqn. (11) is the
average correlation value of the left and right eye:
the higher it is the better the match with the eye
templates. The second factor represents the symmetry
of the correlation values and equals 1 when
the two values are identical. The third and fourth
factors allow weighing the deviation from both the
assumed scale and (horizontal) orientation of the
interocular axis, respectively. The parameters of
the Gaussians, σ_s and σ_θ, were determined by the
analysis of a set of interactions.
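The exact analytic form of eqn. (11) is not reproduced above; the sketch below is one plausible instantiation of its four factors (average correlation, symmetry of the two correlation values, and Gaussian tolerances on scale and orientation), with assumed tolerance values.

import numpy as np

def eye_confidence(c_left, c_right, scale, angle, sigma_s=0.2, sigma_a=np.radians(10)):
    avg = 0.5 * (c_left + c_right)                       # factor 1: average correlation
    sym = min(c_left, c_right) / max(c_left, c_right)    # factor 2: 1 when the two values are equal
    scale_w = np.exp(-((scale - 1.0) ** 2) / (2 * sigma_s ** 2))   # factor 3: deviation from template scale
    angle_w = np.exp(-(angle ** 2) / (2 * sigma_a ** 2))           # factor 4: deviation from horizontal axis
    return avg * sym * scale_w * angle_w

if __name__ == "__main__":
    print(eye_confidence(0.9, 0.85, scale=1.05, angle=np.radians(3)))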
If the value of CE is too low, the face recognition
system declares failure and the identification proceeds
using the acoustic features alone. Otherwise,
the image is translated, scaled and rotated to match
the location of the pupils to that of the database
images. In the reported experiments the interocular
distance was set equal to 28 pixels. Alternative
techniques for locating eyes are reported in [17],
[32]. Due to the geometrical standardization, the
subimages containing the eyes, nose, and mouth
are approximately characterized by the same coordinates
in every image. These regions are extracted
from the image of the user face and compared in
turn to the corresponding regions extracted from
the database entries, previously filtered according
to eqns. (9)(10). Let us introduce a similarity measure C based on the computation of the L1 norm of a vector, ||x||_1 = Σ_i |x_i|, and on the corresponding distance d_L1(x, y) = ||x − y||_1. The L1 distance of two vectors is mapped by C(·, ·) into the interval [0, 1], higher values representing
smaller distances. This definition can be
easily adapted to the comparison of images. For
the comparison to be useful when applied to real
images, it is necessary to normalize the images so that they have the same average intensity μ and standard deviation (or scale) σ. The latter is particularly sensitive to values far from the average μ, so that the scale of the image intensity distribution can be better estimated by the following quantity:

σ_L1 = (1/n) Σ_i |x_i − μ|

where the image is considered as a one-dimensional vector x. The matching of an image B to an image
A can then be quantified by the maximum value
of C(A; B), obtained by sliding the smaller of the
Fig. 4. The distribution of the correlation values for corresponding
features of the same person and of different
people.
two images over the larger one. A major advantage
of the image similarity computed according to
eqn. (12) over the more common estimate given by
the cross-correlation coefficient [1], based on the L 2
norm, is its reduced sensitivity to small amounts
of unusually high differences between corresponding
pixels. These differences are often due to noise
or image specularities such as iris highlights. A detailed
analysis of the similarity measure defined in
eqn. (12) is given in [7]. An alternative technique
for face identification is reported in [31]. Let us
denote with {U_km}, m = 1, ..., p_k, the set of images available for the k-th user. A comparison can now be made between a set of regions of the unknown image N and the corresponding regions of the database images. The regions currently used by the system correspond to the eyes, nose and mouth. A list of similarity scores {s_kα} is obtained for each region F_α by matching the corresponding region of each database image U_km against R_α(N), where R_α(N) represents a region of N containing F_α with a frame whose size is related to the interocular distance. The lists of matching scores corresponding
to eyes, nose, and mouth are then available
for further processing. The distribution of the
correlation values for corresponding features of the
same person and of different people are reported in
Figure
4.
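A sketch of the region matching: both patches are reduced to zero mean and unit L1-based scale, the template is slid over the search region, and the best similarity is retained. The specific mapping of the L1 distance into (0, 1] used below is an assumption, since eqn. (12) is not reproduced here.

import numpy as np

def l1_normalize(patch):
    x = patch.astype(float).ravel()
    mu = x.mean()
    scale = np.abs(x - mu).mean() + 1e-8                 # robust L1-based scale estimate
    return (patch - mu) / scale

def similarity(a, b):
    d = np.abs(l1_normalize(a) - l1_normalize(b)).mean() # average L1 distance per pixel
    return 1.0 / (1.0 + d)                               # assumed mapping into (0, 1]

def best_match(template, region):
    th, tw = template.shape
    rh, rw = region.shape
    best = 0.0
    for i in range(rh - th + 1):                          # slide the smaller patch over the larger one
        for j in range(rw - tw + 1):
            best = max(best, similarity(template, region[i:i + th, j:j + tw]))
    return best

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    face_region = rng.normal(size=(40, 60))
    template = face_region[10:30, 20:50] * 1.3 + 5.0      # same feature, different gain/offset
    print(best_match(template, face_region))              # close to 1 at the correct location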
Integration with the scores derived from the
acoustic analysis can now be performed with a single
or double step process. In the first case, the two
acoustic and the three visual scores are combined
simultaneously, while in the second the acoustic and
visual scores are first combined separately and the
final score is given by the integration of the outputs
of the speaker and face recognition systems (see [9]
for an example of the latter). The next section will
introduce two single-step integration strategies for
classifiers working at the measurement level.
4. Integration
The use of multiple cues, such as face and voice,
provides in a natural way the information necessary
to build a reliable, high performance system.
Specialized subsystems can identify (or verify) each
of the previous cues and the resulting outputs can
then be combined into a unique decision by some
integration process. The objective of this section
is to describe and evaluate some integration strate-
gies. The use of multiple cues for person recognition
proved beneficial for both system performance and
reliability 1 .
A simplified taxonomy of multiple classifier systems
is reported in [33]. Broadly speaking, a classifier
can output information at one of the following
levels:
the abstract level: the output is a subset of the
possible identification labels, without any qualifying
the rank level: the output is a subset of the possible
labels, sorted by decreasing confidence
(which is not supplied);
the measurement level: the output is a subset of
labels qualified by a confidence measure.
The level at which the different classifiers of a
composite system work clearly constrains the ways
their responses can be merged. The first of the following
sections will address the integration of the
speaker/face recognition systems at the measurement
level. The possibility of rejecting a user as
unknown will then be discussed. Finally, a novel,
hybrid level approach to the integration of a set of
classifiers will be presented.
4.1 Measurement level integration
The acoustic and visual identification systems already
constitute a multiple classifier system. How-
ever, both the acoustic and visual classifiers can
be further split into several subsystems, each one
based on a single type of feature. In our system,
five classifiers were considered (see secs. 2 and 3), working on the static and dynamic acoustic features, and on the eyes, nose and mouth regions.
A critical point in the design of an integration
procedure at the measurement level is that of measurement
normalization. In fact, the responses of
the different classifiers usually have different scales
(and possibly offsets), so that a sensible combination
of the outputs can proceed only after the scores
are properly normalized. As already detailed, the
outputs of the identification systems are not ho-
mogeneous: the acoustic features provide distances
while the visual ones provide correlation values. A
Two aspects of reliability are critical for a person identification
system: the first is the ability of rejecting a user
as unknown, the second is the possibility of working with
a reduced input, such as only the speech signal or the face
image.
first step towards the normalization of the scores
is to reverse the sign of distances, thereby making
them concordant with the correlation values: the
higher the value, the more similar the input pat-
terns. Inspection of the score distributions shows
them to be markedly unimodal and roughly sym-
metrical. A simple way to normalize scores is to
estimate their average values and standard deviations
so that distributions can be translated and
rescaled in order to have zero average and unit vari-
ance. The values can then be forced into a standard
interval, such as (0, 1), by means of a hyperbolic
tangent mapping. The normalization of the scores
can rely on a fixed set of parameters, estimated
from the score distributions of a certain number of
interactions, or can be adaptive, estimating the parameters
from the score distribution of the current
interaction. The latter strategy was chosen mainly
because of its ability to cope with variations such as
different speech utterance length without the need
to re-estimate the normalization parameters.
The estimation of the location and scale parameters
of the distribution should make use of robust
statistical techniques [18], [20]. The usual arithmetic
average and standard deviation are not well
suited to the task: they are highly sensitive to outlier
points and could give grossly erroneous esti-
mates. Alternative estimators exist that are sensitive
to the main bulk of the scores (i.e. the central
part of a unimodal symmetric distribution) and are
not easily misled by points in the extreme tails of
the distribution. The median and the Median Absolute
Deviation (MAD) are examples of such location
and scale estimators and can be used to reliably
normalize the distribution of the scores. However,
the median and the MAD estimators have a low
efficiency relative to the usual arithmetic average
and standard deviation. A class of robust estimators
with higher efficiency was introduced by Hampel
under the name of tanh-estimators and is used
in the current implementation of the system (see
[18] for a detailed description). Therefore each list of scores {S_ij}, i = 1, ..., I, from classifier j, being I the number of people in the reference database, can be transformed into a normalized list by the following mapping:

S'_ij = (1/2) [ tanh( (S_ij − μ_tanh) / σ_tanh ) + 1 ]

where μ_tanh and σ_tanh are the average and standard deviation estimates of the scores {S_ij}, i = 1, ..., I, as given by the Hampel estimators. An example
of distributions of the resulting normalized scores
is reported in Figure 5 for each of the five features
used in the classification.
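A sketch of the adaptive normalization; for simplicity the location and scale are estimated with the median and MAD instead of Hampel's tanh-estimators, and the final hyperbolic-tangent mapping into (0, 1) follows the form given above.

import numpy as np

def normalize_scores(scores, higher_is_better=True):
    """Map one classifier's score list into (0, 1) using robust location/scale estimates."""
    s = np.asarray(scores, dtype=float)
    if not higher_is_better:                  # e.g. acoustic distances: reverse the sign first
        s = -s
    loc = np.median(s)
    scale = 1.4826 * np.median(np.abs(s - loc)) + 1e-8   # MAD, rescaled for consistency
    return 0.5 * (np.tanh((s - loc) / scale) + 1.0)

if __name__ == "__main__":
    distances = np.array([12.0, 15.5, 16.0, 16.2, 17.0, 18.3])   # lower = better
    print(normalize_scores(distances, higher_is_better=False))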
In the following formulas, a subscript index i m indicates
the m th entry within the set of scores sorted
Fig. 5. The density distribution of the normalized scores for
each of the classifiers: S1, S2 represent the static and
dynamic speech scores while F 1, F2 and F3 represent
the eyes, nose and mouth scores respectively.
by decreasing value. The normalized scores can be integrated using a weighted geometric average:

S_i = ( Π_j (S'_ij)^{w_j} )^{1 / Σ_j w_j}    (16)

where the weights w_j represent an estimate of the score dispersion in the right tail of the corresponding distributions:

w_j = S'_{i1 j} − S'_{i2 j}    (17)
The main reason suggesting the use of geometric average
for the integration of scores relies on probabil-
ity: if we assume that the features are independent
the probability that a feature vector corresponds
to a given person can be computed by taking the
product of the probabilities of each single feature.
The normalized scores could then be considered as
equivalent to probabilities. Another way of looking
at the geometric average is that of predicate
conjunction using a continuous logic [2], [3]. The
weights reflect the importance of the different features
(or predicates). As defined in eqn. (17), each
feature is given an importance proportional to the
separation of the two best scores. If the classification
provided by a single feature is ambiguous, it is
given low weight. A major advantage of eqn. (16)
is that it does not require a detailed knowledge of
how each feature is distributed (as would be necessary
when using a Bayes approach). This eases
the task of building a system that integrates many
features.
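A sketch of the integration rule: each classifier is weighted by the separation of its two best normalized scores and the candidates are ranked by the weighted geometric mean.

import numpy as np

def integrate(normalized, eps=1e-12):
    """normalized: (I, J) matrix of scores in (0, 1), one column per classifier."""
    top2 = -np.sort(-normalized, axis=0)[:2, :]          # two best scores of each classifier
    w = np.maximum(top2[0] - top2[1], eps)               # weight = separation of the two best scores
    log_s = (np.log(normalized + eps) * w).sum(axis=1) / w.sum()
    return np.exp(log_s)                                  # weighted geometric average per person

if __name__ == "__main__":
    rng = np.random.default_rng(4)
    scores = rng.uniform(0.1, 0.9, size=(10, 5))          # 10 database entries, 5 classifiers
    scores[3] += 0.1                                       # make entry 3 the likely identity
    s = integrate(np.clip(scores, 0.0, 1.0))
    print(s.argmax(), s)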
The main performance measure of the system is the
percentage of persons correctly recognized. Performance
can be further qualified by the average value of a separation ratio R_x, reported in Table I.

TABLE I
Feature    Recognition (%)    R
Voice      88                 1.14
Dynamic    71                 1.08
Face       91                 1.56
Eyes
Nose       77                 1.25
Mouth

The recognition performance and average separation ratio R for each single feature and for their integration. Data are based on 164 real interactions and a database of 89 users.
The ratio R_x measures the separation of the correct match S'_x from the wrong ones. This ratio
is invariant against the scale and location parameters
of the integrated score distribution and can
be used to compare different integration strategies
(weighted/unweighted geometric average, adap-
tive/fixed normalization). The weighted geometric
average of the scores adaptively normalized exhibits
the best performance and separation among
the various schemes on the available data.
Experiments have been carried out using data acquired
during 3 different test sessions. Of the 89
persons stored in the database, 87 have interacted
with the system in one or more sessions. One of the
three test sessions was used to adapt the acoustic
and visual databases (in the last case the images of
the session were simply added to those available);
therefore, session 1 was used to adapt session 2 and
session 2 to adapt session 3. As each adapted session consisted of 82 interactions, the system was tested on a total of 164 interactions. The recognition performance
and the average value of R x for the different separate
features and for their integration are reported
in
Table
I.
4.2 Rejection
An important capability of a classifier is to reject
input patterns that cannot be classified in any of
the available classes with a sufficiently high degree
of confidence. For a person verification system, the
ability to reject an impostor is critical. The following
paragraphs introduce a rejection strategy that
takes into account the level of agreement of all the
different classifiers in the identification of the best
candidate.
A simple measure of confidence is given by the
integrated score itself: the higher the value, the
higher the confidence of the identification. Another
is given by the difference of the two best scores: it
is a measure of how sound the ranking of the best
candidate is. The use of independent features (or
feature sets) also provides valuable information in
the form of the rankings of the identification labels
across the classifier ouputs: if the pattern does not
belong to any of the known classes, its rank will
vary significantly from classifier to classifier. On
the contrary, if the pattern belongs to one of the
known classes, rank agreement will be consistently
high. The average rank and the rank dispersion
across the classifiers can then be used to quantify
the agreement of the classifiers in the final identi-
fication. The confidence in the final identification
can then be quantified through several measures.
The decision about whether the confidence is sufficient
to accept the system output can be based
on one or several of them. In the proposed system,
a linear classifier, based on absolute and relative
scores, ranks and their dispersion, will be used to
accept/reject the final result. The following issues
will be discussed:
1. degree of dependence of the features used;
2. choice of the confidence measures to be used
in the accept/reject rule;
3. training and test of the linear classifier used
to implement the accept/reject rule.
As a preliminary step, the independence of the
features used in the identification process will be
evaluated. It is known that the higher the degree
of independence, the higher the information provided
to the classifier. Let us consider a couple of
features X and Y. Let {(x_i, y_i)}, i = 1, ..., I, be the corresponding normalized scores. They can be considered as random samples from a population with a bivariate distribution function. Let A_i be the rank of x_i among {x_j} when they are arranged in descending order, and B_i the rank of y_i among {y_j}, defined similarly to A_i. Spearman's rank correlation [19] is defined by:

r_s = Σ_i (A_i − Ā)(B_i − B̄) / [ Σ_i (A_i − Ā)² Σ_i (B_i − B̄)² ]^(1/2)

where Ā and B̄ are the average values of {A_i} and {B_i} respectively. An important characteristic of rank correlation is its non-parametric nature. To assess the independence of the features it is not necessary to know the bivariate distribution from which the (X_i, Y_i) are drawn, since the distribution of their ranks is known under the assumption of independence. It turns out that

t = r_s [ (I − 2) / (1 − r_s²) ]^(1/2)

is distributed approximately as a Student's distribution with I − 2 degrees of freedom [19]. It is then possible to assess the dependence of the different features used by computing the rank correlation of each couple and by testing the corresponding significance. Results for the features used in the system developed are given in Table II.

TABLE II
The rank correlation value of the couples of features. The parenthesized values represent the significance of the correlation. S1 and S2 represent the dynamic and static acoustic features; F1, F2 and F3 the eyes, nose and mouth features.
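The rank-correlation analysis can be reproduced with standard tools; the sketch below uses scipy, which returns both Spearman's correlation and its significance, on stand-in score lists.

import numpy as np
from scipy.stats import spearmanr

if __name__ == "__main__":
    rng = np.random.default_rng(5)
    static = rng.normal(size=89)                          # normalized scores of one classifier (stand-ins)
    dynamic = 0.7 * static + 0.3 * rng.normal(size=89)    # a correlated classifier
    eyes = rng.normal(size=89)                            # an independent one
    for name, other in [("static vs dynamic", dynamic), ("static vs eyes", eyes)]:
        rho, p = spearmanr(static, other)
        print(f"{name}: rho={rho:.2f}, p={p:.3f}")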
The acoustic features are clearly correlated, as
well as the nose and mouth features. The latter
correlation is due to the overlapping of the nose and
mouth regions, which was found to be necessary
in order to use facial regions characterized by the
same coordinates for the whole database. Acoustic
and visual features are independent, as could be
expected.
The feasibility of using a linear classifier was investigated
by looking at the distribution of acceptable
and non-acceptable 2 best candidates in a 3D
space whose coordinates are the integrated score,
a normalized ratio of the first to second best score
and the standard deviation of the rankings. As can
be seen in Figure 6 a linear classifier seems to be
appropriate.
The full vector d used as input to the linear
classifier is given by:
1. the integrated score, S 1 , of the best candidate;
2. the normalized ratio of the first to the second
best integrated score:
3. the minimum and maximum ranks of the first
and second final best candidates (4 entries);
4. the rank standard deviation of the first and
second final best candidates (2 entries);
5. the individual ranks of the first and second
final best candidates (10 entries).
To train the linear classifier the following procedure
was used. A set of positive examples {p_i} is derived
Non-acceptable best candidates derive from two sources:
misclassified users from real interactions and best candidates
from virtual interactions, characterized by the removal of the
user entry from the data base.
Fig. 6. Let us represent the match with the database entries
by means of the integrated score, the standard deviation
of the rankings across the different features and the normalized
ratio of the first to second best integrated score.
The resulting three dimensional points are plotted and
marked with a 2 if they represent a correct match or
with a \Theta if the match is incorrect. Visual inspection
of the resulting point distribution shows that the two
classes of points can be separated well by using a plane.
from the data relative to the persons correctly classified
by the system. A set of negative examples
is given by the data relative to the best candidate
when the system did not classify the user
correctly. The set of negative examples can be augmented
by the data of the best candidate when the
correct entry is removed from the database, thereby
simulating the interaction with a stranger. The linear
discriminant function defined by the vector w
can be found by minimizing the following error:

E(w) = α Σ_i [1 − tanh(w · p_i)] + β Σ_j [1 + tanh(w · n_j)]

where α and β represent the weights to be attributed to false negatives and to false positives respectively, and the sums run over the positive examples {p_i} and the negative examples {n_j}. When α = β, E represents the output error of a linear perceptron with a symmetric sigmoidal unit.
Final acceptance or rejection of an identification, associated to a vector d, is done according to the simple rule:

w · d ≥ 0  →  accept    (23)
w · d < 0  →  reject    (24)
Fig. 7. System performance when false positives and false
negatives are weighted differently.
Note that the LHSs of eqns. (23)(24) represent
the signed distance, in arbitrary units, of point d
from the plane defined by w that divides the space
into two semispaces. Points lying in the correct
semispace contribute to E inversely to their distance
from plane w. Points lying near the plane
contribute with α or β while points lying in the wrong semispace and at great distance from the discriminating plane contribute with 2α or 2β. If the
two classes of points are linearly separable it is possible
to drive E to zero (see [14], [24]). A stochastic
minimization algorithm [4], [10] was used to minimize E. When the system is required to work in a strict mode (no errors allowed, that is, no strangers accepted), β ≫ α should be considered in the training phase. Note that a similar discriminant function
can be computed for each of the recognition
subsystems (i.e. face recognition and voice recog-
nition), thereby enabling the system to reject an
identification when it is not sufficiently certain even
when not all of the identification cues are available.
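A sketch of the accept/reject training: the error of the linear discriminant, in the form reconstructed above, is minimized by a simple random-perturbation search standing in for the stochastic algorithm of [4], [10]; the feature layout, the example data and the α, β values are assumptions.

import numpy as np

def error(w, pos, neg, alpha=1.0, beta=5.0):
    # beta >> alpha penalizes accepted strangers more than rejected known users
    return (alpha * (1.0 - np.tanh(pos @ w)).sum()
            + beta * (1.0 + np.tanh(neg @ w)).sum())

def train(pos, neg, iters=3000, seed=0):
    rng = np.random.default_rng(seed)
    w = np.zeros(pos.shape[1])
    best = error(w, pos, neg)
    step = 1.0
    for _ in range(iters):                       # crude stochastic search (illustrative only)
        cand = w + step * rng.normal(size=w.shape)
        e = error(cand, pos, neg)
        if e < best:
            w, best = cand, e
        else:
            step *= 0.999
    return w

def accept(w, d):
    return float(w @ d) >= 0.0                   # rule (23)/(24)

if __name__ == "__main__":
    rng = np.random.default_rng(6)
    pos = rng.normal(loc=+1.0, size=(80, 6))     # vectors of correctly identified users
    neg = rng.normal(loc=-1.0, size=(60, 6))     # vectors of impostors / misrecognitions
    w = train(pos, neg)
    print(accept(w, pos[0]), accept(w, neg[0]))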
The training/test of the classifier followed a leave-
one-out strategy to maximize the number of data
available in the training phase [15]. The classifier is
trained by using all but one of the available samples
and tested on the excluded one. The performance
of the classifier can be evaluated by excluding in
turn each of the available samples and averaging
the classification error.
In the reported experiments, the available examples
were grouped per interacting user. The leave-
one-out method was then applied to the resulting
87 sets (the number of users that interacted with
the system) to guarantee the independence of the
training and test sets.
Each set was used in turn for testing, leaving the
remaining 86 for training. The results are reported
in
Table
III. A complete operating characteristic
curve for the integrated performance shown
in
Table
III is reported in Figure 7 where the
stranger-accepted and familiar-rejected rates at different
β/α ratios are plotted.
TABLE III
                         Face    Voice    Integrated
Stranger accepted (%)     4.0     14.0       0.5
Familiar rejected (%)     8.0     27.0       1.5
Familiar misrecog. (%)    0.5      1.0       0.0

Error rates of the subsystems and of the complete system when a rejection threshold is introduced. Data are based on the subset of interactions for which both face and speech data were available (155 out of 164).
Similar experiments were run on the acoustic and
visual features separately and are also reported in
Table
III. The results show that the use of the complete
set of features provides a relevant increase in
reliable performance over the separate subsystems.
4.3 Hybrid level integration
In this sub-section, a hybrid rank/measurement
level at which multiple classifiers can be combined
will be introduced. The approach is to reconstruct a
mapping from the sets of scores, and corresponding
ranks, into the set f0; 1g. The matching to each
of the database entries, as described by a vector
of five scores and the corresponding ranks should
be mapped to 1, if it corresponds to the correct
label, and to 0 otherwise. The reconstruction of
the mapping proceeds along the following steps:
1. find a set of positive and negative examples;
2. choose a parametric family of mappings;
3. choose the set of parameters for which the corresponding
mapping minimizes a suitable error
measure over the training examples.
Another way to look at the reconstruction of the
mapping is to consider the problem as a learning
task, where, given a set of acceptable and non acceptable
inputs, the system should be able to appropriately
classify unseen data.
Let {C_j} be the set of classifiers. Each of them associates to each person X some numerical data that can be considered a vector. By comparison with the i-th database entry, a normalized similarity score s_ij can be computed. Each score can be associated to its rank r_ij in the list of scores produced by classifier C_j. The output of each classifier can then be regarded as a list of couples (s_ij, r_ij), i = 1, ..., I, where I represents the number of people in the reference database. A mapping L_01 is sought such that it assigns 1 to the couples associated with the correct label and 0 to all the others. If, after mapping the list of scores, more than one
label qualifies, the system rejects the identification.
It is possible to relax the definition of L 01 by letting
the value of the mapping span the whole interval
[0; 1]. In this way the measurement level character
of the classification can be retained. The new
mapping L can be interpreted as a fuzzy predicate.
The following focuses on the fuzzy variant, from
which the original formulation can be obtained by
introducing a threshold ω:

L_01(x) = θ(L(x) − ω)    (26)

where θ(·) is the Heaviside unit-step function and x is the vector containing the feature normalized matching scores and corresponding ranks. The goal is to
approximate the characteristic function of the correct
matching vectors as a sum of Gaussian bumps.
Therefore the search for L is conducted within the
following family of functions:

L(x) = s( Σ_α c_α exp( −(x − t_α)^t Σ^{-1} (x − t_α) ) )

where s(·) is a sigmoidal mapping, Σ is a diagonal matrix with positive entries and c_α ∈ R. The approximating function can be represented as a HyperBF network [25] whose topology is reported in Figure 8. The sigmoidal mapping is required to ensure that the co-domain be restricted to the interval (0, 1). The location t_α, shape Σ and height c_α of each bump are chosen by minimizing the following error measure:

E = Σ_ij ( y_ij − L(x_ij) )²

where {(x_ij, y_ij)} is a set of examples (points at which the value of the mapping to be recovered is known). The first subscript i denotes the database
Fig. 8. The function used to approximate the mapping from the score/rank domain into the interval (0, 1) can be represented as a HyperBF network.
entry from which x ij is derived and the second subscript
j represents the example.
The required value of the mapping at x ij is 1
when i is the correct label (class) for the j-th example
and 0 otherwise. The error measure E is minimized over the parameter space {(c_α, t_α), Σ} by means of a stochastic algorithm with adaptive
memory [10]. The number of free parameters involved
in the minimization process dictates the use
of a large set of examples. As a limited number
of real interactions was available, a leave-one-
out strategy was used for training and testing the
system as for the linear classifier previously de-
scribed. From each of the available user-system
interactions, a virtual interaction was derived by
removing from the database the entry of the interacting
user, thereby simulating an interaction with
a stranger. For each interaction j:
1. the vector corresponding to the correct
database entry provides a positive example;
2. the vectors of the first ten, non correct, entries
of the real interaction (as derived from sorting
the integrated scores of Section 4.1) and the
vectors of the first ten entries of the virtual
interaction provide the negative examples.
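A sketch of the mapping L and of the error measure as reconstructed above; the 21 Gaussian units match the network size reported below, while the initialization, the common diagonal Σ and the data are assumptions (no optimizer is included).

import numpy as np

def hyperbf(x, centers, widths, heights):
    """L(x) = sigmoid( sum_a c_a * exp(-(x - t_a)^T Sigma^-1 (x - t_a)) ), Sigma diagonal."""
    d = ((x[None, :] - centers) ** 2 / widths).sum(axis=1)    # widths play the role of Sigma's diagonal
    return 1.0 / (1.0 + np.exp(-(heights * np.exp(-d)).sum()))

def total_error(examples, targets, centers, widths, heights):
    return sum((y - hyperbf(x, centers, widths, heights)) ** 2
               for x, y in zip(examples, targets))

if __name__ == "__main__":
    rng = np.random.default_rng(7)
    dim, units = 10, 21                                    # 5 scores + 5 ranks (assumed layout); 21 bumps
    examples = [rng.uniform(size=dim) for _ in range(30)]  # stand-ins for (score, rank) vectors
    targets = rng.integers(0, 2, size=30)                  # 1 for the correct entry, 0 otherwise
    centers = rng.uniform(size=(units, dim))
    widths = np.full(dim, 0.5)
    heights = rng.normal(size=units)
    print(total_error(examples, targets, centers, widths, heights))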
The reason for using only the first ten non correct
entries is that the matching scores decay quickly
with rank position in the final score list and additional
examples would not provide more informa-
tion. Data from different interactions of the same
user were then grouped. The resulting set of examples
was used to generate an equal number of
different training/testing set pairs. Each set was
used in turn for testing, leaving the remaining ones
for training. The problem of matching the number
of free parameters in the approximation function to
the complexity of the problem was solved by testing
the performance of networks with increasing size.
For each network size, a value for the threshold ω of eqn. (26) was computed to minimize the total error
defined as the sum of the percentage of accepted
Fig. 9. The total error achieved by networks with different
number of units. The total error is computed by
summing the percentage of accepted strangers, misrecognized
and rejected database people. For each net size
a threshold was chosen to minimize the cumulative error.
TABLE IV
Stranger accepted (%)
Familiar rejected (%)    3.0    3.0    3.5
Familiar misrecog. (%)

The performance of the system when using a HyperBF network with 21 units to perform score integration.
strangers, misrecognized and rejected database per-
sons. In Figure 9, the total error is reported as a
function of the network size. Note that the threshold
is computed on the test set, so that it gives an
optimistic estimate.
To obtain a correct estimate of system perfor-
mance, a cross-validation approach was used for the
net giving the best (optimistic) total error estimate.
Let [ω1, ω2] be the interval over which the total error assumes its minimum value (see Figure 10). The threshold value can then be chosen as either ω1 or ω2, one choice favouring acceptance over rejection and the other favouring rejection over acceptance.
The resulting performance is reported in Table IV.
Note that using ω1 the system was able to reject
all of the strangers, which is the ultimate requirement
for a reliable system, missing only 3.5% of the
known users.
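A small sketch of this threshold selection is given below (Python with NumPy); the error definition and the interval extraction follow the description above, while the input format is an assumption.

    import numpy as np

    def total_error(th, stranger_scores, familiar_scores, familiar_correct):
        # percentage of accepted strangers plus rejected and misrecognized familiar users
        strangers_accepted = np.mean(stranger_scores >= th)
        rejected = familiar_scores < th
        misrecognized = (~rejected) & (~familiar_correct)
        return 100.0 * (strangers_accepted + rejected.mean() + misrecognized.mean())

    def minimizing_interval(thresholds, errors):
        # interval [w1, w2] over which the total error attains its minimum
        errors = np.asarray(errors)
        idx = np.flatnonzero(errors == errors.min())
        return thresholds[idx[0]], thresholds[idx[-1]]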
5. Conclusions
A system that combines acoustic and visual
cues in order to identify a person has been de-
scribed. The speaker recognition sub-system is
based on Vector Quantization of the acoustic parameter
space and includes an adaptation phase of
the codebooks to the test environment. A different
method to perform speaker recognition, which
makes use of the Hidden Markov Model technique
and pitch information, is under investigation.
Fig. 10. Error percentages as a function of the rejection
threshold for a Gaussian based expansion.
A face recognition sub-system also was described.
It is based on the comparison of facial features at
the pixel level using a similarity measure based on
the L 1 norm.
The two sub-systems provide a multiple classifier
system. In the implementation described, 5 classifiers
(2 acoustic and 3 visual) were considered. The
multiple classifier operates in two steps. In the first
one, the input scores are normalized using robust
estimators of location and scale. In the second step,
the scores are combined using a weighted geometric
average. The weights are adaptive and depend on
the score distributions. While normalization is fundamental
to compensate for input variations (e.g.
variations of illumination, background noise con-
ditions, utterance length and of speaker voices),
weighting emphasizes the classification power of the
most reliable classifiers. The use of multiple cues,
acoustic and visual, proved to be effective in improving
performance. The correct identification
rate of the integrated system is 98% which represents
a significant improvement with respect to the
88% and 91% rates provided by the speaker and
face recognition systems respectively. Future use
of the Hidden Markov Model technique is expected
to improve performance of the VQ based speaker
recognizer.
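The two-step combination described above can be sketched as follows (Python with NumPy); the median/MAD normalization and the fixed weights are stand-ins for the robust estimators and the adaptive weighting actually used.

    import numpy as np

    def robust_normalize(scores):
        # robust location/scale normalization of one classifier's score list
        med = np.median(scores)
        mad = np.median(np.abs(scores - med)) + 1e-9
        return 1.0 / (1.0 + np.exp(-(scores - med) / mad))   # squash into (0, 1)

    def combine(score_lists, weights):
        # weighted geometric average of the normalized scores of the classifiers
        norm = [robust_normalize(np.asarray(s, dtype=float)) for s in score_lists]
        w = np.asarray(weights, dtype=float) / np.sum(weights)
        return np.exp(sum(wi * np.log(ni + 1e-12) for wi, ni in zip(w, norm)))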
An important capability of the multiple classifier
itself is the rejection of the input data when they
can not be matched with sufficient confidence to
any of the database entries.
An accept/reject rule is introduced by means of a
linear classifier based on measurement and rank information
derived from the five recognition systems.
A novel, alternative, approach to the integration of
multiple classifiers at the hybrid rank/measurement
level is also presented. The problem of combining
the outputs of a set of classifiers is considered as
a learning task. A mapping from the scores of the
classifiers and their ranks into the interval (0; 1) is
approximated using an HyperBF network. A final
rejection/acceptance threshold is then introduced
using the cross-validation technique. System performance
is evaluated on data acquired during real
interactions of the users in the reference database.
Performance of the two techniques is similar.
The current implementation of the system is
working on an HP 735 workstation with a Matrox
Magic frame grabber. In order to optimize system
throughput, it relies on a hierarchical match with
the face database.
The incoming picture, represented by a set of features
is compared at low resolution with the complete
database. For each person in the database,
the most similar feature, among the set of available
images, is chosen and the location of the best
matching position stored. The search is then continued
at the upper resolution level by limiting the
search to the most promising candidates at the previous
level.
These candidates are selected by integrating their
face scores according to the procedure described in
section 4.1. All available data must be used to secure
a reliable normalization of the scores. How-
ever, new scores at higher resolution are computed
only for a selected subset of persons and this constitutes
a problem for the integration procedure. In
fact, scores from image comparisons at different levels
would be mixed, similarity values deriving from
lower resolutions being usually higher. To overcome
this difficulty, the scores from the previous level are
reduced (scaled) by the highest reduction factor obtained by
comparing the newly computed scores to the
corresponding previous ones.
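A sketch of this rescaling (Python), under the assumption that scores are kept in dictionaries keyed by database entry; taking the scaling factor as the smallest new/previous ratio among the recomputed entries is one reading of the rule described above.

    def rescale_previous(prev_scores, new_scores):
        # scale scores carried over from the lower resolution so that they are
        # commensurate with the newly computed, higher-resolution scores
        ratios = [new_scores[k] / prev_scores[k] for k in new_scores if prev_scores.get(k, 0) > 0]
        factor = min(ratios) if ratios else 1.0
        return {k: new_scores.get(k, prev_scores[k] * factor) for k in prev_scores}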
The performance, measured on the datasets used
for the reported experiments, does not decrease and
the overall identification time (face and voice pro-
cessing) is approximately 5 seconds.
The same approach, using codebooks of reduced
size could be applied to the speaker identification
system, thereby increasing system throughput.
Adding a subject to the database is a simple task
for both subsystems. This is due to the modularity
of the databases, each subject being described
independently of the others. The integration strategy
itself does not require any update. The rejection
and the combined identification/rejection procedures
do require updating. However, the training
of the linear perceptron and of the HyperBF
network can be configured more as a refinement of
a suboptimal solution (available from the previous
database) than as the computation of a completely
unknown set of optimal parameters. While the sys-
tem, as presented, is mainly an identification sys-
tem, a small modification transforms it into a verification
system. For each person in the database
it is possible to select a subset containing the most
similar people (as determined by the identification
system). When the user must be verified the identification
system can be used using the appropriate
subset, thereby limiting the computational effort,
and verifying the identity of the user using the techniques
reported in the paper.
Future work will have the purpose of further improving
the global efficiency of the system with the
investigation of more accurate and reliable rejection
methods.
Acknowledgement
The authors would like to thank Dr. L. Stringa,
Prof. T. Poggio and Prof. R. de Mori for valuable
suggestions and discussions. The authors are grateful
to the referees for many valuable comments.
--R
Computer Vision.
Selecting uncertainty calculi and granularity: an experiment in trading-off precision and complexity
Rum: A layered architecture for reasoning with uncertainty.
On Training Neural Nets through Stochastic Minimization.
Estimation of Pose and Illuminant Direction for Face Processing.
A recognition system
Robust Estimation of Correlation: an Application to Computer Vision.
Face Recognition: Features versus Templates.
Automatic Person Recognition by Using Acoustic and Geometric Features.
Stochastic minimization with adaptive memory.
Smart sensing within a pyramid vision ma- chine
A start-end point detection algorithm for a real-time acoustic front-end based on dsp32c vme board
Speaker Recognition
Pattern Recognition and Scene Analysis.
Introduction to Statistical Pattern Recog- nition
Cepstrum Analysis Technique for Automatic Speaker Verification.
Recognizing Human Eyes.
Robust statistics: the approach based on influence functions.
Introduction to mathematical statistics.
Robust Statistics.
Similarity normalization method for speaker verification based on a posteriori probability.
Speech Communication.
Adaptive Pattern Recognition and Neural Networks.
Regularization algorithms for learning that are equivalent to multilayer networks.
A Project for an Intelligent System: Vision and Learning.
The use of cohort normalized scores for speaker verification.
Evaluation of a Vector Quantization Talker Recognition System in Text Independent and Text Dependent Modes.
On the Use of Istantaneous and Transitional Spectral Information in Speaker Recognition.
Automatic Face Recognition using Directional Derivatives.
Eyes Detection for Face Recognition.
Methods of Combining Multiple Classifiers and Their Applications to Handwriting Recognition.
--TR
--CTR
Jian-Gang Wang , Hui Kong , Eric Sung , Wei-Yun Yau , Eam Khwang Teoh, Fusion of appearance image and passive stereo depth map for face recognition based on the bilateral 2DLDA, Journal on Image and Video Processing, v.2007 n.2, p.6-6, August 2007
Anil K. Jain , Arun Ross, Multibiometric systems, Communications of the ACM, v.47 n.1, January 2004
Arun Ross , Anil Jain, Information fusion in biometrics, Pattern Recognition Letters, v.24 n.13, p.2115-2125, September
Jian-Gang Wang , Eng Thiam Lim , Xiang Chen , Ronda Venkateswarlu, Real-time Stereo Face Recognition by Fusing Appearance and Depth Fisherfaces, Journal of VLSI Signal Processing Systems, v.49 n.3, p.409-423, December 2007
Niall A. Fox , Ralph Gross , Philip de Chazal , Jeffery F. Cohn , Richard B. Reilly, Person identification using automatic integration of speech, lip, and face experts, Proceedings of the ACM SIGMM workshop on Biometrics methods and applications, November 08, 2003, Berkley, California
Julian Fierrez-Aguilar , Daniel Garcia-Romero , Javier Ortega-Garcia , Joaquin Gonzalez-Rodriguez, Adapted user-dependent multimodal biometric authentication exploiting general information, Pattern Recognition Letters, v.26 n.16, p.2628-2639, December 2005
R. Brunelli, verification through finger matching: A comparison of Support Vector Machines and Gaussian Basis Functions classifiers, Pattern Recognition Letters, v.27 n.16, p.1905-1915, December, 2006
Maycel-Isaac Faraj , Josef Bigun, Audio-visual person authentication using lip-motion from orientation maps, Pattern Recognition Letters, v.28 n.11, p.1368-1382, August, 2007
H. Vajaria , T. Islam , P. Mohanty , S. Sarkar , R. Sankar , R. Kasturi, Evaluation and analysis of a face and voice outdoor multi-biometric system, Pattern Recognition Letters, v.28 n.12, p.1572-1580, September, 2007
Robert Snelick , Umut Uludag , Alan Mink , Michael Indovina , Anil Jain, Large-Scale Evaluation of Multimodal Biometric Authentication Using State-of-the-Art Systems, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.3, p.450-455, March 2005
R. Brunelli, verification through finger matching: a comparison of support vector machines and Gaussian basis functions classifiers, Pattern Recognition Letters, v.27 n.16, p.1905-1915, December 2006
Doroteo T. Toledano , Rubén Fernández Pozo , Álvaro Hernández Trapote , Luis Hernández Gómez, Usability evaluation of multi-modal biometric verification systems, Interacting with Computers, v.18 n.5, p.1101-1122, September, 2006
Hakan Altınçay , Mübeccel Demirekler, Undesirable effects of output normalization in multiple classifier systems, Pattern Recognition Letters, v.24 n.9-10, p.1163-1170, 01 June
Aleix M. Martínez, Recognizing Imprecisely Localized, Partially Occluded, and Expression Variant Faces from a Single Sample per Class, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.24 n.6, p.748-763, June 2002
Rodrigo de Luis-García , Carlos Alberola-López , Otman Aghzout , Juan Ruiz-Alzola, Biometric identification systems, Signal Processing, v.83 n.12, p.2539-2557, December
Seong G. Kong , Jingu Heo , Faysal Boughorbel , Yue Zheng , Besma R. Abidi , Andreas Koschan , Mingzhong Yi , Mongi A. Abidi, Multiscale Fusion of Visible and Thermal IR Images for Illumination-Invariant Face Recognition, International Journal of Computer Vision, v.71 n.2, p.215-233, February 2007
K. Srinivasa Rao , A. N. Rajagopalan, A probabilistic fusion methodology for face recognition, EURASIP Journal on Applied Signal Processing, v.2005 n.1, p.2772-2787, 1 January 2005
Harini Veeraraghavan , Paul Schrater , Nikos Papanikolopoulos, Robust target detection and tracking through integration of motion, color, and geometry, Computer Vision and Image Understanding, v.103 n.2, p.121-138, August 2006
Sharon Oviatt, Advances in Robust Multimodal Interface Design, IEEE Computer Graphics and Applications, v.23 n.5, p.62-68, September
H. E. Çetingül , E. Erzin , Y. Yemez , A. M. Tekalp, Multimodal speaker/speech recognition using lip motion, lip texture and audio, Signal Processing, v.86 n.12, p.3549-3558, December 2006
Slobodan Ribaric , Ivan Fratric, A Biometric Identification System Based on Eigenpalm and Eigenfinger Features, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.11, p.1698-1709, November 2005 | classification;learning;face recognition;correlation;template matching;robust statistics;speaker recognition |
628733 | Logical/Linear Operators for Image Curves. | AbstractWe propose a language for designing image measurement operators suitable for early vision. We refer to them as logical/linear (L/L) operators, since they unify aspects of linear operator theory and Boolean logic. A family of these operators appropriate for measuring the low-order differential structure of image curves is developed. These L/L operators are derived by decomposing a linear model into logical components to ensure that certain structural preconditions for the existence of an image curve are upheld. Tangential conditions guarantee continuity, while normal conditions select and categorize contrast profiles. The resulting operators allow for coarse measurement of curvilinear differential structure (orientation and curvature) while successfully segregating edge-and line-like features. By thus reducing the incidence of false-positive responses, these operators are a substantial improvement over (thresholded) linear operators which attempt to resolve the same class of features. | Introduction
There is no shortage of so-called "edge-detectors" and "line-detectors" in computer
vision. These are operators intended to respond to lines and edges in images. Many
different designs have been proposed, based on a range of optimality criteria (e.g
[1, 2]), and many of these designs exhibit properties in common with biological vision
systems [3]. While this agreement between mathematics and physiology is encourag-
ing, there is still dissatisfaction with these operators-despite their 'optimal' design,
they do not work sufficiently well to support subsequent analysis. Part of the problem
is undoubtedly the myopic perspective to which such operators are restricted,
suggesting the need for more global interactions [4]. But we believe that more can be
done locally, and that another significant part of the problem stems from the types
of models on which the operators are based and the related mathematical tools that
have been invoked to derive them. In this paper we introduce an approach to operator
design that differs significantly from the standard practice, and illustrate how it
can be used to design non-linear operators for locating lines and edges.
The usual model used in the design of edge operators involves two components:
an ideal step edge plus additive Gaussian noise. This model was proposed in one
of the first edge detector designs [5], and has continued through the most recent
[2, 6]. Thus it is no surprise that the solution resembles the product of two operators,
one to smooth the noise (e.g. a Gaussian) and the other to locate the edge (e.g. a
derivative).
While some of the limitations of the ideal step edge model have been addressed
elsewhere (e.g. [7, 8]), a perhaps more important limitation of the operator design
has not been considered. It is assumed that in viewing a small local region of the
image, only a single section of one edge is being examined. This may be a valid
simplification in some continuous limit, but it is definitely not valid in digital images.
Many of the systematic problems with edge and line detectors occur when structure
changes within the local support of the operator (e.g. several edges or lines coincide).
Since these singularities are not dealt with by the noise component of the model
either, the linear operator behaves poorly in their vicinity.
In particular, curve-detecting operators are usually designed to respond if a certain
intensity configuration exists locally (see Figure 1a). A signal estimation component
of the operator is then incorporated in the design to filter local noise (Figure 1b).
However, significant contrast changes are rarely noise-they are more likely to be the
result of a set of distinct objects whose images project to coincident image positions
1c). An operator which claims to 'detect' or `select' a certain class of image
features must continue to do so in the presence of such confounding information.
We propose that image operators should be designed to respond positively to
the expected image structure, and to not respond at all when such structure is not
Figure
1: A set of potential image curve configurations which must be
considered in the design of operators. An ideal image (a) of a black curve
on a white background; a noisy image (b) of a lower-contrast version of the
same curve; an obscured version (c) of the ideal image. The oval in each
image represents the spatial support of a local operator. A negative contrast
line operator should respond positively in all three cases.
present. Simple linear operators achieve the first of these goals, but in order to fulfill
the second we must incorporate a more direct verification of the existence conditions
for a given feature into the operator itself. We accomplish this by decomposing
the linear operator into components which correspond to assertions of the logical
preconditions for a given feature. When the expected image structure is present, a
boolean combination of these responses produces a linear response, whereas if any
of the conditions are violated the response will be suppressed non-linearly. Because
these operators unite elements from boolean logic and linear operator theory, we refer
to them as Logical/Linear (L/L) operators.
A. Image Curves
For consistency we shall adopt the following terminology. Edges are the curves which
separate lighter and darker areas of an image-the perceived discontinuities in the
intensity surface; lines are those curves which might have been drawn by a pen or
pencil (sometimes referred to as bars in other work [9]). Image curves are either
of these. Two independent properties describe such image curves: their structure
along the tangential and in the normal directions. Tangentially, both lines and edges
are projections of space curves; it is the cross-sectional structure in the image which
differentiates them.
Formally, let I be an analytic intensity surface (an image) and α a curve in the image parameterized by arc length (see Figure 2). The orientation θ(s) is the direction of the tangent τ(s), a unit vector in the direction of α′(s), and the normal vector n(s) is a unit vector in the direction of α″(s).
Figure 2: An image curve α with the tangent τ(s) and normal n(s).
Formally, an image curve is defined by a set of local structural conditions on the image in the directions tangential and normal to the curve. The normal cross-section β_s at the point α(s) is given by β_s(t) = I(α(s) + t n(s)).
Definition 1 An image curve is a map α: S → I such that
(Tangent) α is C^1 continuous on S, and (1a)
(Normal) a condition N(β_s) holds for all s ∈ S. (1b)
N(β_s), the normal condition, determines the classification of the curve.
For the purposes of this paper, we concentrate on three kinds of image curve:
1. ff is an Edge 1 in I iff ff is an image curve with normal condition
2. ff is a Positive Contrast Line in I iff ff is an image curve with normal
1 Note that the definition of a line is independent of curve orientation, while a rising edge will only be seen looking along α in one direction. Thus lines need only be parameterized over π orientations while edges require 2π orientations.
Figure
3: A set of image curve configurations which may generate false
positive operator responses. An image of an contrast edge (a) should not
stimulate a line operator; a improperly oriented operator (b) should not be
stimulated; an operator which does not lie on the curve (c) should not be
stimulated. The oval in each image represents the spatial support of the local
operator. A negative contrast line operator should not respond positively in
any of these cases.
condition
3. ff is a Negative Contrast Line in I iff ff is an image curve with normal
condition
Thus, edges are order 0 discontinuities (steps) in cross-section, while positive and
negative contrast lines are order 1 discontinuities (creases) which are also maxima or
minima, respectively.
Note that in contrast to traditional definitions, the tangent and normal conditions
above are both point conditions, which must hold at every point in the trace of
the curve. We thus have a basis for designing purely local operators to locate and
categorize such curves in images.
Linear operators do respond when these conditions are met. However, they also
respond in situations in which the conditions are not met. These responses are referred
to as false-positives. The current analysis will focus on a mechanism for avoiding three
kinds of false-positives typical of linear operators:
1. Merging or interference between nearby curves: The local continuity of image
curves is important for resolving and separating nearby features. Linear operators
interfere with testing continuity by filling in gaps between nearby features
and responding significantly to curves which are far from their preferred orientations
(Fig. 3b).
2. Smoothing out discontinuities or failing to localize line-endings: The locations
of the discontinuities and end-points of a curve are fundamental to higher-level
descriptions [10, 11, 12]. Linear operators systematically interfere with their
localization by responding whenever the receptive field of the operator at all
overlaps the curve (Fig. 3c).
3. Confusion between lines and edges: Lines and edges are differentiated by their
cross-sections. For accurate identification the logical conditions on the cross-section
must be satisfied, and in each case we will show that a linear operator
tests them incompletely (Fig. 3a).
II. A Logical/Linear Framework for Image
Operators
The three qualitatively different kinds of image curves defined in xI.A. imply three
distinct sets of preconditions for the existence of an image curve. As noted previ-
ously, the curve description process must respect these distinctions. Focusing for the
moment on the line condition of (2b), we begin by adopting an oriented, linear line
operator similar to the one described in [2].
Canny adopted the assumption of linearity to facilitate noise sensitivity analy-
sis, and relied on post-processing to guarantee locality and selectivity of response.
He arrived at a line operator whose cross-section is similar to a Gaussian second-
derivative, and an edge operator similar to a Gaussian first derivative. Neurophysiologists
[13, 14, 3] and psychophysicists [15] have adopted such linear models to capture
many of the functional properties of the early visual system, and the mathematics for
analyzing them is widely known (e.g. Fourier analysis). These models are also attractive
from a computational point of view because they exhibit most of the properties
required of a measurement operator for image curves. However, they also exhibit the
false-positive responses described above (partially shown in Figures 7 and 8).
To limit these false-positives, we will relax the assumption of linearity and test
the necessary structural conditions explicitly. This is accomplished by developing an
algebra of Logical/Linear (L/L) Operators which allow these conditions to be
tested as the operator's response is being constructed. The resulting responses will
appear to be linear as long as all of these conditions are fulfilled.
A. Logical/Linear Combinators
As stated above, we wish to retain as much as possible of the desirable properties of
the linear operator approach while allowing for the kind of structural analysis which
can be used to categorize curves and verify continuity. We pursue these apparently
contradictory goals by starting with a nearly optimal linear operator, and then decomposing
it in a way that allows for it's reconstruction, provided that the structural
design conditions are verified.
In particular, we
1. Begin with a linear operator and decompose it into a set of linear component
operators whose sum is identical to the initial operator.
2. These linear components represent measurement operators for the logical pre-conditions
of the designed feature's existence.
3. The overall operator response is to be positive only if these structural preconditions
are satisfied.
4. For the range of inputs generating positive responses, the operator should act
identically to the original linear operator.
The combination of operator responses to fulfill the third and fourth conditions
above can be derived from a mapping of the real line to logical values. Assume
that positive operator responses represent confirmation of a logical condition (logical
True) and that negative responses represent rejection (logical False). To derive the
numeric)logical mapping, we adopt the principle that all confirming evidence should
be combined if the logical condition holds, and contradictory evidence combined if the
condition fails. This leads to the following set of logical/linear combinators:
Definition 2 The Logical/Linear Combinators
- are given by
Before we descend into technical detail, it should be noted that these operators
can be thought of as accumulating evidence for or against a particular hypothesis,
with positive values being evidence 'for' and negative values evidence `against'. Thus
if an hypothesis h requires that both of two prior hypotheses (x and y) be true
then consistent positive evidence from these inputs, represented by positive values,
is required to produce a positive output
x ∧ y. Should this combined hypothesis
instead be rejected, all evidence for this rejection is combined. In all cases, the logical
truth or falsehood of an hypothesis is contained in the sign of the value, while the
strength of the evidence for or against the hypothesis is represented by the magnitude.
It should be apparent that reasoning about the signs of derivatives (see xI.A.) will be
natural with this formalism.
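The evidence-accumulation reading above suggests a concrete realization, sketched below in Python/NumPy. It follows the stated principle (sum all confirming evidence when the condition holds, otherwise sum the contradicting evidence); the paper's exact piecewise form in Definition 2 may differ in detail.

    import numpy as np

    def ll_and(x, y):
        # both hypotheses confirmed: pool the supporting evidence;
        # otherwise: pool only the (non-positive) evidence against
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        return np.where((x > 0) & (y > 0), x + y,
                        np.minimum(x, 0) + np.minimum(y, 0))

    def ll_or(x, y):
        # at least one hypothesis confirmed: pool the supporting evidence;
        # otherwise: pool all the evidence against
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        return np.where((x > 0) | (y > 0),
                        np.maximum(x, 0) + np.maximum(y, 0), x + y)

With this form, -ll_and(x, y) equals ll_or(-x, -y), in line with the Boolean-algebra behaviour claimed in the next subsection.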
B. A Logical/Linear Algebra
We now proceed to develop general properties of these L/L combinators and define a
class of operators which embody these properties. With this background established,
we can then move on to the development of the specialized operators we will use for
early vision.
Using the combinators
-, we define a generative syntax for L/L expressions.
Logical/Linear Operator on the vector space X
any function L in the language L defined by the following grammar L:
where each a i is a real constant and each / i (x) is a bounded, real-valued, linear
function.
Example 1: The expression
defines a L/L operator which is positive only if both x and y are positive,
in which case it evaluates as F(x, y) = x + y. An equivalent description of F as an
operator is given by
is the projection operator which selects the i th dimension of X.
There are two fundamental properties which justify the use of the term "Logi-
cal/Linear expressions" to describe these forms: they comprise a Boolean Algebra,
and they are linear on certain subspaces of their entire domain.
To show the first of these, consider the universe of vectors U in IR n excluding the
and the subspaces f L(x)
2 For real-valued variables, the exclusion of the axes needed to demonstrate logical equivalence is
not problematic because it is a subset of measure 0.
Theorem 1 (Logical) For the language of L/L operators L 2 L, the set of all
sets f L(x) g + and their complements f L(x)
Algebra with
- and complement \Gamma.
Proof The following equivalences can be derived directly from Definition 2, for all
It is easy to verify that these sets form a field with the help of the equivalences above
(e.g. the equivalence of
ensures that if X and Y are members then X
-Y is
also). Furthermore, these meet, join and complement operators are clearly isomorphic
to the standard set-theoretic ", [ and complement. The further observation that ;
and U are the bounds of this field ensure that this system is a Boolean algebra. ([16],
p.
The following equivalences can also be derived directly:
f a L(x)
if a - 0
These demonstrate that the constant weights a i in the language L act as either
identity or complement and thus do not disturb the Boolean algebra.
Corollary 2 Each L/L operator has an associated Boolean function created by substituting
- and - for
- respectively, and by replacing each a i constant with
either the identity function if positive or : (negation) if negative. The truth value of
each expression / i (x) is True if /
Thus, continuing Ex. 1, the equivalent logical function -
F for F is
The second fundamental property of these operators, their conditional linearity,
is revealed by considering the minimal polynomials
in the binary representation of j is zero, q i
if bit i is one. Then,
Theorem 3 (Linear) Any L/L operator L is linear on the subspace f P j
of
any minimal polynomial P j (x).
Proof Any Boolean polynomial can be equivalently expressed as the join of minimal
polynomials or the lower bound ; ([17], p. 370). Thus f L(x)
can be expressed
as the
of a group of such minimal polynomials of the / i (x)'s (the disjunctive
canonical form (DCF) of L(x)). Without loss of generality, consider a particular such
polynomial Noting that every element / i (x) for x
has a fixed
value and thus fixed sign, Definition 2 guarantees that
- is linear on the subspace
defined
(for fixed sign arguments, the branch chosen in the
- is fixed).
Thus, any minimal polynomial P j (x) is linear on f P j
Consider now the DCF of L(x). We know that each P j (x) in this DCF is both
linear and of constant sign on f P j
. By the same reasoning as for
above, we
can state that +
- is linear if its arguments have constant sign, and thus the DCF of
L(x) is a linear combination of expressions which are guaranteed linear on f P j
Therefore, L(x) is also linear on every f P j
C. Logical/Linear Image Operators
By extension from the arithmetic operators, the L/L operators are applied pointwise
to sequences of vectors or images. Thus, reconsidering Ex. 1 above, the operator F
becomes
We are now ready to develop the class of L/L operators that we shall require to reason
about images, and begin with an example.
Example 2: Suppose that the linear operators / 1 and / 2 provide a pointwise
measure of two image properties (e.g.
x and
y , the second directional
derivatives) which are components of a more complex image property (e.g. locating
local convexity, the points where D 2
0). If this aggregate
property can be expressed as a logical combination of the signs of the linear properties,
then we can build a L/L operator \Psi on the image such that
positive, if x is a locally convex point;
negative, otherwise.
In this case, we would define
y I)(x):
This example reveals a class of L/L operators appropriate for reasoning about
images.
Definition 4 A Logical/Linear Convolution Operator \Psi is a L/L operator
on an image I such that all / i (I) are linear convolutions of the form
Z
The operation of such an operator on an image is termed the Logical/Linear
Convolution of I by \Psi, and is written
Note that the linear convolution / I is a special case.
Returning to Ex. 2, we can assert that
y I)
thus justifying the notation we will use for describing L/L convolution operators:
y
This operator will produce an image whose elements are positive only for convex
points of the input image.
An important relationship we will use to design image operators is that between
a L/L operator and its linear reduction.
Definition 5 The Linear Reduction / of a L/L operator \Psi is that linear operator
which is produced by substituting + for each L/L combinator in the L/L operator
description.
Corollary 4 Given the linear reduction /(x) of a L/L operator \Psi(x), a L/L convolution
of \Psi I is exactly equal to the linear convolution of / I if the logical expression
corresponding to the L/L expression is True.
Thus in fulfilling our goal of developing L/L image operators which retain some of
the optimal behaviour of a particular linear operator, we will seek to design L/L
operators which reduce to 'optimal' linear operators.
Before we move on to actual design, it will be important to examine a second,
equivalent definition of the L/L combinators which has useful computational consequences
Definition 6 The ae-approximate L/L combinators are given by:
where
0; otherwise.
The oe ae (x) function is used as a partition function since it is everywhere infinitely
differentiable but is only non-zero for values ? \Gamma1=ae: The 'logistic' sigmoid function
of [18] is another option for oe ae (x), but the fact that the function chosen is non-singular
(i.e. means that the "hard" logic of
xII.A. still applies for values outside of this region. All that is required for predictable
behaviour is that oe ae (x) has range [0; 1], is monotonic, and that lim ae/1 oe ae
the step function below. In fact, a linear ramp in the interpolation region if eq.
seems to be adequate for practical purposes.
The notion of ae-approximate is clarified by the following.
Theorem 5
lim
lim
Proof Note that
lim
0; otherwise.
This function is a choice operator pivoting around zero, and as such it can be used
to directly define the L/L combinators above. If this limit is substituted in eqs. (8),
then they can be rephrased as
fy
Figure
4: Graphs of the ae-approximate L/L combinators varying ae: (a),
(b), and (c) show x
y, (d), (e), and (f) show x
y. Note that as ae varies
between 0 and 1, the combinators vary from purely linear to purely logical,
with smooth interpolation in between.
It can be verified that these are equivalent to Def. 2.
The approximations represented by
expose another relationship between
the linear sum and the L/L combinators. Since oe 0 substitution of
this value into eq. (8) simplifies both L/L combinators to a linear combination
Thus, the ae-approximates
ae form a continuous deformation from a linear
combination to the absolute L/L operations as ae goes from 0 to 1 (see Fig. 4).
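The deformation property can be sketched directly (Python/NumPy): a convex blend between the plain linear average and an ideal logical/linear AND has the two limiting behaviours described here. This blend is only a stand-in for the σ_ρ-based combinators of eq. (8).

    import numpy as np

    def ll_and_ideal(x, y):
        return np.where((x > 0) & (y > 0), x + y,
                        np.minimum(x, 0) + np.minimum(y, 0))

    def ll_and_rho(x, y, rho):
        # rho = 0: purely linear combination; rho = 1: purely logical/linear
        x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
        return (1.0 - rho) * 0.5 * (x + y) + rho * ll_and_ideal(x, y)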
III. Designing L/L Operators for Image Curves
Using the framework defined above, we now proceed to derive a family of L/L image
operators to locate and describe image curves as defined in Definition 1. We begin
by observing that the conditions expressed in eqs. (2b,2c,2a) segregate into independent
one-dimensional conditions in orthogonal directions-along the tangent and
the normal. The normal condition selects the proper contrast cross-section to define
a (positive or negative contrast) line or edge, and the tangential condition ensures
local C 1 continuity of the inferred curve. Thus, our solution is a separable family
of two-dimensional operators expressed as the Cartesian product of orthogonal, one-dimensional
L/L operators, one normal N(y) and the other tangential T(x) to some
preferred direction. With (x; y) defining a local, orthonormal coordinate system, we
seek
Moreover, the tangential condition (C 1 continuity), and thus the tangential operator
T, is identical for all three image curve types.
Thus, we divide the design into three stages:
ffl First, derive a set of one-dimensional L/L operators N fP;N;Eg which verify the
cross-sectional (normal) conditions of Definition 1, while avoiding the pitfalls
discussed in xI.
derive a one-dimensional L/L operator T which is capable of discriminating
between locally continuous and discontinuous curves along their tangent
direction.
ffl Finally, form a family of direction-specific two-dimensional L/L image curve
operators by forming the Cartesian product of the two one-dimensional operators
A. The Normal Operators: Categorization
For the purpose of illustration, we will begin with the analysis of a positive contrast
line (2b). The methodology developed will apply naturally to the two other image
curve types.
Since a necessary condition for the existence of such a line is a local extremum in
intensity (fig. 5 is a display of typical 1D cross-sections of lines and edges), we will
first consider the operator structure normal to its preferred orientation. This is just
the problem of locating extrema in the cross-section fi s .
Figure
5: Cross sections of image lines and edges. A line in an intensity
image (a) is located at the peak of its cross-section. Note that this coincides
with a zero in the derivative fi 0 and a negative second derivative fi 00 . An
intensity edge (b) occurs at peaks in the derivative fi 0 of the cross-section.
The derivatives shown are derived from convolution by G 0 and G 00 operators
with
A local extremum in a one-dimensional differentiable signal β(x) exists only at those points where β′(x) = 0. Estimating the location of such zeroes in the presence of noise is normally achieved by locating zero-crossings, thus in practice these conditions become β′(x − ε) > 0 and β′(x + ε) < 0 for some ε > 0. An operator which can reliably restrict its responses to only those points where these conditions hold will only respond to local maxima in a one-dimensional signal.
A set of noise-insensitive linear derivative operators (or 'fuzzy derivatives' [19]) are the various derivatives of the Gaussian,
G_σ(x) = (1 / (√(2π) σ)) exp(−x² / (2σ²)),
which will be expressed as G′_σ(x), G″_σ(x), etc. These estimators are optimal for additive, Gaussian, i.i.d. noise.
Figure 6: Central differences suggest that an approximation to the n-th derivative can be obtained from a difference between two displaced (n − 1)-th derivatives. Thus in (a) the sum of two G′ operators approximates −G″, and in (b) the sum of two G″ operators approximates G‴.
When convolved over a one-dimensional signal these give noise-insensitive esti-
mates of the derivatives of the signal, for example β′_σ(x) = (G′_σ ∗ β)(x).
Theorem 6 For the one-dimensional signal β(x), the following conditions on the smoothed signal β_σ = G_σ ∗ β are sufficient to indicate a local maximum in the signal β(x): β′_σ(x − ε) > 0, β′_σ(x + ε) < 0, and β″_σ(x) < 0.
The identity in (12) shows that these conditions are necessary and sufficient for
the existence of a local maximum in fi oe (x). The maximum principle
for the heat equation ([20], p. 161) implies that convolution by a Gaussian cannot
introduce new maxima. Thus the above conditions imply the existence of a maximum
in fi(x).
This suggests a practical method for locating maxima in a noisy one-dimensional
signal. Comparing the results of convolutions by derivatives of Gaussians will allow
us to determine the points where Theorem 6 holds. The loci of such points will
form distinct intervals with widths - 2ffl. The parameter oe determines the amount of
smoothing used to reduce noise-sensitivity.
Observe by central limits that G″_σ(x) ≈ [G′_σ(x + ε) − G′_σ(x − ε)] / 2ε. Thus for the derivative estimates β_σ, one would expect that β″_σ(x) ≈ [β′_σ(x + ε) − β′_σ(x − ε)] / 2ε, with the accuracy of the approximation a function of ε. Thus the conditions in Theorem 6 can be verified from examination of the derivative β′_σ(x): a linear combination of two points will give −β″_σ(x). More specifically, we adopt the approximation −G″_σ(x) ≈ [G′_σ(x − ε) − G′_σ(x + ε)] / 2ε, where ε/σ ∼ 1. Figure 6a shows that for such ε this is an acceptable approximation.
Thus, convolution by G 0 allows testing of all three conditions in Theorem 6 si-
multaneously. Using the L/L combinators of xII.A. we are now able to define a
one-dimensional operator which has a positive response only within a small interval
around a local maxima.
Operator 1 The one-dimensional normal operator for Local Maxima NM is
r
where
Clearly then, the key advantage of this NM operator is that:
Observation 7 The response NM (fi)(x) will be positive only if there is local maximum
in fi oe within the region [x \Gamma ffl; x
By reference to Definition 2 we can see that NM (fi)(x) ? 0 implies that both
Equation 12 then implies that
l (x)
r (x)
Thus a positive response ensures that fi 0
oe (x+ ffl) ! 0, which in turn
imply the presence of a local maximum on fi oe between x \Gamma ffl and x
3 Observe that although the local maximum in fi oe is guaranteed to fall within this region, the
corresponding maximum in fi is not necessarily so restricted. Qualitatively however, we can rely on
the observation that the maxima for a signal will converge on the centroid of that signal under heat
propagation (or as we convolve with larger and larger Gaussians). Considering the features of fi in
isolation then, we can state that the smoothing will cause the location of the local maximum in fi oe
Figure 7: Responses of the L/L positive contrast line operator and the linear operator −G″ which it reduces to, near a step edge whose local profile varies from the ideal. The graphs show the image profiles being operated on, covering (a) a simple step edge, (b) a compound step with slope above > 0, and (c) a compound step with slope above < 0. It can be seen that the L/L operator blocks the unwanted response near a step which is not also a local maximum (a, b), but that when the edge is also a local maximum (c) it does respond. The linear −G″ operator, however, responds positively in each of these cases, exhibiting consistent (and erroneous) displacement of the peak response.
The performance improvement from introducing this non-linearity is considerable.
The linear operator exhibits consistent patterns of false positive responses. The
simplest example is the response near a step (see fig. 7). The linear operator displays
a characteristic (false) peak response when the step is centered over one of the zeroes
in the operator profile. The logical/linear
operation prevents this error since both
halves of the operator register derivatives in the same direction and so do not
fulfill the conditions of (11). The L/L operator will respond positively only in the
case that the slope above the step is negative (i.e. only when the transition point is
also a local maximum).
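A sketch of this cross-sectional test is given below (Python/NumPy): two displaced first-derivative-of-Gaussian measurements are combined with the logical/linear AND, so a positive output requires β′_σ(x − ε) > 0 together with β′_σ(x + ε) < 0. Kernel support and normalization are assumptions, not the exact Operator 1.

    import numpy as np

    def gauss_d1(x, sigma):
        # first derivative of a Gaussian, G'_sigma(x)
        return -x / (np.sqrt(2 * np.pi) * sigma**3) * np.exp(-x**2 / (2 * sigma**2))

    def local_max_response(signal, sigma=2.0, eps=1.0):
        t = np.arange(-4 * sigma, 4 * sigma + 1)
        d_minus = np.convolve(signal, gauss_d1(t - eps, sigma), mode="same")  # ~ beta'(x - eps)
        d_plus = np.convolve(signal, gauss_d1(t + eps, sigma), mode="same")   # ~ beta'(x + eps)
        a, b = d_minus, -d_plus
        # logical/linear AND: positive only where both derivative conditions hold
        return np.where((a > 0) & (b > 0), a + b,
                        np.minimum(a, 0) + np.minimum(b, 0))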
A more specific operator can be derived by examining the implications of (2b)
beyond the local maxima. A discontinuous peak, such as that shown Fig. 5a is not
only a negative local minimum in fi 00
oe , but a positive local maximum in fi (4)
oe . Thus
two additional conditions are required
to shift towards the centroid of the local intensity distribution, a phenomenon observed in studies
of biological visual systems (e.g. the vernier acuity studies of [21]).
This can again be captured by central differences, combining two offset third-derivative
estimates.
Operator 2 The one-dimensional normal operator for Positive Contrast Lines
N P is
r (16)
where
Clearly, the addition of these two conditions to the NM operator will select a proper
subset of the positive responses to NM . But since these new conditions were derived
from an analysis of differential structure around roof discontinuities, the N P operator
responses will include these points and be more specific than the NM responses.
The extension of this analysis to the other curve types in xI.A. is straightforward.
The above analysis can be repeated in its entirety with a simple change of sign so as
to be specific for an identical feature of the opposite contrast.
Operator 3 The one-dimensional operator for Negative Contrast Lines NN is
\Gamman 0
\Gamman (3)
\Gamman (3)
Slightly more complicated is the case for edges. In the simplest case, a rising
discontinuity is signalled by a local maxima of the first derivative, thus imposing the
following conditions
or
This condition is just the familiar zero-crossing of a second derivative, exactly the
condition used by Haralick [22] and Canny [2]. Note that this operator actually selects
any inflection points in the signal.
Mirroring the analysis above, the verification of these conditions can be realized
in an L/L operator selecting inflection points N I :
Operator 4 The one-dimensional operator N I for Inflection Points is
where
Now as with the line operators, selection of more truly edge-like features is possible
by examination of other derivatives. Note that a blurred step edge has vanishing even
derivatives and sign-alternating non-zero odd derivatives (see fig. 5). The description
of an edge adopted in (17) is clearly consistent with this observation, but incomplete.
Note also that an "edge" is the derivative of a "peak," which was used for analysing
line-like images. With this additional information, we can adopt a more selective
operator for image edges which requires that all of the following conditions must be
verified
or
These conditions 4 can be verified in an L/L edge operator
Operator 5 The one-dimensional operator NE for Edges is
r
where
oe
This operator is significantly more selective than the 'zero-crossing of a second-
derivative,' [23] which is only one of the logical preconditions on which this operator
depends. One can therefore expect less of a problem of non-edge signals generating
edge-like responses with this operator than with these other less specific operators.
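For comparison, a reduced sketch of the edge cross-section test (Python/NumPy): a local maximum of the first derivative (β″_σ(x − ε) > 0 and β″_σ(x + ε) < 0) ANDed with a positive first derivative, as in footnote 4. The full N_E of Operator 5 checks additional derivative conditions not reproduced here.

    import numpy as np

    def g1(x, s):
        return -x / (np.sqrt(2 * np.pi) * s**3) * np.exp(-x**2 / (2 * s**2))

    def g2(x, s):
        return (x**2 / s**2 - 1) / (np.sqrt(2 * np.pi) * s**3) * np.exp(-x**2 / (2 * s**2))

    def ll_and(a, b):
        return np.where((a > 0) & (b > 0), a + b,
                        np.minimum(a, 0) + np.minimum(b, 0))

    def edge_response(signal, sigma=2.0, eps=1.0):
        t = np.arange(-4 * sigma, 4 * sigma + 1)
        d1 = np.convolve(signal, g1(t, sigma), mode="same")              # ~ beta'(x)
        d2_minus = np.convolve(signal, g2(t - eps, sigma), mode="same")  # ~ beta''(x - eps)
        d2_plus = np.convolve(signal, g2(t + eps, sigma), mode="same")   # ~ beta''(x + eps)
        return ll_and(ll_and(d2_minus, -d2_plus), d1)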
It is important to realize that the operator family which forms the basis for this
analysis is the Derivatives of Gaussian family of operators. Koenderink [24, 25] de-
4 The condition fi 0 (x) ? 0 is actually also implemented by Haralick [22] and Canny [2], since their
lateral maxima selection is followed by a threshold fi 0 (x) ? ', where ' is positive.
rived this family as one orthonormal solution of the problem of deriving size-invariant
spatial samplings of images. Members of this operator family can be transformed into
each other via a set of simple, unitary transformations. This has definite computational
advantages, since the higher derivatives and spatial offsets from pixel centers
may be derived from a small canonical set of operators by linear combinations. In ad-
dition, Young [26] has persuasively argued that these are exactly the basis functions
which are used by primate visual systems.
B. The Tangential Operator: Continuity
So far only the normal image structure (β_s) has been discussed. In order to extend
this result to two-dimensions, we must examine the tangential (curvilinear) structure
of the curves (ff). By Definition 1 we must verify the local continuity of candidate
curves. In addition, the extraction of orientation-specific measures was deemed essential
for further processing. In this section, these problems will be addressed by
imposing a further tangential structure on the operator. We will follow the same
course as for the normal cross-section-first a linear structure will be proposed which
will then be decomposed to reveal linear measurement operators for the components
of the structural preconditions. The emphasis again is placed on these preconditions
and their L/L combination.
Consider the image cross-section that is tangent to the image curve ff at every
point. Assume that the intensity variation along this curve is everywhere smooth
and corrupted only by additive Gaussian noise. The local contrast along the curve
as compared against its background is an acceptable measure of the curve's salience.
This suggests filtering image noise with a linear Gaussian operator
along
the tangential direction.
Near a curve end-point, however, the tangential section will exhibit an abrupt
discontinuity (see fig. 8). The indiscriminate smoothing of the Gaussian will obscure
this contrast discontinuity by, in effect, assuming that no discontinuity is present
before it is applied, thus violating the third criterion of xI.A. The local continuity of
the curve must be verified prior to smoothing.
To resolve this, consider a definition of the local continuity of a function. The
function f(x) is said to be continuous at x_0 iff lim_{ε→0+} f(x_0 − ε) = lim_{ε→0+} f(x_0 + ε) = f(x_0).
For our purposes, assume that the non-linearities associated with the normal components
of the L/L image curve operators are evaluated before 5 those in the tangential
5 With a pure linear operator expressed as the Cartesian product of normal and tangential one-0.51
Tangent
Linear
Tangent
Desired
(b)
Figure
8: The signal is the tangential section of an image line near the
discontinuous termination of the line (the endline). Note that the linear operator
(a) exhibits a smooth attenuation of response around the line ending.
We seek an operator (b) whose response attenuates abruptly at or near the
endline discontinuity.
Figure
9: Schematic of the half-field decomposition and line endings. The
elliptic regions in each figure represent the operator positions as the operator
is placed beyond the end of an image line. In (a) the operator is centered on
the image line and the line exists in both half-fields. In (b) the operator is
centered on the end-point and whereas the line only exists in one half-field,
the other half-field contains the end-point. In (c) the operator is centered
off the line and the line only exists in one half-field.
L/L operators. Then a curve termination point in the image must be signalled by
a contrast sign reversal in the image section seen by the tangential L/L operator-a
transition from a region which has been confirmed to be of the given category (posi-
tive response) to a region which has been rejected (negative response). We will call
the behaviour which the tangential operator must exhibit End-Line Stability. A
one-dimensional operator is end-line stable if and only if it responds positively only
when centered on a uniformly positive region of the image.
Representing the intensity variation along the curve ff as a function of the arc-length
I ff (s), the worst-case line-ending (or beginning) is a step in intensity at
End-line stability requires that the operator's response T(I ff )(s) be non-positive for all
positive for s ? ffl. 6 Given the requirement for symmetric approach outlined
in eq. (19), from fig. 9 we observe that this can be achieved by separately considering
the behaviour of the curve in each tangential direction around the operator centre.
We therefore adopt a partition which divides the operator kernel into two regions
along its length. Using the step function oe(x) of eq. (II.C.) a partition of G(x) around
0 is given by t+(x) = σ(x) G(x) and t-(x) = σ(−x) G(x).
Operator 6 The one-dimensional operator for Tangential Continuity T is T = t+ ∧ t-, the L/L AND of the two half-field components.
Note that t+(x) + t-(x) = G(x), as required. The smooth partition
operator σ_ρ(x) of eq. (9) can be used for a smooth, stable partition.
Observation 8 The operator T is end-line stable.
Consider the component responses in the neighbourhood of the step edge I ff
oe(s). The response of t + to this step is given by
operators, order-of-evaluation is unimportant, but with Logical/Linear operators order-
of-evaluation can be essential.
6 This property must also operate symmetrically at the other end of the curve.
G
(a)
G
(b)
Figure
10: A one-dimensional Gaussian (representing the tangential operator
partitioned into two regions (a) to obtain the two half-field operators
defined by eq. (20). The addition of 'stabilizers' is shown in (b).
The L/L AND of t + and t \Gamma to produce T requires that both component responses
be strictly positive for a positive response, thus whenever s - 0 around the step
described above, the T response is also zero. It is obvious that the same analysis
applies to the symmetric t \Gamma component and the 1 \Gamma oe(s) step edge, which describes
behaviour around the other end of the line. Thus the T operator is end-line stable
symmetrically around a step edge.
The above proof, however, depends critically on the use of ideal L/L combinators,
while in most cases we would prefer to use non-ideal combinators
(the ρ-approximate combinators with ρ < 1).
When the non-ideal combinators are used, the 'end-line stable' operator described
above does not properly attenuate responses beyond the line ending (see Fig. 11a).
In order to achieve this attenuation, it is necessary to force the component responses
in the region just beyond a line ending significantly below zero.
This is achieved with the addition of the 'stabilizers' (shown in Fig. 10b):
Thus, a smooth partition of G(x) by oe a (x) is augmented with an overshoot \GammabG 0 (x).
The overshoot guarantees that when the center of the operator is near the line ending
(see Fig. 11b) one component will give a negative response over the region where the
Tangent
t- * I
(a)
Tangent
t- * I
(b)
Figure
11: Responses (including component responses) of an unstabilized
endline operator (a) and the stabilized version (b). Note that the L/L combination
of the unstabilized components (a) does not, in fact, reduce to zero
beyond the end of line. This is due to the use of the L/L
ae approximation
with ae ! 1. In order to produce stable attenuation at a line ending, inhibitory
regions (stabilizers) are added to the t \Gamma and t + components, which
have the effect of pushing the component responses 'near' but `off ' the line-
ending below zero (b).
operator is not centered on the line. Since the stabilizers are symmetric, it does not
matter whether the operator is near a rising or falling line-ending-if the operator
is centered over the positive region it will respond. Furthermore, since the integral
of the stabilizers is zero, they will have no effect whatsoever on a locally constant
signal. The candidate tangential operator is then the L/L AND of these stabilized
components. The parameters a and b are chosen so that the cutoff is exactly aligned
with the line ending.
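The construction just described can be sketched as follows (Python/NumPy); the smooth step, the stabilizer weight b and the other parameter values are assumptions chosen only to illustrate the structure.

    import numpy as np

    def tangential_pair(sigma=2.0, a=4.0, b=0.5):
        # smooth partition of a Gaussian into two half-field kernels, each
        # augmented with an overshoot proportional to -G' (the 'stabilizers')
        x = np.arange(-4 * sigma, 4 * sigma + 1, dtype=float)
        G = np.exp(-x**2 / (2 * sigma**2)) / (np.sqrt(2 * np.pi) * sigma)
        dG = -x / sigma**2 * G
        step = 1.0 / (1.0 + np.exp(-a * x))       # smooth partition function
        t_plus = step * G - b * dG
        t_minus = (1.0 - step) * G + b * dG       # note: t_plus + t_minus = G
        return t_plus, t_minus

    def ll_and(a1, a2):
        return np.where((a1 > 0) & (a2 > 0), a1 + a2,
                        np.minimum(a1, 0) + np.minimum(a2, 0))

    def tangential_response(signal, **kw):
        # T: positive only when both half-fields independently confirm the curve
        t_plus, t_minus = tangential_pair(**kw)
        r_plus = np.convolve(signal, t_plus, mode="same")
        r_minus = np.convolve(signal, t_minus, mode="same")
        return ll_and(r_plus, r_minus)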
We have, in addition, extended these principles to multiple (more than two) tangential
regions (e.g. four), whose responses are combined with the L/L AND com-
binator. In essence, this looks for more tangential structure than simple continuity.
Requiring that regions which do not overlap the center of the operator show positive
responses as well, can be used to impose a minimum-length criterion on detected
curves. Davis [27] suggested that such an approach would have the effect of decreasing
noise sensitivity. This technique is described in detail in [28]. Note the similarity
between this approach and the ANDing of LGN afferents proposed by Marr and
Hildreth [23]).
C. The Two-Dimensional Image Operators
Finally then, we can construct the two-dimensional image curve operators by taking
the Cartesian product of the normal and tangential components. Our original definition
of curvilinear image structure (Def. 1) points directly to this by defining an image
curve as the locus of points satisfying some normal condition along a differentiable
curve in the image. In order to test both normal and tangential conditions then we
construct the component two-dimensional operators by taking the cartesian products
of all of the tangential and normal components (see Fig. 12 for an example). For each
tangential region, we combine the outputs of the normal combinations producing a
confirmation of the hypothesis that the normal condition is satisfied within that tangential
region. These hypotheses are then combined using the Tangential Continuity
combination of Op. 6 to verify the local continuity hypothesis.
Operator 7 The Logical/Linear Image Curve Operators \Psi i (where i selects
the operator category) are given by
where
r for Positive Contrast Lines,
\Gamman
\Gamman (3) l
\Gamman (3) r for Negative Contrast Lines,
r for Edges.
Thus, the constructed two-dimensional operator, a L/L combination of linear
two-dimensional operators, has a positive response only when the normal conditions
(which categorize line types) are consistent through the tangential regions, thus verifying
local curvilinear continuity.
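A structural sketch of this construction (Python with NumPy and SciPy assumed): each 2-D component kernel is the outer product of a tangential half-kernel and a normal sub-kernel; normal responses are combined within each tangential region and the per-region results are then ANDed. The generic AND used here stands in for the category-specific normal combinations of Operator 7, and oriented versions would be obtained by rotating the kernels.

    import numpy as np
    from scipy.signal import convolve2d

    def ll_and(x, y):
        return np.where((x > 0) & (y > 0), x + y,
                        np.minimum(x, 0) + np.minimum(y, 0))

    def curve_operator_response(image, tangential_parts, normal_parts):
        region_responses = []
        for t in tangential_parts:                     # e.g. the two half-field kernels
            resps = [convolve2d(image, np.outer(t, n), mode="same")
                     for n in normal_parts]            # e.g. displaced normal sub-kernels
            combined = resps[0]
            for r in resps[1:]:
                combined = ll_and(combined, r)         # normal condition within the region
            region_responses.append(combined)
        out = region_responses[0]
        for r in region_responses[1:]:
            out = ll_and(out, r)                       # tangential continuity across regions
        return out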
IV. Results
As per the decomposition into curve types described above, we create three different
classes of curve operator, for positive and negative contrast lines and for edges. The
operators in the following examples all have a tangential oe = 2:0 and a lateral
pixels. The ffl of the the lateral operator separations is 1:0. This ensures that all curves
are localized to connected regions with width - 2 pixels.
For the comparison images with Canny's algorithm, an upper threshold of - 15%
contrast was used. This value was adequate for suppressing most noise, although
some of the examples show that the noise has not been entirely eliminated. The low
threshold was set to 1% so as to come close to matching the sensitivity of the L/L
operators to very faint structures.
A natural but informal evaluation criterion for edge operators is the degree to
which the 'edge map' produced corresponds to a reasonable line drawing of an im-
age. We therefore use a test image of a statue "Paolina" (Fig. 13) not unlike the
subjects in Michelangelo's drawings (Fig. 14). This drawing is particularly suitable
because, as Koenderink has pointed out, the representation of creases and folds is
especially important for conveying a sense of three-dimensional structure [10, 11].
An examination of the Canny and L/L edge images for the statue reveals a marked
difference in the ability to distinguish perceptually salient edges from other kinds of
intensity changes. Comparison with Michelangelo's treatment reveals clearly that the
L/L operators represent more of the significant structure than the Canny operator.
Formal criteria for an image curve were established in §I.A., and these provide
less subjective demonstrations of where the Canny operator fails. We stress that
our goal here is not to focus on the shortcomings of the Canny operator, but rather
to indicate the shortcomings of the long tradition of edge operators that consist of
linear convolutions followed by thresholding, from Sobel [29] through Marr-Hildreth
[23] and most recently in Canny.
The first criterion, the need for predictable behaviour in the neighbourhood of multiple
image curves is examined in each of the details from the statue image (Figs. 15,
16, and 17). In these circumstances, the Canny operator either leaves large gaps
(Figs. 15b and 17b), or simply infers a smooth, undisturbed local contour (Fig. 16b).
This failure disrupts the ability to reconstruct the kind of information which gives a
sense of three-dimensional structure, since creases and folds involve the intersection
Figure 12: Illustration of the construction of the two-dimensional positive
contrast line operator. Each of the bottom row of operators is a linear
operator which is formed by one of the linear component operators $n_{l,r} \times t_{l,r}$.
The middle row represents the linear reduction of the operators $t_{l,r} \times N_P$,
in other words the sum of the two operators below. The operator shown at
the top of the pyramid is the linear reduction of $\Psi_P$, the sum of the middle
operators. The cross-hairs represent the centre of each operator and are
provided solely for purposes of alignment.
Figure 13: Image of statue (a), provided by Pietro Perona, and edge maps
computed by: (b) Canny's algorithm and (c) the L/L operators (both
algorithms are run at the same scale). Compare these representations with
the human expert's line drawing in Fig. 14, especially around the chin and
neck. The Canny operator consistently signals non-salient 'edges', misses
edges in complex neighbourhoods (e.g. near the T-junction of the chin and
neck) and shows discontinuous orientation changes as smooth. (The boxes
represent the approximate locations of the details shown in subsequent fig-
ures).
Figure 14: Line drawings, such as this Michelangelo, demonstrate in a
clear and compelling manner the significance of image curves for the visual
system. A well-executed line drawing depends critically on curvature, line
terminations and junctions for its visual salience. Koenderink has stressed
how the "bifurcation structures" define the arm and shoulder musculature
and the manner in which the chin occludes the neck. Observe the similarity
between this and the L/L operator responses, and differences with the Canny
operator.
and joining of just such multiple image curves. In the worst case, nearby curves can
interfere with the Canny operator's ability to extract much meaningful structure at
all (Fig. 19b).
This leads us to the second criterion, the need to preserve line terminations and
discontinuities. In our approach to early vision, we take curve discontinuities to be
represented by multiple, spatially coincident edges [30, 31, 32]. This holds for both
"corners" and "T-junctions"-such discontinuities are inadequately captured by the
Canny operator. Where there are clear discontinuities and junctions in the image
curves, the Canny operator either leaves gaps or gives smooth output curves (see in
particular the detail in Fig. 16b). The L/L operators represent such curve crossings
and junctions by supporting multiple independent orientations in a local neighbour-
hood, just the representation we require. So not only do the L/L operators respond
stably in the neighbourhood of multiple coincident curves, but they are also able
to adequately represent this coincidence. Preceding attempts at edge operators have
relied on the a priori assumption (usually implicit) that only one edge need be considered
in each local neighbourhood, and thus that only one edge need be represented at
each point in the output image. By rejecting that assumption and ensuring that the
L/L operators perform stably in the neighbourhood of edge conjunctions, we provide
a stable, complete representation of these fundamental image structures.
Recently, there has been some attempt to define "steerable filters" for edge detection
[33, 34, 35], and to have them provide a representation for image curve discontinuities
analogous to ours (i.e., as multiple orientations at the same position).
However, the linear spatial support of these operators again causes problems, in this
case a "smearing" or blurring of the corner energy over a neighbourhood. An additional
search process is therefore introduced to find the locations and directions of
maximal response [35], analogous to what we called "lateral maxima selection" in
earlier implementations of our system [4]. While such search processes provide some
of the necessary non-linear behaviour, they introduce additional interpretative difficulties
that do not arise with the L/L decomposition. Search also further complicates
parallel implementations by introducing sequential bottlenecks. Finally, the standard
steerable filters still exhibit mislocalization of line endings (which led in [35] to the
introduction of end-line detectors). The steerable filters approach is useful, however,
for reasons of computational efficiency, and we suggest that they may be used as a
basis set for the linear components of our L/L operators.
Finally, the third criterion, the potential confusion between lines and edges, is
seen to be addressed by the L/L operator approach. This problem is acute with
the Canny operator, and is deliberately confounded by the "edge energy" methods
[36, 34], thus necessitating a second stage of analysis that refers back to absolute image
intensities to fully describe the local structure of the image curve. The fingerprint
(Fig. 19) and the composite image of the statue (Fig 18) show the utility and richness
of a representation which separates edge and line information. The fingerprint is
clearly more appropriately and parsimoniously represented by the line image, while
the highlights on the statue are only revealed by the line image. It has been argued
that most line-like structures can be revealed by looking for locally parallel edge
responses, but clearly not all (e.g. the many highlights on the statue's surface). We
submit that parsimonious representations will combine features from both edge and
line images and interpret them as appropriate.
It is also important to note that computing Canny's algorithm on a parallel architecture
requires a number of iterations of dilation in order to implement the 'hys-
teretic threshold'. Thus its time complexity on a fully parallel implementation is
O(n), where n is the maximum length of a curve. Worst case, this is proportional to
the number of pixels in the image, thus representing a significant sequential bottle-neck
in an otherwise parallelizable algorithm. The L/L operator implementation is,
however, O(1) in both time and processors.
V. Conclusions
One of the major problems with linear operator approaches to detecting image curves
is their false-positive responses to uncharacteristic stimuli. After outlining the necessary
structural conditions for the existence of an image curve, we developed an
operator decomposition which allows for the efficient testing of these conditions, and
the elimination of the associated false-positive responses. To achieve this, it was essential
to consider both the cross-section of the intensity image and the low-order
differential structure of the curve itself.
The operator families which are used as a basis set for these computations consist
of spatial derivatives of Gaussians. It is a widely held assumption in the field that
measurements of higher order derivatives are unstable and therefore unusable. We
have deliberately chosen to highlight these higher order derivatives and have demonstrated
that in the context of L/L operators these measurements are not only stable
but extremely useful, even at high resolutions.
The output of the operators is unconventional since we chose at an early stage to
not impose a functional mapping between image points and local linear structure. Not
only may there be multiple line types at a single image position, there may even be
multiple lines of a single type. In [28] this representation is referred to as a Discrete
Image Trace. It is independently justified for its ability to implicitly represent
continuity, intersection and some topological properties of differentiable structures
representable as fibre bundles on the image (e.g. image curves). Moreover, it is
shown how these discrete traces may be refined using relaxation labelling to verify
more global constraints and begin to construct higher-level representations.
Figure 15: Detail of statue (a) from lower left near jaw and neck, and edge
maps computed by: (b) Canny's algorithm, and (c) L/L operators (both
algorithms are run at the same scale). Note that Canny's algorithm does
not connect the two edges which join at the T-junction. The L/L operator
responses represent the discontinuity by supporting two independent
orientations in the same local neighbourhood.
Figure 16: Detail of statue (a) from upper right, and edge maps computed
by: (b) Canny's algorithm, and (c) L/L operators (both algorithms are run
at the same scale). The Canny operator misses much of the rich structure in
this small region as a result of interference between the nearby edges and the
choice of high threshold. A lower threshold would have the effect of exposing
more structure, but then the noisy responses seen in Fig. 13b would also be
expanded. The L/L operator exposes this structure and also represents the
discontinuities and bifurcations in the underlying edge structure.
Figure 17: Detail of statue (a) from lower right near shoulder, and edge
maps computed by: (b) Canny's algorithm, and (c) L/L operators (both
algorithms are run at the same scale). Again the Canny operator does not
represent the conjunction of edges in this neighbourhood, while the L/L
operators show the edge bifurcation clearly.
Figure 18: The statue as represented by the three categories of L/L opera-
tors. The black lines show the edge responses while the white and grey lines
show the bright and dark lines respectively. Note that some features, such
as the bottom of the palm of the hand, are only clearly represented by the
line images.
Figure 19: Fingerprint image (a), and edge maps computed by (b) Canny's
algorithm, and (c) L/L edge operators. The most appropriate representation
(d) is the L/L positive contrast line operator. The complexity of display
and the proximity between nearby image features are the most significant
contributors to the breakdown of Canny's algorithm in this case. These
problems are dealt with in the L/L operators by the explicit testing of local
consistency before combining component inputs. This serves to isolate features
even when other nearby structures exist within the spatial support of
the operator.
More generally, we have introduced a flexible language for describing a useful
class of non-linearities in operators. This language of Logical/Linear operators
serves to combine existing linear models with logical descriptions of structure to
produce operators which have guaranteed stable behaviour. This class of operators
represents a new approach to the problem of translating linear measures into logical
categories. Thus they may prove essential in the eventual solution of a wide variety
of classification problems, and in the principled and realistic modelling of neural
networks.
--R
"An operator which locates edges in digital pictures,"
"A computational approach to edge detection,"
"An evaluation of the two-dimensional gabor filter model of simple receptive fields in cat striate cortex,"
"The organization of curve detection: Coarse tangent fields and fine spline coverings,"
"On boundary detection,"
"Using Canny's criteria to derive a recursively implemented optimal edge detector,"
"Understanding image intensities,"
"The local structure of image discontinuities in one dimension,"
"The singularities of the visual mapping,"
"The shape of smooth objects and the way contours end,"
"The internal representation of solid shape and visual explo- ration,"
"Length summation in simple cells of cat striate cortex,"
"Spatial frequency analysis in the visual system,"
New York
A Survey of Modern Algebra (fourth edition).
"Learning internal representations by error propagation,"
"Representation of local geometry in the visual system,"
Maximum Principles in Differential Equa- tions
"Mechanisms responsible for the assessment of visual location: theory and evidence,"
"Digital step edges from zero-crossing of second directional derivatives,"
"Theory of edge detection,"
"Operational significance of receptive field assemblies,"
"Receptive field families,"
"The gaussian derivative theory of spatial vision: Analysis of cortical receptive field line-weighting profiles,"
"On models for line detection,"
"Toward Discrete Geometric Models for Early Vision,"
Pattern Classification and Scene Analysis.
"Corner detection in curvilinear dot grouping,"
"The computational connection in vision: Early orientation selec- tion,"
"Two stages of curve detection suggest two styles of visual computation,"
"The design and use of steerable filters for image analysis, enhancement and wavelet decomposition,"
"Detecting and localizing edges composed of steps, peaks and roofs,"
"Steerable-scalable kernels for edge detection and junction analysis,"
"Feature detection in human vision: a phase dependent energy model,"
--TR
--CTR
Song Wang , Feng Ge , Tiecheng Liu, Evaluating edge detection through boundary detection, EURASIP Journal on Applied Signal Processing, v.2006 n.1, p.213-213, 01 January
Doug DeCarlo , Adam Finkelstein , Szymon Rusinkiewicz , Anthony Santella, Suggestive contours for conveying shape, ACM Transactions on Graphics (TOG), v.22 n.3, July
P. Meer , B. Georgescu, Edge Detection with Embedded Confidence, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.12, p.1351-1365, December 2001
Jonas August , Steven W. Zucker, Sketches with Curvature: The Curve Indicator Random Field and Markov Processes, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.387-400, April
Stefano Casadei , Sanjoy Mitter, Hierarchical Image SegmentationPart I: Detection of Regular Curves in a Vector Graph, International Journal of Computer Vision, v.27 n.1, p.71-100, March, 1998
Gang Li , Steven W. Zucker, Contextual Inference in Contour-Based Stereo Correspondence, International Journal of Computer Vision, v.69 n.1, p.59-75, August 2006
Ohad Ben-Shahar , Steven W. Zucker, The Perceptual Organization of Texture Flow: A Contextual Inference Approach, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.4, p.401-417, April
Peter N. Belhumeur , James S. Duncan , Gregory D. Hager , Drew V. Mcdermott , A. Stephen Morse , Steven W. Zucker, Computational Vision at Yale, International Journal of Computer Vision, v.35 n.1, p.5-12, Nov. 1999
Benoit Dubuc , Steven W. Zucker, Complexity, Confusion, and Perceptual Grouping. Part I: The Curve-like Representation, International Journal of Computer Vision, v.42 n.1-2, p.55-82, April/May 2001
Benoit Dubuc , Steven W. Zucker, Complexity, Confusion, and Perceptual Grouping. Part I: The Curve-like Representation, Journal of Mathematical Imaging and Vision, v.15 n.1-2, p.55-82, July/October 2001
Jacob Feldman, Perceptual Grouping by Selection of a Logically Minimal Model, International Journal of Computer Vision, v.55 n.1, p.5-25, October
Song Wang , Toshiro Kubota , Jeffrey Mark Siskind , Jun Wang, Salient Closed Boundary Extraction with Ratio Contour, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.4, p.546-561, April 2005
Blaise Agera y Arcas , Adrienne L. Fairhall , William Bialek, Computation in a single Neuron: Hodgkin and Huxley revisited, Neural Computation, v.15 n.8, p.1715-1749, August
James H. Elder, Are Edges Incomplete?, International Journal of Computer Vision, v.34 n.2-3, p.97-122, Nov. 1999
Vicent Caselles , Bartomeu Coll , Jean-Michel Morel, Topographic Maps and Local Contrast Changes in Natural Images, International Journal of Computer Vision, v.33 n.1, p.5-27, Sept. 1999
Matthias S. Keil, Smooth Gradient Representations as a Unifying Account of Chevreul's Illusion, Mach Bands, and a Variant of the Ehrenstein Disk, Neural Computation, v.18 n.4, p.871-903, April 2006 | nonlinear operators;edge detection;image processing;feature extraction;computer vision |
628753 | Symmetry as a Continuous Feature. | AbstractSymmetry is treated as a continuous feature and a Continuous Measure of Distance from Symmetry in shapes is defined. The Symmetry Distance (SD) of a shape is defined to be the minimum mean squared distance required to move points of the original shape in order to obtain a symmetrical shape. This general definition of a symmetry measure enables a comparison of the amount of symmetry of different shapes and the amount of different symmetries of a single shape. This measure is applicable to any type of symmetry in any dimension. The Symmetry Distance gives rise to a method of reconstructing symmetry of occluded shapes. We extend the method to deal with symmetries of noisy and fuzzy data. Finally, we consider grayscale images as 3D shapes, and use the Symmetry Distance to find the orientation of symmetric objects from their images, and to find locally symmetric regions in images. | Introduction
One of the basic features of shapes and objects is symme-
try. Symmetry is considered a pre-attentive feature which
enhances recognition and reconstruction of shapes and objects
[4]. Symmetry is also an important parameter in physical
and chemical processes and is an important criterion
in medical diagnosis.
a. b. c.
Fig. 1. Faces are not perfectly symmetrical.
a) Original image.
b) Left half of original image and its reflection.
c) Right half of original image and its reflection.
However, the exact mathematical definition of symmetry
[23], [32] is inadequate to describe and quantify the symmetries
found in the natural world or those found in the
visual world (a classic example is that of faces - see Figure
1). Furthermore, even perfectly symmetric objects lose
their exact symmetry when projected onto the image plane
or the retina due to occlusion, perspective transformations,
digitization, etc. Thus, although symmetry is usually considered
a binary feature, (i.e. an object is either symmetric
or it is not symmetric), we view symmetry as a continuous
feature where intermediate values of symmetry denote
some intermediate "amount" of symmetry. This concept of
continuous symmetry is in accord with our perception of
symmetry as can be seen, for example, in Figure 2.
Considering symmetry as a continuous feature, we introduce
a "Symmetry Distance" that can measure and
quantify all types of symmetries of objects. This measure
H. Zabrodsky and S. Peleg are with the Institute of Computer Sci-
ence, The Hebrew University of Jerusalem, Jerusalem, 91904 Israel.
D. Avnir is with the Department of Organic Chemistry, The Hebrew
University of Jerusalem, Jerusalem, 91904 Israel.
a. b. c. d.
Fig. 2. Perceiving continuous symmetry.
a) A shape perceived as perfectly symmetric (the oblique mirror
axis passing through the vertex). b) Shortening an arm, the
shape is perceived as "almost" symmetric. c) Further shortening
of the arm, the shape is perceived as having "less" symmetry.
d) When the arm is eliminated the shape is again perfectly symmetric
(with a mirror axis perpendicular to the existing arm).
will enable a comparison of the "amount" of symmetry of
different shapes and the "amount" of different symmetries
of a single shape. Additionally, we present a simple and
general algorithm for evaluating this measure.
In Section II we define the Symmetry Distance and in
Section III describe a method for evaluting this measure.
Following, in Sections IV-VI we describe features of the
symmetry distance including its use in dealing with occluded
objects and with noisy data. Finally in Section VII
we describe the application of the symmetry distance to
finding face orientation and to finding locally symmetric
regions in images.
A. Definitions of Symmetry
In this paper we refer to the symmetries defined be-
low, although the definitions, methods and discussions presented
in this work apply to all other symmetries in any
dimension. For further details and a review see [34].
A 2D object is mirror-symmetric if it is invariant under
a reflection about a line (the mirror-symmetry axis).
A 3D object is mirror-symmetric if it is invariant under
a reflection about a plane.
A 2D object has rotational-symmetry of order n, denoted
Cn-Symmetry, if it is invariant under rotation of $2\pi/n$ radians about the center of mass of the object.
A 3D object has rotational-symmetry of order n, denoted
Cn-Symmetry, if it is invariant under rotation of $2\pi/n$ radians about a line (the rotational symmetry axis)
passing through the center of mass of the object.
Radial symmetry is the symmetry of a 2D object having
both mirror-symmetry and Cn -symmetry (note that such
objects have n axes of mirror-symmetry). Radial symmetry
of order n is denoted Dn-Symmetry. Circular symmetry
is $C_\infty$-symmetry (see Figure 3).
Fig. 3. Examples of symmetries: a) C 8 -symmetry b) mirror-
symmetry c) D 8 -symmetry d) circular symmetry.
B. Studies of Symmetry In Computer Vision
Detection of symmetry in images has been widely stud-
ied; with respect to circular (radial) symmetries [6], [29]
and with respect to mirror symmetries [22], [26]. Transformation
of the symmetry detection problem to a pattern
matching problem introduces efficient algorithms for detection
of mirror and rotational symmetries and the location
of symmetry axes [3], [1]. These algorithms assume noise-free data and
detect symmetry, if it exists, on a global scale.
As an intrinsic characteristic of objects and shapes, symmetry
has been used to describe and recognize shapes and
objects both on a global scale [17], [21] and as a local feature. Symmetrical features of images have
been exploited for better image compression [20]. Symmetrical
descriptions of shapes and the detection of symmetrical
features of objects are useful in guiding shape matching,
model-based object matching and object recognition [27],
[29]. Reconstruction of 3D objects has also been implemented
using symmetry as a constraint [31], [24]. More
recently, symmetry has been used to discriminate textures
[9] and has been used in guiding robot grasping [7].
Considerable work has been done on skewed symmetries
including detection and exploiting them in the reconstruction
of 3D structure [26], [27], [18], [14], [28].
So far, symmetry has been treated as a binary feature:
either it exists or it does not exist. The approach to symmetry
as a continuous feature is novel, although several
early studies have approached the question of measuring
symmetry: In an early work, Grünbaum [15] reviews methods
of geometrically measuring symmetry of convex sets.
Yodogawa [33] has presented an evaluation of symmetry
(namely "Symmetropy") in single patterns which uses information
theory to evaluate the distribution of symmetries
in a pattern. Marola [22] presents a coefficient of
mirror-symmetry with respect to a given axis where global
mirror-symmetry is found by roughly estimating the axis
location and then fine tuning the location by minimizing
the symmetry coefficient. Gilat [13], Hel-Or et al. [16] and
Avnir et al. [5] present the idea of a Measure of Chirality
(a measure of deviation from mirror-symmetry). Similar
to Marola, Gilat's chirality measure is based on minimizing
the volume difference between the object and its reflection
through a varying plane of reflection. Hel-Or et al. present
a measure of chirality for 2D objects based on rotational
effects of chiral bodies on the surround.
These symmetry detection and evaluation methods are
each limited to a certain type of symmetry (mirror or circular
symmetry). In this paper we introduce the notion
of continuous symmetry and present a general continuous
measure, the symmetry distance, for evaluating all types of
symmetries in any dimension.
II. A Continuous Symmetry Measure - Definition
We define the Symmetry Distance (SD) as a quantifier
of the minimum 'effort' required to transform a given
shape into a symmetric shape. This 'effort' is measured by
the mean of the square distances each point is moved from
its location in the original shape to its location in the symmetric
shape. Note that no a priori symmetric reference
shape is assumed.
Denote by $\Omega$ the space of all shapes of a given dimension,
where each shape P is represented by a sequence of n points $\{P_i\}_{i=0}^{n-1}$.
We define a metric d on this space as follows:
$$d(P, Q) = \frac{1}{n} \sum_{i=0}^{n-1} \|P_i - Q_i\|^2 .$$
This metric defines a distance function between every two shapes in $\Omega$.
We define the Symmetry Transform of a shape P as the symmetric shape $\hat{P}$ closest to P in terms of the metric d.
The Symmetry Distance (SD) of a shape P is defined as the distance between P and its Symmetry Transform:
$$SD = d(P, \hat{P}) .$$
The SD of a shape $P = \{P_i\}_{i=0}^{n-1}$ is evaluated by finding the symmetry transform $\hat{P}$ of P and computing:
$$SD = \frac{1}{n} \sum_{i=0}^{n-1} \|P_i - \hat{P}_i\|^2 .$$
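These definitions translate directly into code. A minimal sketch in Python/NumPy follows; the correspondence between P and its symmetry transform is assumed to be given by the point ordering, and the routine computing the closest symmetric shape is supplied by the caller (one such routine is sketched in Section III).

```python
import numpy as np

def normalize(points, radius=10.0):
    """Scale the shape so that the maximum distance from the centroid is
    `radius` (the normalization applied before computing the SD)."""
    pts = np.asarray(points, dtype=float)
    centroid = pts.mean(axis=0)
    scale = radius / np.max(np.linalg.norm(pts - centroid, axis=1))
    return centroid + (pts - centroid) * scale

def shape_metric(p, q):
    """d(P, Q): mean squared distance between corresponding points."""
    p, q = np.asarray(p, dtype=float), np.asarray(q, dtype=float)
    return np.mean(np.sum((p - q) ** 2, axis=1))

def symmetry_distance(points, symmetry_transform):
    """SD = d(P, P_hat), where P_hat is the closest symmetric shape."""
    p = normalize(points)
    return shape_metric(p, symmetry_transform(p))
```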
This definition of the Symmetry Distance implicitly implies
invariance to rotation and translation. Normalization
of the original shape prior to the transformation additionally
allows invariance to scale (Figure 4). We normalize by
scaling the shape so that the maximum distance between
points on the contour and the centroid is a given constant
(in this paper all examples are given following normalization
to 10). The normalization presents an upper bound on
the mean squared distance moved by points of the shape.
Thus the SD value is limited in range, where SD=0 for
perfectly symmetric shapes. The general definition of the
Symmetry Distance enables evaluation of a given shape
for different types of symmetries (mirror-symmetries, rotational
symmetries etc). Moreover, this generalization allows
comparisons between the different symmetry types,
and allows expressions such as "a shape is more mirror-symmetric
than C2-symmetric".
Fig. 4. Calculating the Symmetry Distance of a shape:
a) Original shape $\{P_0, P_1, P_2\}$.
b) Normalized shape $\{P'_0, P'_1, P'_2\}$, such that the maximum distance to the center of mass is a given constant (10).
c) Applying the symmetry transform to obtain a symmetric shape $\{\hat{P}_0, \hat{P}_1, \hat{P}_2\}$.
d) The SD is the mean squared distance between corresponding points of (b) and (c).
An additional feature of the Symmetry Distance is that
we obtain the symmetric shape which is 'closest' to the
given one, enabling visual evaluation of the SD.
Fig. 5. Symmetry Transforms of a 2D polygon.
a) The original 2D polygon; b)-e) its Symmetry Transforms and the corresponding
SD values with respect to mirror-symmetry and to rotational symmetries
(C3-symmetry and C6-symmetry among them).
An example of a 2D polygon and its symmetry transforms
and SD values is shown in Fig. 5. Note that the
shape in Fig. 5e is the most similar to the original shape
Fig. 5a and, indeed, its SD value is the smallest. In the
next Section we describe a geometric algorithm for deriving
the Symmetry Transform of a shape. In Section IV
we deal with the initial step of representing a shape by a
collection of points.
III. Evaluating the Symmetry Transform
In this Section we describe a geometric algorithm for
deriving the Symmetry Transform of a shape represented
by a sequence of points $\{P_i\}_{i=0}^{n-1}$. In practice we find the
Symmetry Transform of the shape with respect to a given
symmetry group. For simplicity and clarity of explanation,
we describe the method by using some examples. Mathematical
proofs and derivations can be found in Appendix A.
We first present the algorithm for Cn-symmetry and later
generalize it to any finite symmetry group.
Following is a geometrical algorithm for deriving the
symmetry transform of a shape P having n points with
respect to rotational symmetry of order n (Cn -symmetry).
This method transforms P into a regular n-gon, keeping
the centroid in place.
Algorithm for finding the Cn-symmetry transform:
1. Fold the points $\{P_i\}_{i=0}^{n-1}$ by rotating each point $P_i$ counterclockwise about the centroid by $2\pi i/n$ radians (Figure 6b).
2. Let $\hat{P}_0$ be the average of the folded points $\{\tilde{P}_i\}_{i=0}^{n-1}$ (Figure 6c).
3. Unfold the points, obtaining the Cn-symmetric points $\{\hat{P}_i\}_{i=0}^{n-1}$, by duplicating $\hat{P}_0$ and rotating it clockwise about the centroid by $2\pi i/n$ radians (Figure 6d).
Fig. 6. The C3-symmetry Transform of 3 points.
a) Original 3 points $\{P_i\}_{i=0}^{2}$.
b) Fold the points into $\{\tilde{P}_i\}_{i=0}^{2}$.
c) Average the folded points, obtaining $\hat{P}_0$.
d) Unfold the average point, obtaining $\{\hat{P}_i\}_{i=0}^{2}$.
The centroid ω is marked by ⊕.
The set of points $\{\hat{P}_i\}_{i=0}^{n-1}$ is the symmetry transform of the points $\{P_i\}_{i=0}^{n-1}$,
i.e. they are the Cn-symmetric configuration of points closest to $\{P_i\}_{i=0}^{n-1}$ in terms of the metric
d defined in Section II (in terms of the average distance squared). Proof is given in Appendix A.
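The fold/average/unfold steps translate directly into code; below is a minimal sketch in Python/NumPy for a shape of exactly n ordered points (the extension to qn points, described next, applies the same fold to q interlaced subsets).

```python
import numpy as np

def rotate(points, angle, center):
    """Rotate 2D point(s) about `center` by `angle` radians (counterclockwise)."""
    c, s = np.cos(angle), np.sin(angle)
    R = np.array([[c, -s], [s, c]])
    return (points - center) @ R.T + center

def cn_symmetry_transform(points, n=None):
    """Closest C_n-symmetric configuration of n ordered points (folding method)."""
    pts = np.asarray(points, dtype=float)
    n = len(pts) if n is None else n
    centroid = pts.mean(axis=0)
    # 1. Fold: rotate point i by 2*pi*i/n about the centroid.
    folded = np.array([rotate(pts[i], 2 * np.pi * i / n, centroid) for i in range(n)])
    # 2. Average the folded points.
    p0_hat = folded.mean(axis=0)
    # 3. Unfold: duplicate the average and rotate it back (clockwise).
    return np.array([rotate(p0_hat, -2 * np.pi * i / n, centroid) for i in range(n)])
```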
The common case, however, is that shapes have more
points than the order of the symmetry. For symmetry of
order n, the folding method can be extended to shapes
having a number of points which is a multiple of n. A 2D
shape P having qn points is represented as q sets $\{S_r\}_{r=0}^{q-1}$ of n interlaced points
(see discussion in
Appendix
C). The Cn -symmetry transform of P (Figure 7)
is obtained by applying the above algorithm to each set of
points separately, where the folding is performed about
the centroid of all the points
a. b.
Fig. 7. Geometric description of the C3-symmetry transform for 6 points.
The centroid ω of the points is marked by ⊕.
a) The original points shown as 2 sets of 3 interlaced points.
b) The obtained C3-symmetric configuration.
The procedure for evaluating the symmetry transform for
mirror-symmetry is similar: Given a shape having 2q points, we divide the points
into q pairs of points (see Appendix C) and, given an initial guess of the symmetry axis,
we apply the folding/unfolding method as follows (see Figure 8):
Algorithm for finding the mirror-symmetry transform:
1. For every pair of points $\{P_0, P_1\}$:
(a) fold - by reflecting across the mirror symmetry axis, obtaining $\{\tilde{P}_0, \tilde{P}_1\}$.
(b) average - obtaining a single averaged point $\hat{P}_0$.
(c) unfold - by reflecting back across the mirror symmetry axis, obtaining $\{\hat{P}_0, \hat{P}_1\}$.
2. Minimize over all possible axes of mirror-symmetry.
The minimization performed in step 2 is, in practice, replaced
by an analytic solution (see Appendix B).
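The same procedure in code (a Python/NumPy sketch). For brevity the optimal axis orientation is found here by a dense numerical search over angles rather than by the analytic solution of Appendix B.

```python
import numpy as np

def reflect(points, angle, center):
    """Reflect 2D points across the line through `center` at `angle` radians."""
    c, s = np.cos(2 * angle), np.sin(2 * angle)
    R = np.array([[c, s], [s, -c]])
    return (points - center) @ R.T + center

def mirror_symmetry_transform(pairs, n_angles=360):
    """Closest mirror-symmetric configuration for a list of point pairs:
    each pair is folded, averaged, and unfolded; the axis angle minimizing
    the mean squared displacement is selected by brute-force search."""
    pairs = np.asarray(pairs, dtype=float)          # shape (q, 2, 2)
    center = pairs.reshape(-1, 2).mean(axis=0)
    best = None
    for angle in np.linspace(0.0, np.pi, n_angles, endpoint=False):
        sym = []
        for p0, p1 in pairs:
            folded = np.array([p0, reflect(p1[None, :], angle, center)[0]])
            avg = folded.mean(axis=0)                                   # average
            sym.append([avg, reflect(avg[None, :], angle, center)[0]])  # unfold
        sym = np.asarray(sym)
        cost = np.mean(np.sum((sym - pairs) ** 2, axis=2))
        if best is None or cost < best[0]:
            best = (cost, sym)
    return best[1], best[0]      # (closest mirror-symmetric shape, its SD)
```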
Fig. 8. The mirror-symmetry transform of a single pair of points for angle θ,
where the centroid of the shape is assumed to be at the origin.
a) The two points $\{P_0, P_1\}$ are folded to obtain $\{\tilde{P}_0, \tilde{P}_1\}$.
b) Points $\tilde{P}_0$ and $\tilde{P}_1$ are averaged to obtain $\hat{P}_0$.
c) $\hat{P}_1$ is obtained by reflecting $\hat{P}_0$ about the symmetry axis.
This method extends to any finite point-symmetry
group G in any dimension, where the folding and unfolding
are performed by applying the group elements about the
centroid (see derivations in Appendix A):
Given a finite symmetry group G (having n elements)
and given a shape P represented by qn points,
the symmetry transform of the shape with respect to G-
symmetry is obtained as follows:
Algorithm for finding the G-symmetry transform:
1. The points are divided into q sets of n points.
2. For every set of n points:
(a) The points are folded by applying the elements of
the G-symmetry group.
(b) The folded points are averaged, obtaining a single
averaged point.
(c) The averaged point is unfolded by applying the inverse
of the elements of the G-symmetry group. A
G-symmetric set of n points is obtained.
3. The above procedure is performed over all possible
orientations of the symmetry axis and planes of G.
Select that orientation which minimizes the Symmetry
Distance value. As previously noted, this minimization
is analytic in 2D (derivation is given in Appendix B) but requires an iterative minimization process
in 3D (except for the 3D mirror-symmetry group
where a closed form solution has been derived [35]).
Thus, the Symmetry Distance of a shape may be evaluated
with respect to any finite symmetry group. One
may consider as the 'appropriate' symmetry of a shape,
that symmetry group which minimizes the Symmetry Dis-
tance. A minimization over all symmetry groups can be
applied, however, minimization over a finite number of low
order groups usually suffices (in 2D minimization over mirror
and prime-ordered rotational symmetries is sufficient.
Since evaluation of the Symmetry Distance in 2D is ana-
lytical, this minimization is inexpensive).
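In code, choosing the 'appropriate' symmetry amounts to evaluating the SD for a few candidate groups and keeping the minimum. The sketch below reuses rotate and symmetry_distance from the earlier sketches and adds the interlaced-subset version of the C_n transform; the function names are ours, not the paper's.

```python
import numpy as np

def cn_transform_interlaced(points, n):
    """C_n transform for q*n points: fold/average/unfold each of the q
    interlaced subsets, folding about the common centroid."""
    pts = np.asarray(points, dtype=float)
    q = len(pts) // n
    centroid = pts.mean(axis=0)
    out = np.empty_like(pts)
    for r in range(q):
        subset = pts[r::q]                    # one interlaced subset of n points
        folded = np.array([rotate(subset[i], 2 * np.pi * i / n, centroid)
                           for i in range(n)])
        avg = folded.mean(axis=0)
        out[r::q] = [rotate(avg, -2 * np.pi * i / n, centroid) for i in range(n)]
    return out

def best_symmetry(points, orders=(2, 3, 5, 7)):
    """Evaluate the SD for several candidate C_n groups and keep the minimum."""
    scores = {n: symmetry_distance(points, lambda p, n=n: cn_transform_interlaced(p, n))
              for n in orders if len(points) % n == 0}
    best_n = min(scores, key=scores.get)
    return best_n, scores[best_n]
```

Mirror symmetry can be added to the comparison by also evaluating the mirror-symmetry transform sketched above.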
IV. Point Selection for Shape Representation
As symmetry has been defined on a sequence of points,
representing a given shape by points must precede the application
of the symmetry transform. Selection of points
Fig. 9. When measuring symmetry of shapes inherently created from points, we represent the shape by these points.
influences the value of SD and depends on the type of object
to be measured. If a shape is inherently created from
points (such as a graph structure or cyclically connected
points creating a polygon) we can represent a shape by
these points (Figure 9). This is the case when analysing
symmetry of molecules ([37], [35]). There are several ways
to select a sequence of points to represent continuous 2D
shapes. One such method is selection at equal distances -
the points are selected along the shape's contour such that
the curve length between every pair of adjacent points is equal (Figure 10).
Fig. 10. Point selection by equal
distance: points are selected
along the contour at equal distances
in terms of curve length.
In this example six points are
distributed along the contour
spaced by 1/6 of the full contour
length.
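A sketch of selection at equal distances for a closed polygonal contour (Python/NumPy):

```python
import numpy as np

def select_equal_distances(contour, m):
    """Select m points along a closed polygonal contour so that consecutive
    selected points are separated by equal curve length."""
    pts = np.asarray(contour, dtype=float)
    closed = np.vstack([pts, pts[:1]])                 # close the contour
    seg = np.linalg.norm(np.diff(closed, axis=0), axis=1)
    cum = np.concatenate([[0.0], np.cumsum(seg)])      # cumulative arc length
    targets = np.linspace(0.0, cum[-1], m, endpoint=False)
    samples = []
    for t in targets:
        i = np.searchsorted(cum, t, side="right") - 1
        frac = 0.0 if seg[i] == 0 else (t - cum[i]) / seg[i]
        samples.append(closed[i] + frac * (closed[i + 1] - closed[i]))
    return np.array(samples)
```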
In many cases, however, contour length is not a meaningful
measure, as in noisy or occluded shapes. In such
cases we propose to select points on a smoothed version of
the contour and then project them back onto the original
contour. The smoothing of the continuous contour is performed
by moving each point on the continuous contour to
the centroid of its contour neighborhood. The greater the
size of the neighborhood, the greater is the smoothing (see Figure 11). The level of smoothing can vary and for a high
level of smoothing, the resulting shape becomes almost a
circle about the centroid [25]. In this case, equal distances
on the circular contour are equivalent to equal angles about
the center. For maximum smoothing we, therefore, use
selection at equal angles (Figure 12) where points are selected
on the original contour at equal angular intervals
around the centroid. For further details on the selection of
points by smoothing see [34].
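Selection at equal angles can be approximated on a densely sampled contour by picking, for each target angle about the centre of selection, the contour point whose polar angle is closest (a sketch; the continuous formulation intersects rays with the contour):

```python
import numpy as np

def select_equal_angles(contour, m, center=None):
    """Pick m contour points at (approximately) equal angular intervals
    about `center` (the centroid by default)."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0) if center is None else np.asarray(center, float)
    angles = np.arctan2(pts[:, 1] - center[1], pts[:, 0] - center[0])
    chosen = []
    for target in np.linspace(-np.pi, np.pi, m, endpoint=False):
        diff = np.angle(np.exp(1j * (angles - target)))   # wrapped angular difference
        chosen.append(pts[np.argmin(np.abs(diff))])
    return np.array(chosen)
```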
V. Symmetry of Occluded Shapes - Center of
Symmetry
As described in Section IV, a shape can be represented
by points selected at regular angular intervals about the
centroid. Angular selection of points about a point other
than the centroid will give a different symmetry distance
value. We define the center of symmetry of a shape
as that point in the 2D plane about which selection at
equal angles gives the minimum symmetry distance value
(Fig. 13). Intuitively, the center of symmetry is that point
about which rotation of the shape aligns it as close as possible
with itself (in terms of the mean squared distances).
When a symmetric shape is complete the center of symmetry
coincides with the centroid of the shape. However,
the center of symmetry of truncated or occluded objects
does not align with its centroid but is closer to the centroid
of the complete shape. Thus the center of symmetry
of a shape is robust under truncation and occlusion.
To locate the center of symmetry, we use an iterative
procedure of gradient descent that converges from the centroid
of an occluded shape to the center of symmetry. Denote
by center of selection, that point about which points
are selected using selection at equal angles. We initialize
the iterative process by setting the centroid as the cen-
Fig. 11. Selection by smoothing
a) Original continuous contour.
b) Points are selected at equal distances along the contour.
c-f) The smoothed shape is obtained by averaging neighboring
points of (b). The amount of smoothing depends on the
size of the neighborhood. The smoothed shapes (c-f) are obtained
when neighborhood includes 5,10,15 and 20 percent of
the points, respectively.
g-j) The sampling of points on the original shape using the
smoothed shapes (c-f) respectively. Notice that fewer points
are selected on the "noisy" part of the contour.
Fig. 12. Selection at equal angles:
points are distributed along
the contour at regular angular
intervals around the centroid.
Fig. 13. An occluded shape with sampled
points selected at equal angles
about the center of symmetry
(marked by \Phi). The symmetry
distance obtained using these
points is smaller than the symmetry
distance obtained using points
selected at equal angles about the
centroid (marked by +).
a. b. c.
Fig. 14. Reconstruction of an occluded shape.
a) Original occluded shape, its centroid (+) and its center of
symmetry(\Phi).
b,c) The closest C 5 -symmetric shape using angular selection
about the centroid (b) and about the center of symmetry (c).
Selection about the centroid gives a featureless shape, while
selection about the center of symmetry yields a more meaningful
shape.
ter of selection. At each step we compare the symmetry
value obtained from points selected at equal angles about
the center of selection with the symmetry value obtained
by selection about points in the center of selection's immediate
neighborhood. That point about which selection
at equal angles gives minimum symmetry value, is set to
be the new center of selection. If the center of selection
does not change, the neighborhood size is decreased. The
process is terminated when neighborhood size reaches a
predefined minimum size. The center of selection at the
end of the process is taken as the center of symmetry.
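A sketch of this search (Python/NumPy), reusing select_equal_angles, symmetry_distance and cn_transform_interlaced from the earlier sketches. The neighbourhood is modelled as a small grid of candidate offsets whose spacing is halved when no neighbour improves the SD, and m is assumed to be a multiple of n.

```python
import numpy as np

def center_of_symmetry(contour, n, m, step=1.0, min_step=0.05):
    """Iteratively move the center of selection from the centroid towards the
    point minimizing the SD of points selected at equal angles about it."""
    pts = np.asarray(contour, dtype=float)
    center = pts.mean(axis=0)

    def sd_about(c):
        sel = select_equal_angles(pts, m, center=c)
        return symmetry_distance(sel, lambda p: cn_transform_interlaced(p, n))

    best = sd_about(center)
    while step > min_step:
        neighbours = [center + step * np.array([dx, dy])
                      for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
        values = [sd_about(c) for c in neighbours]
        if min(values) < best:
            best = min(values)
            center = neighbours[int(np.argmin(values))]
        else:
            step *= 0.5                 # shrink the neighbourhood
    return center, best
```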
In the case of occlusions (Figure 14), the closest symmetric
shape obtained by angular selection about the center of
symmetry is visually more similar to the original than that
obtained by angular selection about the centroid. We can
reconstruct the symmetric shape closest to the unoccluded
shape by obtaining the symmetry transform of the occluded
shape using angular selection about the center of symmetry
(see Figure 14c). In Figure 15 the center of symmetry
and the closest symmetric shapes were found for several
occluded flowers.
The process of reconstructing the occluded shape can be
improved by altering the method of evaluating the symmetry
of a set of points. As described in Section III, the
symmetry of a set of points is evaluated by folding, averaging
and unfolding about the centroid of the points. We
alter the method as follows:
1. The folding and unfolding (steps 1 and 3) will be
performed about the center of symmetry rather than
about the centroid of the points.
2. Rather than averaging the folded points (step 2), we
apply other robust clustering methods. In practice we
average over the folded points, drop the points farthest
from the average and then reaverage (see Figure 16).
Intuitively, the robust clustering causes the reconstruction
to be strongly influenced by contour points on the
unoccluded portion of the shape while eliminating the influence
of the points on the contour representing the truncated
or occluded portion of the shape.
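The robust clustering step can be sketched as a trimmed average of the folded points, used in place of the plain mean of step 2:

```python
import numpy as np

def robust_average(folded_points, drop=2):
    """Average folded points after discarding the `drop` points farthest from
    the initial mean (a simple trimmed-mean surrogate for robust clustering)."""
    pts = np.asarray(folded_points, dtype=float)
    mean = pts.mean(axis=0)
    dist = np.linalg.norm(pts - mean, axis=1)
    keep = np.argsort(dist)[: max(1, len(pts) - drop)]
    return pts[keep].mean(axis=0)
```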
The improvement in reconstruction of an occluded shape
is shown in Figure 17. This method improves both shape
and localization of the reconstruction. Assuming that the
original shape was symmetric, this method can reconstruct
Fig. 15. Reconstruction of Occluded objects.
a) A collection of occluded asymmetric flowers.
b) Contours of the occluded flowers were extracted manually.
c) The closest symmetric shapes and their center of symmetry.
d) The center of symmetry of the occluded flowers are marked
by '+'.
b.
a. c. d.
Fig. 16. Improving the averaging of folded points
a) An occluded shape with points selected using angular selection
about the center of symmetry (marked as \Phi).
b) A single set (orbit) of the selected points of a) is shown.
c) folding the points about the centroid (marked as +), points
are clustered sparsely.
d) folding the points about the center of symmetry, points are
clustered tightly. Eliminating the extremes (two farthest points)
and averaging results in a smaller averaging error and better re-construction
an occluded shape very accurately.
VI. Symmetry of Points with Uncertain
Locations
In most cases, sensing processes do not have absolute
accuracy and the location of each point in a sensed pattern
can be given only as a probability distribution. Given
sensed points with such uncertain locations, the following
properties are of interest:
• The most probable symmetric configuration represented
by the sensed points.
• The probability distribution of symmetry distance values
for the sensed points.
A. The Most Probable Symmetric Shape
Figure 18a shows a configuration of points whose locations
are given by a normal distribution function. The dot
represents the expected location of the point and the rectangle
represents the standard deviation marked as rectangles
having width and length proportional to the standard
deviation. In this section we briefly describe a method
of evaluating the most probable symmetric shape under
the Maximum Likelihood criterion [12] for a given set of
measured points (measurements). Detailed derivations and
proofs are given in [38]. For simplicity we describe the
method with respect to Cn -symmetry. The solution for
mirror symmetry or any other symmetry is similar (see
[38]).
Given n ordered points in 2D whose locations are given as normal probability
distributions, with expected locations $\{Q_i\}_{i=0}^{n-1}$ and associated covariance matrices,
we find the Cn-symmetric configuration of points at locations $\{\hat{P}_i\}_{i=0}^{n-1}$ which is optimal under the
Maximum Likelihood criterion [12].
Denote by ω the centroid of the most probable Cn-symmetric set of locations $\{\hat{P}_i\}_{i=0}^{n-1}$.
The point ω depends on the locations of the measurements and on the probability
distributions associated with them; ω is positioned at that point about which
the folding (described below) gives the tightest cluster of points with small
uncertainty (small s.t.d.). We assume for the moment that ω is given (a method for finding ω is derived
in [38]). We use a variant of the folding method which
was described in Section III for evaluating Cn -symmetry of
a set of points:
1. The n measurements are folded by
a. b. c.
Fig. 17. Improving the reconstruction
The original shape is shown as a dashed line and the reconstructed
shape as a solid line.
a) The closest symmetric shape using angular selection about
the centroid.
b) The closest symmetric shape using angular selection about
the center of symmetry.
c) The closest symmetric shape using angular selection about
the center of symmetry and robust clustering.
Fig. 18. Folding measured points.
a) A configuration of 6 measured points. The
dot represents the expected location of the point. The rectangle
represents the standard deviation marked as rectangles having
width and length proportional to the standard deviation.
Each measurement $Q_i$ was rotated by $2\pi i/6$ radians about
the centroid of the expected point locations (marked as '+')
obtaining measurement ~
Fig. 19. The most probable symmetric shape.
a) A configuration of 6 measured points.
b-e) The most probable symmetric shapes with respect to:
b) C2-symmetry.
c) C3-symmetry.
d) C6-symmetry.
e) mirror-symmetry.
rotating each measurement $Q_i$ by $2\pi i/n$ radians about the point ω. A new set of
measurements $\{\tilde{Q}_i\}_{i=0}^{n-1}$ is obtained (see Figure 18b).
2. The folded measurements are averaged using a weighted average, obtaining a single
point $\hat{P}_0$. Averaging is done by considering the n folded measurements $\{\tilde{Q}_i\}_{i=0}^{n-1}$ as
n measurements of a single point, with $\hat{P}_0$ the most probable location of that point under
the Maximum Likelihood criterion.
3. The "average" point $\hat{P}_0$ is unfolded as described in Section III, obtaining points
$\{\hat{P}_i\}_{i=0}^{n-1}$ which are perfectly Cn-symmetric.
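A sketch of the weighted averaging (Python/NumPy). Each measurement's covariance is rotated together with the point, and the maximum-likelihood estimate of the common point is the information-weighted mean; the exact treatment, including the choice of ω, is derived in [38], so the routine below is an illustration that assumes ω and the covariances are given.

```python
import numpy as np

def ml_folded_average(measurements, covariances, omega, n):
    """Fold n Gaussian measurements (mean, covariance) about omega for C_n
    symmetry and return the ML estimate of the folded point."""
    info_sum = np.zeros((2, 2))
    weighted_sum = np.zeros(2)
    for i, (q, cov) in enumerate(zip(measurements, covariances)):
        a = 2 * np.pi * i / n
        R = np.array([[np.cos(a), -np.sin(a)], [np.sin(a), np.cos(a)]])
        q_folded = R @ (np.asarray(q, float) - omega) + omega
        cov_folded = R @ cov @ R.T            # the covariance rotates with the point
        info = np.linalg.inv(cov_folded)      # information (inverse covariance)
        info_sum += info
        weighted_sum += info @ q_folded
    return np.linalg.solve(info_sum, weighted_sum)   # ML (weighted) average
```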
When we are given $m = qn$ measurements we find the most probable Cn-symmetric
configuration of points, similar to the folding method of Section III. The m measurements
$\{Q_i\}_{i=0}^{m-1}$ are divided into q interlaced sets of n measurements each, and the folding
method as described above is applied separately to each set of measurements. Derivations
and proof of this case are also found in [38].
Several examples are shown in Figure 19, where for a
given set of measurements, the most probable symmetric
shapes were found. Figure 20 shows an example of the effect of varying the probability
distribution of the measurements on the resulting symmetric shape.
a. b. c. d. e.
Fig. 20. The most probable C 3 -symmetric shape for a set
of measurements after varying the probability distribution
and expected locations of the measurements.
a-c) Changing the uncertainty (s.t.d.) of the measurements.
d-e) Changing both the uncertainty and the expected location
of the measurements.
B. The Probability Distribution of Symmetry Values
Figure 21a displays a Laue photograph [2] which is an interference pattern created by projecting X-ray beams onto
crystals. Crystal quality is determined by evaluating the
symmetry of the pattern. In this case the interesting feature
is not the closest symmetric configuration, but the
probability distribution of the symmetry distance values.
Consider the configuration of 2D measurements given in
Figure 18a. Each measurement $Q_i$ is a normal probability distribution. We assume the centroid of
the expectations of the measurements is at the origin. The probability distribution of the
symmetry distance values of the original measurements is equivalent to the probability
distribution of the location of the "average" point ($\hat{P}_0$) given the folded measurements
as obtained in Step 1 and Step 2 of the algorithm in Section VI-A. It is shown in [38] that
this probability distribution is a $\chi^2$ distribution of order $n-1$. However, we can approximate the distribution
by a gaussian distribution. Details of the derivation
are given in [38].
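The distribution of SD values can also be estimated empirically by Monte Carlo sampling of the measurement distributions (a sketch reusing symmetry_distance and cn_transform_interlaced from the earlier sketches; the number of measurements is assumed to be a multiple of n):

```python
import numpy as np

def sd_distribution(measurements, covariances, n, trials=2000, rng=None):
    """Sample point configurations from the measurement distributions and
    return the resulting Symmetry Distance values (here for C_n symmetry)."""
    rng = np.random.default_rng() if rng is None else rng
    values = []
    for _ in range(trials):
        sample = np.array([rng.multivariate_normal(q, cov)
                           for q, cov in zip(measurements, covariances)])
        values.append(symmetry_distance(sample,
                                        lambda p: cn_transform_interlaced(p, n)))
    return np.array(values)
```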
In Figure 21 we display distributions of the symmetry
distance value as obtained for the Laue photograph given
in Figure 21a. In this example we considered every dark
patch as a measured point with variance proportional to
the size of the patch. Thus in Figure 21b the rectangles
which are proportional in size to the corresponding dark
patches of Figure 21a, represent the standard deviation of
the locations of point measurements. Note that a different
analysis could be used in which the variance of the measurement
location is taken as inversely proportional to the
size of the dark patch.
In Figure 22 we display distributions of the symmetry
distance value for various measurements. As expected, the
distribution of symmetry distance values becomes broader
as the uncertainties (the variance of the distribution) of the
measurements increase.
Fig. 21. Probability distribution of symmetry values.
a) Interference pattern of crystals.
b) Probability distribution of point locations corresponding to a.
c) Probability distribution of symmetry distance values with respect to C10-symmetry, evaluated as described in the text.
Fig. 22. Probability distribution of the symmetry distance value as a function of the variance of the measured points.
a-d) Some examples of configurations of measured points.
Probability distributions of symmetry distance values with respect to C6-symmetry for the configurations a-d.
Fig. 23. Selecting points on the 3D
object (a 2D analog is shown).
For a possible reflection plane, the
plane perpendicular to it is sam-
pled. Elevations are recomputed on
the object relative to the sampling
plane.
VII. Application to Images
A. Finding Orientation of Symmetric 3D Objects
When dealing with images, we let pixel values denote
elevation, and consider an image as a 3D object on which
we can measure 3D symmetries. We applied the SD to find
orientation of symmetric 3D objects by finding their 3D
mirror-symmetry. The 3D shape is represented by a set of
3D points: for a possible reflection plane, the plane perpendicular
to it is sampled. Each sampled point is projected
onto the 3D object and its elevation is recomputed relative
to the sampling plane (Figure 23). The symmetry value for
3D mirror-symmetry is evaluated using the projected sampling
points. The final reflection plane of the 3D object
is determined by minimizing the symmetry value over all
possible reflection planes. In practice only feasible symmetry
planes were tested (i.e. planes which intersect the 3D
image) and a gradient descent algorithm was used to increase
efficiency of convergence to the minimum symmetry
value (the SD). Examples are shown in Figure 24, where a
symmetric 3D object is rotated into a frontal vertical view
after the reflection plane was found.
B. Using a Multiresolution Scheme
In many images the process of finding the reflection plane
did not converge to the correct solution, i.e. the process
converged to a local minimum due to the sensitivity of the
symmetry value to noise and digitization errors. To overcome
this problem we introduced a multiresolution scheme,
where an initial estimation of the symmetry transform is
obtained at low resolution (see Figure 25) and is fine tuned
using high resolution images. The solution obtained for
the low resolution image is used as an initial guess of the
solution for the high resolution image. The low resolution
images were obtained by creating gaussian pyramids [11].
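A coarse-to-fine loop of this kind can be sketched as follows (Python/NumPy). The function estimate_reflection_plane stands for the gradient-descent search described above; it is an assumed callable, not something defined in the paper.

```python
import numpy as np

def gaussian_pyramid(image, levels=3):
    """Build a Gaussian pyramid by repeated separable blur and subsampling."""
    kernel = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
    pyramid = [np.asarray(image, dtype=float)]
    for _ in range(levels - 1):
        img = pyramid[-1]
        for axis in (0, 1):
            img = np.apply_along_axis(
                lambda m: np.convolve(m, kernel, mode="same"), axis, img)
        pyramid.append(img[::2, ::2])          # subsample by two
    return pyramid

def coarse_to_fine_plane(image, estimate_reflection_plane, levels=3):
    """Estimate the reflection plane at the coarsest level, then refine it at
    each finer level using the previous estimate as the initial guess."""
    guess = None
    for img in reversed(gaussian_pyramid(image, levels)):
        guess = estimate_reflection_plane(img, initial_guess=guess)
    return guess
```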
C. Finding Locally Symmetric Regions
Most images cannot be assumed to have a single global
symmetry, but contain a collection of symmetric and almost
symmetric patterns (be they objects or background).
We aim to locate these local symmetries and segment the
symmetrical objects from the background. The following
three staged process is used to extract locally symmetric
regions in images.
1. The first stage locates symmetry focals - those points
about which the image is locally symmetric. Several
methods can be used to find the symmetry focals ([29],
for example). We used a variant of the multiresolution
method presented in [36] for sampling and transmitting
an image, which uses a simple model of the human
visual attention mechanism.
We used the Quad Tree [30] structure which is a hierarchical
representation of an image, based on recursive
subdivisions of the image array into quadrants. The
process of locating symmetry focals builds a sequence
of quad trees and a sequence of corresponding divisions
of the image into quadrants. The process is initialized
by setting the current quad tree to a single root node
(corresponding to the whole image). At each step all
quadrants of the image corresponding to the leaves of
the quad tree are tested for a given "interest" func-
a. b.
c. d.
Fig. 24. Applying the SD with respect to 3D mirror-symmetry
in order to find orientation of a 3D object.
a,c) original depth maps.
b,d) The symmetry reflection plane has been found and the
image rotated to a frontal vertical view.
tion. That node of the quad tree that corresponds to
the quadrant with highest "interest" value is selected.
The current quad tree is expanded by creating the son
nodes of the selected node and accordingly, subdividing
the image quadrant associated with the selected
node. Several steps of this procedure are shown in Figure
26. In our case the "interest" function was chosen
as to be a function of the Symmetry Distance value
obtained for the image quadrant. Thus the procedure
described focuses onto regions of high symmetry content
and finds symmetry focals. In practice, the "in-
terest" function also took into account the busyness of
the image quadrant, thus regions that had low busyness
values (i.e. the grey-scale values were almost con-
stant) gave low "interest" values although they were
highly symmetric (and gave low Symmetry Distance
values). In Figure 27b, a mirror-symmetry focal and
an associated reflection plane (passing through the fo-
cal) were found.
2. Given a symmetry focal and a reflection plane, a symmetry
map of the image is created as follows: the original
image is sampled at points which are pairwise symmetric
with respect to the given reflection plane. The
symmetry distance value obtained using the folding
method described in Section III for each pair of sam-
a. b.
c. d.
Fig. 25. Using multiresolution to find symmetry.
The grey-level image is treated as a depth map and 3D mirror-
symmetry is computed. The computed symmetry plane is used
to bring the image into a frontal vertical view.
a) Original image.
b) Applying the mirror-symmetry transform on (a) does not find
correct reflection plane.
c) A low resolution image obtainined by convolving (a) with a
gaussian.
d) Applying the mirror-symmetry transform on the low resolution
image (c) gives a good estimation of the reflection plane
and face orientation.
e) Using the reflection plane found in (d) as an initial guess, the
process now converges to the correct symmetry plane.
pled points is recorded and marked in the symmetry
map at the location corresponding to the coordinates
of the sampled points. Thus the symmetry map displays
the "amount" of mirror-symmetry at every point
(with respect to the given reflection plane) where low
grey values denote low SD values (i.e. high symmetry
content) and high grey values denote high SD values
(i.e. low symmetry content) (Figure 27c).
3. Starting from the symmetry focals, regions are expanded
using "active contours" [19] to include compact
symmetric regions. The expansion is guided by
the symmetry map and continues as long as the pixels
included in the locally symmetric region do not
degrade the symmetry of the region more than a pre-defined
threshold (Figure 27d).
The process can be continued to extract several locally
symmetric regions, as shown in the example of Figure 28.
VIII. Conclusion
We view symmetry as a continuous feature and define
a Symmetry Distance (SD) of shapes. The general definition
of the Symmetry Distance enables a comparison
of the "amount" of symmetry of different shapes and the
"amount" of different symmetries of a single shape. Fur-
thermore, the SD is associated with the symmetric shape
which is 'closest' to the given one, enabling visual evaluation
of the SD. Several applications were described including
reconstruction of occluded shapes, finding face orientation
and finding locally symmetric regions in images. We
also described how we deal with uncertain data, i.e. with a
configuration of measurements representing the probability
distribution of point location. The methods described
here can be easily extended to higher dimensions and to
more complex symmetry groups. Further extensions will
deal with other symmetry classes such as planar symmetry
(including translatory symmetry and fractals). Additional
work has been done on evaluating reflective symme-
Fig. 26. Several steps in the process of finding symmetry focals.
A sequence of quad trees (bottom row) and the corresponding
recursive division of the image (top row) is created. The process
is initialized by creating a quad tree with a single root node
corresponding to the whole image (left). At each step all quadrants
of the image corresponding to the leaves of the quad tree
are tested for the "interest" function. The leaf of the quad tree
that corresponds to the quadrant with highest value (marked
in grey) is expanded to create the quad tree of the next step.
Three additional steps are shown.
a. b.
c. d.
Fig. 27. Applying the Multiresolution scheme to detect a
mirror-symmetric region
a) Original image.
b) A mirror-symmetry focal was found.
c) Symmetry map of image for the symmetry focal found in b).
d) Extracted locally symmetric region.
try (and chirality) of graph structures ([35]). The methods
described here have also been extended to deal with skewed
and projected mirror symmetries [39].
Appendix
I. Mathematical Proof of the Folding Method
Given a finite point symmetry group G, we assume without
loss of generality that G is centered at the origin (i.e.
every element of G leaves the origin fixed). Given an ordering
of the n elements of the G-symmetry group $\{g_i\}_{i=0}^{n-1}$, and given n general points $\{P_i\}_{i=0}^{n-1}$,
we find n points $\{\hat{P}_i\}_{i=0}^{n-1}$, a rotation matrix R and a translation vector ω such that the
points $\hat{P}_i$, translated by ω and rotated by R, form an ordered orbit
under G and bring the following expression to a minimum:
Since G has a fixed point at the origin and the centroid of
the orbit formed by the rotated and translated -
P i is a fixed
point under G, we can assume without loss of generality
that the translation vector w is the centroid of points -
The points -
translated by ! and rotated by R,
form an orbit of G, thus the following must be satisfied:
a. b.
c. d.
Fig. 28. Applying the Multiresolution scheme to find multiple
mirror-symmetric regions.
a) Original image.
b) The mirror-symmetry focals found.
c) Symmetry maps for each mirror-symmetry are merged into a single
image.
d) Extracted locally symmetric regions.
Using Lagrange multipliers with Equations 1-3 we minimize
the following:
are the Lagrange multipliers.
Setting the derivatives equal to zero we obtain:
and using the last constraint (Eq 2) we obtain:
i.e. the centroid of P coincides with the centroid
of -
(in terms of the symmetry distance
defined in Section I, the centroid of a configuration and the
centroid of the closest symmetric configuration is the same
for any point symmetry group G).
Noting that R t g i R for are isometries and
distance preserving, we have from the derivatives:
Expanding using the constraints we obtain:
or
The derivation of the rotation matrix R is given in the
next section, however, given R, the geometric interpretation
of Equation 6 is the folding method, as described in
Section III, proving that the folding method results in the
G-symmetric set of points closest to the given set.
The common case, however, is that shapes have more
points than the order of the symmetry. For symmetry of order
n, the folding method can be extended to shapes having
a number of points which is a multiple of n. Given
points (i.e. q sets of n points) fP j
we follow the above derivation and obtain a result similar
to that given in Equation 6. For each set of n points, i.e.
i is the centroid of all m
points. The geometric interpretation of Equation 7 is the
folding method, as described in Section III for the case of
a shape represented by
The folding method for the cases where the number of
points is not a multiple of the number of elements in G is
not derived here. Details of this case can be found in [37].
II. Finding the Optimal Orientation in 2D
The problem of finding the minimizing orientation is irrelevant
for the Cn symmetry groups since every element g
of these groups is a rotation and R t g. In the case of
Cn -symmetry groups, R is usually taken as I (the identity
matrix). We derive here a solution for the orientation in
the case where G is a Dn symmetry group.
The 2n elements of the Dn-symmetry group can be
described as the n elements
n (where R i
n is the rotation of 2-i=n
radians about the origin) and the n elements obtained by
applying a reflection R f on each of these elements:
n .
We denote the orientation of the symmetry group as the
angle ' between the reflection axis and the y axis. Thus
sin ' cos '
and the reflection operation R f is given by:
sin ' cos '
'' cos ' sin '
sin ' cos '
sin 2' cos 2'
Without loss of generality, we assume the centroid w is at
the origin Following Appendix I.B, we minimize
Equation 1 over '. Using Equation 3 and noting that R t g i R
are distance preserving, we minimize the
following over ':
Substituting Equation 6 we minimize:
Rearranging and noting that R t g t
we minimize:
\Gamman
Denote by x the coordinates of the point g t
i\Gamman P i for Taking
the derivative of Equation 8 with respect to ' we obtain:
tan
which is an analytic solution for the case of optimal orientation
in 2D. In higher dimensions, however, no analytic
solution was found and a minimization procedure is used
(except for the mirror symmetry group in 3D where a closed
form solution is given - see [35]).
III. Dividing Points of a Shape into sets
As described in Section III, when measuring Cn -
symmetry of a shape represented by a multiple of n points,
the points must be divided into sets of n points. In gen-
eral, this problem is exponential, however when the points
are ordered along a contour, as in our case, the possible
divisions into sets are more restricted since the ordering is
preserved under the symmetry transform of a shape. For
example, points in 2D along the contour of a Cn -symmetric
shape form orbits which are interlaced. An example is
shown in Figure 29a for C 3 -symmetry, where 3 interlaced
orbits are shown marked as ffl, ffi and 2. Thus, given a
set of ordered points there is only one possible
division of the points into q sets of n points such that
the ordering is preserved in the symmetric shape - the q
sets must be interlaced (as was shown in Figure 7). In
the case of Dn-symmetry (rotational and reflective symmetry
of order n) the
orbits which are interlaced and partially inverted to account
for the reflection symmetry. An example is shown in
Figure
29b for D 4 -symmetry where instead of 3 interlaced
every other run is inverted:
ffl. Thus, given a set of
ordered points there are possible division of the
points.
a b.
Fig. 29. Dividing m selected
points into interlaced
sets: a) Cn -
ity. b) Dn-symmetry -
m=2n possibilities
--R
Congruence, similarity and symmetries of geometric objects.
On symmetry detection.
Symmetry information and memory for patterns.
Quantifying the degree of molecular shape deformation.
Recognition of local symmetries in gray value images by harmonic functions.
Grasping visual symmetry.
Shape description using weighted symmetric axis features.
Texture discrimination by local generalized symmetry.
Smoothed local symmetries and their implementation.
The Laplacian pyramid as a compact image code.
Probability and Statistics.
Chiral coefficient - a measure of the amount of structural chirality
Analyzing skewed symmetry.
Measures of symmetry for convex sets.
Characterization of right handed and left handed shapes.
Visual pattern recognition by moment invariants.
Recovery of the three-dimensional shape of an object from a single view
Snakes: active contour models.
Application of the karhunen-loeve procedure for the characterization of human faces
MIT press
On the detection of the axes of symmetry of symmetric and almost symmetric planar images.
Symmetry Groups and their Applications.
A theory of multiscale
Model based matching using skewed symmetry information.
On characterizing ribbons and finding skewed sym- metries
Robust detection of facial features by generalized symmetry.
The quadtree and related hierarchical data structures.
Symmetry seeking models and object reconstruction.
Princeton Univ.
Symmetropy, an entropy-like measure of visual symmetry
Computational Aspects of Pattern Characterization - Continuous Symmetry
Continuous symmetry measures
Attentive transmission.
Continuous symmetry measures II: Symmetry groups and the tetrahedron.
Symmetry of fuzzy data.
3D symmetry from 2D data.
--TR
--CTR
Dinggang Shen , Horace H. S. Ip , Kent K. T. Cheung , Eam Khwang Teoh, Symmetry Detection by Generalized Complex (GC) Moments: A Close-Form Solution, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.5, p.466-476, May 1999
P. J. Sanz , J. M. Iesta , A. P. Del Pobil, Planar Grasping Characterization Based on Curvature-Symmetry Fusion, Applied Intelligence, v.10 n.1, p.25-36, January 1999
Aurlien Martinet , Cyril Soler , Nicolas Holzschuch , Franois X. Sillion, Accurate detection of symmetries in 3D shapes, ACM Transactions on Graphics (TOG), v.25 n.2, p.439-464, April 2006
S. J. Tate , G. E. M. Jared , K. G. Swift, Detection of symmetry and primary axes in support of proactive design for assembly, Proceedings of the fifth ACM symposium on Solid modeling and applications, p.151-158, June 08-11, 1999, Ann Arbor, Michigan, United States
Alexander V. Tuzikov , Stanislav A. Sheynin, Symmetry Measure Computation for Convex Polyhedra, Journal of Mathematical Imaging and Vision, v.16 n.1, p.41-56, January 2002
M. Li , F. C. Langbein , R. R. Martin, Detecting approximate incomplete symmetries in discrete point sets, Proceedings of the 2007 ACM symposium on Solid and physical modeling, June 04-06, 2007, Beijing, China
Kuo-Liang Chung , Jhin-Sian Lin, Faster and more robust point symmetry-based K-means algorithm, Pattern Recognition, v.40 n.2, p.410-422, February, 2007
Mu-Chun Su , Chien-Hsing Chou, A Modified Version of the K-Means Algorithm with a Distance Based on Cluster Symmetry, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.674-680, June 2001
Michael Kazhdan , Thomas Funkhouser , Szymon Rusinkiewicz, Symmetry descriptors and 3D shape matching, Proceedings of the 2004 Eurographics/ACM SIGGRAPH symposium on Geometry processing, July 08-10, 2004, Nice, France
Allen Y. Yang , Kun Huang , Shankar Rao , Wei Hong , Yi Ma, Symmetry-based 3-D reconstruction from perspective images, Computer Vision and Image Understanding, v.99 n.2, p.210-240, August 2005
H. L. Zou , Y. T. Lee, Skewed mirror symmetry detection from a 2D sketch of a 3D model, Proceedings of the 3rd international conference on Computer graphics and interactive techniques in Australasia and South East Asia, November 29-December 02, 2005, Dunedin, New Zealand
Alexander V. Tuzikov , Olivier Colliot , Isabelle Bloch, Evaluation of the symmetry plane in 3D MR brain images, Pattern Recognition Letters, v.24 n.14, p.2219-2233, October
Bertrand Zavidovique , Vito Di Ges, The S-kernel: A measure of symmetry of objects, Pattern Recognition, v.40 n.3, p.839-852, March, 2007
Niloy J. Mitra , Leonidas J. Guibas , Mark Pauly, Partial and approximate symmetry detection for 3D geometry, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
David Milner , Shmuel Raz , Hagit Hel-Or , Daniel Keren , Eviatar Nevo, A new measure of symmetry and its application to classification of bifurcating structures, Pattern Recognition, v.40 n.8, p.2237-2250, August, 2007
Hanzi Wang , David Suter, Using symmetry in robust model fitting, Pattern Recognition Letters, v.24 n.16, p.2953-2966, December
Joshua Podolak , Philip Shilane , Aleksey Golovinskiy , Szymon Rusinkiewicz , Thomas Funkhouser, A planar-reflective symmetry transform for 3D shapes, ACM Transactions on Graphics (TOG), v.25 n.3, July 2006
Nahum Kiryati , Yossi Gofman, Detecting Symmetry in Grey Level Images: The Global Optimization Approach, International Journal of Computer Vision, v.29 n.1, p.29-45, Aug. 1998
Wei Hong , Allen Yang Yang , Kun Huang , Yi Ma, On Symmetry and Multiple-View Geometry: Structure, Pose, and Calibration from a Single Image, International Journal of Computer Vision, v.60 n.3, p.241-265, December 2004
Computational Model for Periodic Pattern Perception Based on Frieze and Wallpaper Groups, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.3, p.354-371, March 2004
Kenichi Kanatani, Geometric Information Criterion for Model Selection, International Journal of Computer Vision, v.26 n.3, p.171-189, Feb./March 1998
Focus-of-Attention from Local Color Symmetries, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.7, p.817-830, July 2004
Toby P. Breckon , Robert B. Fisher, Amodal volume completion: 3D visual completion, Computer Vision and Image Understanding, v.99 n.3, p.499-526, September 2005 | local symmetry;fuzzy shapes;face orientation;occlusion;similarity measure;symmetry distance;symmetry |
628768 | Thermophysical Algebraic Invariants from Infrared Imagery for Object Recognition. | AbstractAn important issue in developing a model-based vision system is the specification of features that are invariant to viewing and scene conditions and also specific, i.e., the feature must have different values for different classes of objects. We formulate a new approach for establishing invariant features. Our approach is unique in the field since it considers not just surface reflection and surface geometry in the specification of invariant features, but it also takes into account internal object composition and state which affect images sensed in the nonvisible spectrum. A new type of invariance called Thermophysical Invariance is defined. Features are defined such that they are functions of only the thermophysical properties of the imaged objects. The approach is based on a physics-based model that is derived from the principle of the conservation of energy applied at the surface of the imaged object. | Introduction
Non-visible modalities of sensing have been shown to greatly increase the amount of information
that can be used for object recognition. A very popular and increasingly affordable sensor
modality is thermal imaging - where non-visible radiation is sensed in the long-wave infrared
(LWIR) spectrum of 8-m to 14-m. The current generation of LWIR sensors produce images
of contrast and resolution that compare favorably with broadcast television quality visible light
imagery. However, the images are no longer functions of only surface reflectance. As the
wavelength of the sensor transducer passband increases, emissive effects begin to emerge as
the dominant mode of electromagnetic energy exitance from object surfaces. The (primarily)
emitted radiosity of LWIR energy has a strong dependence on internal composition, properties,
and state of the object such as specific heat, density, volume, heat generation rate of internal
sources, etc. This dependence may be exploited by specifying image-derived invariants that
vary only if these parameters of the physical properties vary.
In this paper we describe the use of the principle of conservation of energy at the surface
of the imaged object to specify a functional relationship between the object's thermophysical
properties (e.g., thermal conductivity, thermal capacitance, emissivity, etc.), scene parameters
(e.g., wind temperature, wind speed, solar insolation), and the sensed LWIR image gray level.
We use this functional form to derive invariant features that remain constant despite changes in
scene parameters/driving conditions. In this formulation the internal thermophysical properties
play a role that is analogous to the role of parameters of the conics, lines and/or points that are
used for specifying geometric invariants (GI's) when analyzing visible wavelength imagery [1].
Thus, in addition to the currently available techniques of formulating features that depend only
on external shape [2] - [10], and surface reflectance properties [11] - [16], the phenomenology of
image generation can be used to establish new features that "uncover" the composition
and thermal state of the object, and which do not depend on surface reflectance characteristics.
An intuitive approach to thermo-physical interpretation of LWIR imagery is given in [17].
This approach rests upon the following observation, termed the "Thermal History Consistency
Constraint" and analogous to Lowe's well known Viewpoint Consistency Constraint [18]: "The
temperature of all target features for a passive target must be consistent with the heat flux
transfer resulting from exposure to the same thermal history." In [17] this constraint is exploited
by analyzing objects to locate components that are similar in terms of thermo-physical
properties and then examining a temporal sequence of calibrated LWIR data to experimentally
assess the degree to which such thermo-physically similar components exhibit similar temperature
state temporal behavior. Such analysis was shown to lead to formulation of simple intensity
ratio features exhibiting a strong degree of temporal stability that could be exploited provided:
(1) thermally homogeneous regions in the LWIR image corresponding to the thermo-physically
similar object components could be reliably segmented, and (2) a target-specific geometric reference
frame is available in order to correctly associate extracted regions with candidate object
components.
To avoid the difficulties inherent in assumptions (1) and (2) above an alternative technique
applicable to overall object signatures was suggested in [17]. An analysis of typical LWIR
'lumped parameter' object temperature modeling approaches suggests that over small time
scales object temperature can be crudely modeled by a small dimensional linear system with
algebraically separable spatial and temporal components. Ratios of spatial integrals of temperature
with a simple set of orthonormal 2D polynomials (obtained from applying Gramm-Schmidt
to of the resulting functions were nearly constant
with time when measured against 24 hours of LWIR imagery of a complex object (a tank)
but no experimentation was done with multiple objects to examine between and within-class
separation, so little can be drawn in the way of a substantive conclusion with respect to utility
as an object identification technique.
A physics-based approach that attempts to establish invariant features which depend only
on thermophysical object properties was reported in [19], [20]. A thermophysical model was
formulated to allow integrated analysis of thermal and visual imagery of outdoor scenes. This
method used estimations of the energy flux into and out of the surface of the object. The surface
orientation and absorptivity were obtained from the visual image using a simplified shape-from-
shading method. The surface temperature was estimated from the thermal image based on an
appropriate model of radiation energy exchange between the surface and the infrared camera.
A normalized feature, R, was defined to be the ratio of energy fluxes estimated at each pixel.
The value of R was lowest for vehicles, highest for vegetation and in between for buildings and
pavements. This approach is powerful in that it makes available features that are completely
defined by internal object properties. The computed value of R may be compared with accurate
ground truth values computed from known physical properties of test objects - one of the major
advantages of using physics-based/phenomenological models as compared to statistical models.
Classification of objects using this property value is discussed in [21], [22].
There are several factors that limit the performance of the above thermophysical approach.
The thermal and visual image pairs may not be perfectly registered. Also, segmentation errors
typically cause a large portion of an object to be included with small portions of a different
object in one region. This results in meaningless values of the surface energy estimates at/near
the region boundaries. These errors give rise to a significant number of inaccurate estimates
of the surface energy exchange components. The histogram of values of the ratio of energy
fluxes tends to be heavy-tailed and skewed. A statistically robust scheme has been proposed to
minimize this drawback [22]. However, the computational complexity for such a technique is
very high, and the following drawbacks below were not adequately overcome: (1) The value of
R was found to be only weakly invariant - while separation between classes was preserved, the
range of values of this feature for each class was observed to vary with time of day and season of
year, (2) The feature was able to separate very broad categories of objects, such as automobiles,
buildings, and vegetation - but lacked the specificity to differentiate between different models of
vehicles. Although the thermophysical feature, R, is limited in its application for recognition,
the energy exchange model on which it is based forms the groundwork for the derivation of
more powerful thermophysical invariant features. That approach is extended in this paper by
applying the concepts of algebraic invariance to the energy exchange model resulting in new
thermophysical invariant features for object recognition.
The derivation of thermophysical invariants (TI's) from non-visible wavelength imagery, the
evaluation of the performance of these invariants, and their use in object recognition systems
poses several advantages. The main advantage of this approach is the potential availability of
a number of new (functionally independent) invariants that depend on internal compositional
properties of the imaged objects. Note that it is possible to evaluate the behavior of thermophysical
invariants using ground truth data consisting of images of objects of known composition
and internal state. This additional information can be used to augment/complement the behavior
of GI's. One way in which GI's can be integrated with TI's for object recognition is as
follows: (1) Parametric curves and/or lines are extracted from an LWIR image. (2) The curves
are used to compute GI's which are in turn used to hypothesize object identity and pose, and
(3) TI's are computed for this hypothesis and compared to a stored model library for verifica-
tion. Some details of this approach are presented later. Although the TI's are used solely for
the verification of hypothesis generated by other means, this task is of primary importance in
a number applications such as site change detection, monitoring, and surveillance.
The ideas presented in this paper are continuations/extensions of previous and ongoing
research in thermophysical model-based interpretation of LWIR imagery. A brief description
of this thermophysical approach is presented in section 2. The formulation of a new method to
T , a
cv
Air T amb
W cnd
st
Figure
1: Energy exchange at the surface of the imaged object. Incident energy is primarily
in the visible spectrum. Surfaces loses energy by convection to air, and via radiation to the
atmosphere. An elemental volume at the surface is shown. Some of the absorbed energy raises
the energy stored in the elemental volume, while another portion is conducted into the interior
of the object.
derive thermophysical invariants is described in section 3. Section 4 describes a context based
approach that proposes the thermophysical feature in a hierarchical framework. Experimental
results of applying this new approach to real imagery are presented in section 5, which is
followed by a discussion of the behavior of the new method, issues to be considered in using
this method for object recognition, and issues that remain to be explored.
Thermophysical Approach to IR Image Analysis
Consider an infinitesimal volume at the surface of the imaged object (figure 1). Energy absorbed
by the surface equals the energy lost to the environment.
lost (1)
Energy absorbed by the surface (per unit surface area) is given by
where, W I is the incident solar irradiation on a horizontal surface per unit area and is given by
available empirical models (based on time, date and latitude of the scene) or by measurement
with a pyranometer, ' i is the angle between the direction of irradiation and the surface normal,
and ff s is the surface absorptivity which is related to the visual reflectance ae s by ff
Note that it is reasonable to use the visual reflectance to estimate the energy absorbed by the
surface since approximately 90% of the energy in solar irradiation lies in the visible wavelengths
[23].
The energy lost by the surface to the environment was given by
lost st
where, W cv denotes the energy (per unit surface area) convected from the surface to the air
which has temperature T amb and velocity V , W rad is the energy (per unit surface area) lost
by the surface to the environment via radiation and W cnd denotes the energy (per unit surface
area) conducted from the surface into the interior of the object. The radiation energy loss is
computed
amb
where, oe denotes the Stefan-Boltzman constant, T s is the surface temperature of the imaged
object, and T amb is the ambient temperature. Assume ffl for the atmosphere is equal to ffl for the
imaged object. This assumption is reasonable if the objects are not uncoated metals [24]. This
assumption may not hold if the imaged surface is exposed or unoxidized metal which is usually
rare.
The convected energy transfer is given by
where, h is the average convected heat transfer coefficient for the imaged surface, which depends
on the wind speed, thermophysical properties of the air, and surface geometry [23].
abs
st
R cv
R rad
W rad W cv
W cnd
Figure
2: The equivalent thermal circuit for the extended model that separates the stored
energy component and the conduction component to the interior of the object.
The equivalent thermal circuit for the surface is shown in figure 2. Lateral conduction from
the elemental volume at the surface is assumed negligible since the temperature of the material
adjacent to the surface volume under consideration may be assumed to be similar. In general,
the internal temperature of the material will be different from that at the surface. The energy
flow due to this gradient is expressed as the conducted energy, W cnd = \Gammak dT=dx, where k is
the thermal conductivity of the material, and x is distance below the surface. Since we are
considering an elemental volume at the surface this can be written in finite difference form:
\Deltax
, for infinitesimal \Deltax. W cnd is also expressed in units of energy flowing
across unit area.
Within the elemental volume, the temperature is assumed uniform, and the increase in the
stored energy given by W st
dTs
dt
, where C T is the thermal capacitance for the material of
the elemental surface volume. This is given by C is the density of the surface
material, V is the elemental volume, and c is the specific heat. Again, W st is expressed in units
of energy per unit surface area. The equivalent circuit (shown in figure 2) have resistances given
by:
and R
3 Thermophysical Algebraic Invariants, TAI's
The energy balance equation, W abs st +W cnd may be rewritten in the following
linear
a
Using the expressions for the various energy components as presented in the previous section
we can express each term in the above expression as:
dt
dx
a
a
a
Note that a calibrated LWIR image provides radiometric temperature. However, this requires
knowledge of the emissivity, ffl, of the surface. For common outdoor materials, and
common paints and surface coatings, the value of ffl is around 0.9 [25],[26]. Therefore, the radiometric
temperature, T s , may be computed based on the assumption, 0:9. Hence a 3 and
a 4 can be computed from the LWIR image alone (and knowledge of the ambient temperature),
while a 1 , a 2 and a 5 are known when the identity and pose of the object is hypothesized. The
"driving conditions", or unknown scene parameters that can change from scene to scene are
given by the x 5. Thus each pixel in the thermal image eqn (8) defines a point in
5-D thermophysical space.
Consider two different LWIR images of a scene obtained under different scene conditions and
from different viewpoints. For a given object, N points are selected such that (a) the points are
visible in both views, and (b) each point lies on a different component of the object which differs
in material composition and/or surface orientation. Assume (for the nonce) that the object pose
for each view, and point correspondence between the two views are available (or hypothesized).
A point in each view yields a measurement vector
defined by eqn (8) and a corresponding driving conditions vector -
a collection of N of these vectors compose a 5 \Theta N matrix, for the first
scene/image. These same points in the second scene will define vectors that compose a 5 \Theta N
). The driving condition matrix,
, from the
first scene and X
from the second scene, are each of size N \Theta 5.
Since the N points are selected to be on different material types and/or different surface
orientations, the thermophysical diversity causes the N vectors -a
also the vectors, -a 0
N . Without loss of generality, assume that five vectors -a
span ! 5 and also that -a 0
These five points in ! 5 specify the 5 \Theta 5 measurement
matrices, in the first scene and A
5 ), in the second scene.
The point selection process here is analogous to the selection of characteristic 3D points in
the construction of geometric invariants. Since A and A 0 are of full rank, there exists a linear
transformation
Since from (7),
we have,
Thus each driving condition vector also undergoes a linear transformation.
Consider the measurement vector -a of a point as defined in (8). From one scene to the next
we expect the two object properties - thermal capacitance, C T , and conductance, k - to remain
constant. Thus the transformation we need to consider is seen to be a subgroup of GL(5) which
has the form M
The transformation of a measurement vector from one scene to the next is given
a 2
a 3
a 4
a
The first two elements, the thermal capacitance and the thermal conductance, are held constant
and the other scene dependent elements are allowed to change. In general, we have
where A and A 0 are the 5 \Theta 5 matrices derived from the two scenes and for the chosen points.
Now that the transformation of the point configuration has been established in 5D thermophysical
space, we ask the question - what function of the coefficients, a i;j ; is invariant to
transformations of the form - equation (12)? In answer to this question we first determine the
expected number of invariant relationships. The transformation group represented by M has
parameters. The measurement matrix A has 25 degrees of freedom. The counting argument
described by Mundy [1] and others is that the number of invariant relationships is equal to "the
degrees of freedom of the configuration" minus "the number of transformation parameters".
Here a count yields 10 invariant relationships for a configuration of five points. However, the
counting argument also shows us that it is unnecessary to use all five points. Each point has
five degrees of freedom. In order for invariant relationships to exist, a minimum of four points
can be used. Using four points the counting argument gives
relations. Note that this further simplifies the point selection since we now require only four
points in the two views. The measurement vectors in each view being required to span ! 4 .
Algebraic elimination of the transformation parameters using four copies of the linear form
(7) subjected to the transformation (12) provides us with the invariants. This elimination may
be performed by using recently reported symbolic techniques [27].
The five invariant functions derived by this elimination process can be divided into two
types. Each is a ratio of determinants. Indeed, it is well known that absolute invariants
of linear forms are always ratios of powers of determinants [28]. The first type of invariant
function is a determinant formed from components of three of the four vectors.
a 1;3 a 2;3 a 4;3
a 1;4 a 2;4 a 4;4
a 1;5 a 2;5 a 4;5
a 2;3 a 3;3 a 4;3
a 2;4 a 3;4 a 4;4
a 2;5 a 3;5 a 4;5
where a i;j is a jth component of the ith vector (ith point).
The second type is formed from components of all four vectors.
a 1;1 a 2;1 a 3;1 a 4;1
a 1;2 a 2;2 a 3;2 a 4;2
a 1;4 a 2;4 a 3;4 a 4;4
a 1;5 a 2;5 a 3;5 a 4;5
a 1;1 a 2;1 a 3;1 a 4;1
a 1;3 a 2;3 a 3;3 a 4;3
a 1;4 a 2;4 a 3;4 a 4;4
a 1;5 a 2;5 a 3;5 a 4;5
where a i;j is a jth component of the ith vector (ith point). Since the 4 measurement vectors
we can assume without loss of generality that the denominator determinants in (14)
and (15) are non-zero. The number of independent functions that can be formed with four
points must add up to the expected number from the counting argument. The first type hasB B @3C C independent functions given four points and second type has one. The counting
argument is thus satisfied.
The derivation of thermophysical algebraic invariant features, as described above, relies on
a number of assumptions. These assumptions are summarized. (1) Four points are chosen
such that measurement vectors (in each scene) are linearly independent, i.e. one (or more) of
the four points have different material properties and/or surface normal. (2) The four points
are related by a linear transformation of the form (12) from one scene to another. (3) The
points are corresponded. (4) Object identity is hypothesized (which is verified or reputed by
the feature value). (5) The thermal capacitance and conductance of the object (surface) do
not change while other other scene variable are allowed to vary from one scene to another. (6)
Emissivity of the imaged surface in the 8-m \Gamma 12-m band is 0.9. The first two assumptions go
hand in hand, the vectors are linearly independent by proper choice of the points, i.e. model
formation. Most objects have sufficient diversity in surface material types to easily satisfy this
requirement. Given such proper model formation, the existence of linear transformation M is
trivially ensured. The point correspondence and the hypothesis assumptions are satisfied in
the method of application of the features, described in section 3.1. Assumption (6) is satisfied
except for bare metal surfaces and esoteric low emissivity coatings.
In order for the invariant feature to be useful for object recognition not only must the
values of the feature, be invariant to scene conditions but the value must be different if the
measurement vector is obtained from a scene that does not contain the hypothesized object,
and/or if the hypothesized pose is incorrect. Since the formulation above takes into account only
feature invariance but not separability, a search for the best set of points that both identifies
the object and separates the classes must be conducted over a given set of points identified on
the object. The search may be conducted over all the combination of the points in a set or
until an acceptable feature is found. We have examined all combinations, first rating each set
for their intra-class invariance, then further evaluating it for inter-class separability. Results on
real imagery are described in section 5.
3.1 Employing TAI's for Object Recognition
The feature computation scheme formulated above is suitable for use in an object recognition
system that employs a hypothesize-and-verify strategy. The scheme would consist of the following
steps: (1) extract geometric features, e.g., lines and conics, (2) for image region, r,
hypothesize object class, k, and pose using, for example, geometric invariants as proposed by
Forsyth, et al [2], (3) use the model of object k and project visible points labeled
onto image region r using scaled orthographic projection, (4) for point labeled i in the image
region, assign thermophysical properties of point labeled i in the model of object k, (5) using
gray levels at each point and the assigned thermophysical properties, compute the measurement
matrices A and A 0 , and hence compute the feature f k (r) using equation (14) or equation (15),
and finally, (6) compare feature f k (r) with model prototype -
f k to verify the hypothesis.
For example, consider two class of vehicles - a van and a car as shown in figure 3. Here,
the correct hypotheses (models) are shown in the top row. The bottom row indicate incorrect
hypotheses with model points being assigned to the image regions. The front and rear wheels are
detected and used to establish an object centered reference frame in the image to be interpreted.
The coordinates of the selected points are expressed in terms of this 2D object centered frame.
Thus, when a van vehicle is hypothesized for an image actually obtained of a car or some
unknown vehicle, the material properties of the van are used, but image measurements are
obtained from the image of the car at locations given by transforming the coordinates of the
van points (in the van center coordinate frame) to the image frame computed for the unknown
vehicle.
4. Varying Contextual Support
It is widely known that the explicit use of contextual knowledge can improve scene interpretation
performance. Such contextual knowledge has been, typically, in the form of spatial
and geometric relationships between scene objects or between different components of an ob-
ject. Contextual support for an image region (point) being labeled consists of the neighboring
regions (points), the class memberships of which influence the class assignment for the region
(point) under consideration. In the feature extraction scheme described in section 3, contextual
knowledge is used in an implicit manner. Recall that a set of points is hypothesized to belong
to a specific type of object, and this hypothesis is verified. Thus, the class identity of each of
Figure
3: Top row - the car and van object types with points selected on the surface with
different material properties and/or surface normals. For the van - Point 1: Vulcanized Rubber,
2: Aluminum Alloy, 3: Polystyrene-like polymer, 4: Steel, 5: Polypropylene-like polymer,
Steel, 7: Polypropylene-like polymer For the car - Point 1: Vulcanized Rubber, 2: Steel, 3: Steel,
4: Chromed Steel, 5: Steel, 7: Carbon Steel. Bottom row - assignment
of point labels under erroneous hypotheses. First, the two wheels of the vehicle are used to
establish a local, 2D, object-centered coordinate frame. Then, the 2D to 2D transformation
between model and image is determined, and the model points and labels are transformed to
the image.
these points is made in the context of the identities of the other points.
The computation of feature I2 described in section 3 requires 4 points while the computation
of I1 requires only 3 points. Thus, I1 and I2 require different amounts of contextual support.
In some situations the available, useful, contextual support may be even lower, since it may be
possible to extract only one or two points reliably, e.g., where the object is partially occluded
or if the specific aspect view contains only a homogeneous surface. The formulation of feature
I1 does allow for this reduction in contextual support as explained below.
The variation in context may also be viewed as a variation in the dimensionality of a
specific predetermined subspace of the thermophysical feature space. The dimensionality of
the subspace determines the number of measurements required from the thermal image, and
hence the degree of context used. As the number of image measurements is increased, fewer
dimensions are required in the precomputed subspace and more context is derived from the
imaged object. In the method presented below, a subspace in thermophysical measurement
space is precomputed for each invariant feature established for an object class.
Consider the formulation of feature I1 which can be expressed in the form
where the column vectors - b i that compose -
. For non-zero invariants, the column
vectors -
b i that compose B also span ! 3 . In the above, we have points in an n-D space,
3. The points are divided into two sets each consisting of n points, and the two sets
differ by one element, and share
The measurement matrices in (16) may be expressed, in general, as:
and -
Consider ~j, which spans the null space of B
0: Note that B
3. The feature value, jBj
; is non-infinite and
non-zero if and only if - b T
Furthermore, it can be shown that:
The proof of this statement is included in appendix A.
For convenience of terminology, let us call the vector ~j the null-space vector. The components
of the null-space vector can be found by,
Consider instances of the null-space vector defined above for different scene conditions.
Each scene condition, in which the object is imaged, results in a different null space vector ~j(j)
For a given object, it may be possible to find a collection of n
and a decomposition into two sets of n points each such that the corresponding the null-space
vectors, ~j(j), for the different scenes are tightly clustered in the measurement space, i.e., they
vary minimally from scene to scene. If this condition is met it is reasonable to use a single
predetermined average null-space vector, ~j , that characterizes the object irrespective of the
scene conditions. Now, only two measurement vectors, -
bn and -
are needed to compute
the invariant feature - using the image and hypothesized object class, as described in section 3.
Equation (19) becomes,
where f is the feature value obtained using the average (representative) null-space vector.
In general, one needs to search (during a training phase) for an appropriate set of n
points, and a decomposition of this set into two sets of n points each that share
with the constraint that the null space vectors vary minimally from scene to scene. This will
establish the optimal two-point invariants for the object.
In order for the null-space vector approach to be useful in an object recognition scheme,
separability between the object classes must be ensured. That is, for the correct object hypoth-
esis, the feature value given by equation (21) must be as expected and must remain invariant
to scene conditions while in the case that the object hypothesis is erroneous a value other than
the expected invariant feature value is obtained. Consider a two-class separation problem. The
null-space vectors from different scenes for each object will ideally form a tight cluster. Separability
between these clusters can be measured using any of the many established statistical
measures, e.g. the Mahalanobis distance [29].
The variation in contextual support offered by the above methods may be exploited in a
hierarchal system where the two-point formulation is used for a quick, initial classification, with
low missed detection rate but perhaps high false alarm rate, to eliminate fruitless branches in
a broad search tree. In the following section we present results illustrating the classification
ability of the method described in section 3 as well as that of the null-space vector approach
for multiple objects.
4 Experimental Results
The method of computing thermophysical affine invariants discussed above was applied to real
imagery acquired at different times of the day. Several types of vehicles were imaged:
A van containing mostly plastic (composite) body panels, a car made entirely of sheet-metal
body panels, a military tank, and two different military trucks (Figs. 3,4). Several points were
selected (as indicated in the figures) on the surfaces of different materials and/or orientation.
The measurement vector given by eqn (8) was computed for each point, for each image/scene.
Figure
4: Three of the vehicles used to test the object recognition approach, (from top left
clockwise) tank, truck 1 and truck 2. The axis superimposed on the image show the object
centered reference frames. The numbered points indicate the object surfaces used to form the
measurement matrices. These points are selected such that there are a variety of different
materials and/or surface normals within the set.
Hypothesis: Truck 1 Truck 1 Truck 1 Truck 1 Truck 1
Data From: Truck 1 Tank Van Car Truck 2
9 am -0.68 1.38 -1.00 -6.66e14 -7.03
Table
1: Values of the I1-type feature used to the identify the vehicle class, truck 1. The feature
consisted of point set f4; 7; 8; 10g, corresponding to the points labeled in figure 6. The feature
value is formed using the thermophysical model of truck 1 and the data from the respective
other vehicles. When this feature is applied to the correctly hypothesized data of the tank it
has a mean value of -0.57 and a standard deviation of 0.13. This I1-type feature produces a
good stability measure of 4.5, and good separability between correct and incorrect hypotheses.
The feature values for incorrect hypotheses are at least 3.32 standard deviations away from the
mean value for the correct hypothesis.
Hypothesis: Truck 1 Truck 1 Truck 1 Truck 1 Truck 1
Data From: Truck 1 Van Car Tank Truck 2
9 am -0.31 -454.87 -252.78 -9.38 29.9e5
Table
2: Values of the I2-type feature used to the identify truck 1. The feature consisted of point
set f2; 3; 5; 9g, corresponding to the points labeled in figure 4. The feature value is formed using
the thermophysical model of truck 1 and the data from the respective other vehicles. When
this feature is applied to the correctly hypothesized data of truck 1 it has a mean value of -0.49
and a standard deviation of 0.46. The feature values under an incorrect hypothesis are at least
15.2 standard deviations away from the mean value for the correct hypothesis.
Given N points on the object to form the feature of type I1, or of type I2, there are
Each of the nN features have a distribution of values from scene to scene. The non-zero variance
is due to many factors including: (a) how well the linear thermophysical model is satisfied, (b)
calibration errors, and (c) numerical computation errors. In our experiments the features were
computed for all the combinations of points for the different scenes. A large number of four-
point-sets yielded features with low variance from scene to scene. First, the point sets were
rated based on their stability/low variance, where the stability is defined as the average value
of the feature over time, divided by the standard deviation of the feature for that time period.
Next, the most stable features were evaluated for inter-class separation as explained below.
As mentioned in section 3 the feature is computed based on the hypothesized identity of the
object (and it's thermophysical properties). Hence, if the object identity (class membership) k
is hypothesized in the image region r, then the image measurements are used along with the
thermophysical properties of object class k to generate a feature value f k (r). Verification of this
hypothesis may be achieved by comparing f k (r) with a class prototype -
f k . This comparison
may be achieved using any of a number of available classifier rules, e.g., minimum distance
rule. Since it is important that erroneous hypotheses be refuted, one must consider inter-class
behavior as well as intra-class behavior of the feature. To experimentally investigate such
behavior using real imagery we adopted the following procedure. Given an image of a vehicle,
(1) assume the pose of the vehicle is known, then (2) use the front and rear wheels to establish
an object centered reference frame. The center of the rear wheel is used as the origin, and center
of the front wheel is used to specify the direction and scaling of the axes. The coordinates of
the selected points are expressed in terms of this 2D object centered frame. For example, when
a van vehicle is hypothesized for an image actually obtained of a car or some unknown vehicle,
the material properties of the van are used, but image measurements are obtained from the
image of the car at locations given by transforming the coordinates of the van points (in the
van center coordinate frame) to the image frame computed for the unknown vehicle.
Table
shows inter-class and intra-class variation for a feature of type I1 for the truck
object class - for images obtained at eight different times over two days. The behavior of
the invariant feature formed by one choice of a set of 4 points is shown in table 1 for correct
hypothesis. Also, table 1 shows the case where the hypothesized object is the truck 1 while
the imaged object is either a car, van, truck 2, or tank. As can be seen, the correct hypothesis
generates a feature value that is invariant and distant from the feature values generated by
erroneous hypotheses. Thus, the feature is shown to have good characteristics in both (high)
inter-class separability and (low) intra-class stability. Table 2 shows similar results for a feature
of type I2 also used for separating truck 1 from other objects. Table 3 describes the performance
of a feature of type I2 used to separate the tank object class from other classes. Features for
each of the other object classes were also examined, and performance similar to that described
above was observed. Also, each class produced a large number of features (of each type) that
demonstrated good inter-class separation and intra-class stability.
The method described in section 4 was used to compute two-point invariant features for the
5 types of classes examined above. The ten points chosen on the truck 1 vehicle corresponding
to the points in Fig. 4 provide 210 different features. For each selection of a pair of points, the
average null vector is computed over different images under the correct hypothesis. This vector
is then used to compute the feature under mistaken hypotheses. The ability of this feature to
separate correct hypotheses from mistaken hypotheses can be described by the measure
and -m are the mean values for the correct hypothesis and mistaken hypothesis,
respectively, and oe c and oe m are the corresponding standard deviations [30]. The performance
Hypothesis: Tank Tank Tank Tank Tank
Data From: Tank Van Car Truck 1 Truck 2
9 am 1.07 -2.42e5 1 -13.19 -5.31
Table
3: Values of a feature of type I2 used to the identify the tank, for correct and mistaken
hypothesis. The feature consisted of point set f3; 5; 7; 9g, corresponding to the points labeled
in figure 4. The feature value is formed using the thermophysical model of tank and the data
from the respective other vehicles. When this feature is applied to the correctly hypothesized
data of the tank it has a mean value of 2.52 and a standard deviation of 1.82. The feature value
under a mistaken hypothesis is at least 3.3 standard deviations away from the average value
under the correct hypothesis.
Hypothesis: Truck 1 Truck 1 Truck 1 Truck 1
Data From: Tank Van Car Truck 2
Feature 1: 78.13 14.89 3.55 9.33
Feature 2: 75.72 13.01 3.30 7.25
Feature 3: 29.46 6.46 3.29 6.97
Feature 4: 24.98 5.62 3.23 6.81
Feature 5: 16.29 5.42 3.02 5.86
Table
4: Separability measure for the two-point features derived in section 4. The features
were derived for separating the class Truck 1 from other classes. Larger values indicate better
separation between correct hypotheses and erroneous ones.
of the best five features identified for the truck 1 object class is given in table 4. Similar features
are available for the other object classes.
The approach described above is promising in that it makes available features that are (1)
invariant to scene conditions, (2) able to separate different classes of objects, and (3) based on
physics based models of the many phenomena that affect LWIR image generation.
It is important to note that although the derivation of the features explicitly used the
constraint that the value be invariant from one scene to another for a given object class, class
separation was not explicitly incorporated in the derivation of the features. Hence, practical
use of the approach for recognition requires searching all the possible features for the best
separation. It is not clear that a solution will always exist for a collection of object classes.
Note that different aspects of an object may be imaged - the set of visible points being different
for each aspect. The complexity of the search task is compounded by attempting to ensure inter-class
separation in the presence of erroneous pose hypothesis, which we have not considered in
this paper.
The hypothesis of object pose and identity is best achieved by employing geometrical invariance
techniques [2]. For example, conics may be fit to wheels which manifest high contrast
in LWIR imagery, and their parameter values may be used to compute GI's. This may be
employed to generate object identity and pose that may be verified by the thermophysical
invariance scheme described above. This approach is also being investigated at present.
Note that the formulation described in this paper assumes that the scene objects are passive,
i.e., internal thermal sources are not explicitly modeled. The experiments were conducted on
vehicles that had not been exercised prior to (or during) image acquisition. Reformulation of
the scheme presented in this paper to explicitly incorporate internal sources will be a interesting
area of future research.
The specification of optimal classifiers that use as input the features established in this
paper is another area that merits investigation. In our experiments, we have found that the
distribution of the features, especially under erroneous hypotheses, are poorly modeled by the
commonly used Gaussian distribution. We are investigating the use of Symmetric Alpha Stable
Distributions which appear to model the behavior of the features more closely. Our research
in this area is directed at realizing classifiers with low false alarm rates and high detection
probabilities that use features based on the the above physics based approach for a variety of
applications.
6
Acknowledgments
The authors thank Tushar Saxena and Deepak Kapur of the Institute for Logic and Program-
ming, Computer Science Dept, State University of New York at Albany, for making available
to us their algorithmic elimination methods. This research was supported by the AFOSR contract
F49620-93-C-0063, the AFOSR grant LRIR-93WL001, an AFOSR Laboratory Graduate
Fellowship, ARPA contract F33615-94-C-1529 and the National Science Foundation Contract
IRI-91109584.
--R
Geometric Invariance in Computer Vision
"Bayesian Inference in Model-Based Vision"
"View Variation of Point-Set and Line-Segment Features"
"Recognizing Planar Objects Using Invariant Image Features"
"Semi-Local Invariants"
"Direct Computation of Qualitative 3D Shape and Motion Invariants"
"Model-based Invariants for 3D Vision"
"Noise-Resistant Invariants of Curves"
"Quasi-Invariant Properties and 3D Shape Recovery of Non- Straight, Non-Constant Generalized Cylinders"
"Polarization-based material classification from specular reflection"
"Photometric Invariants Related to Solid Shape"
"Surface Descriptions from Stereo and Shading"
"3-D Stereo Using Photometric Ratios"
"Using Illumination Invariant Color Histogram Descriptors for Recognition"
"Reflectance Ratio: A Photometric Invariant for Object Recog- nition"
"Thermal Invariants for Infrared Target Recognition"
"The Viewpoint Consistency Constraint,"
"Integrated Analysis of Thermal and Visual Images for Scene Interpretation"
"A Phenomenological Approach to Multisource Data Integration: Analyzing Infrared and Visible Data"
"Thermal and Visual Information Fusion for Outdoor Scene Perception"
"Robust Physics-based Sensor Fusion"
Fundamentals of Heat Transfer
"Thermal Radiation,"
Thermal Radiative Properties of Coatings
Thermal Radiative Properties of Nonmetallic Solids
"Computing Invariants using Elimination Methods,"
"Foundations of the Theory of Algebraic Invariants"
"On the Generalized Distance in Statistics,"
Introduction to Statistical Pattern Recognition
--TR
--CTR
Horst W. Haussecker , David J. Fleet, Computing Optical Flow with Physical Models of Brightness Variation, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.23 n.6, p.661-673, June 2001
Ronald Alferez , Yuan-Fang Wang, Geometric and Illumination Invariants for Object Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.21 n.6, p.505-536, June 1999 | image understanding;thermal image;invariance;physics-based computer vision;model-based vision |
628779 | The Complex Representation of Algebraic Curves and Its Simple Exploitation for Pose Estimation and Invariant Recognition. | AbstractNew representations are introduced for handling 2D algebraic curves (implicit polynomial curves) of arbitrary degree in the scope of computer vision applications. These representations permit fast, accurate pose-independent shape recognition under Euclidean transformations with a complete set of invariants, and fast accurate pose-estimation based on all the polynomial coefficients. The latter is accomplished by a new centering of a polynomial based on its coefficients, followed by rotation estimation by decomposing polynomial coefficient space into a union of orthogonal subspaces for which rotations within two-dimensional subspaces or identity transformations within one-dimensional subspaces result from rotations in $x,y$ measured-data space. Angles of these rotations in the two-dimensional coefficient subspaces are proportional to each other and are integer multiples of the rotation angle in the $x,y$ data space. By recasting this approach in terms of a complex variable, i.e., $x+iy=z$, and complex polynomial-coefficients, further conceptual and computational simplification results. Application to shape-based indexing into databases is presented to illustrate the usefulness and the robustness of the complex representation of algebraic curves. | Introduction
For shape recognition involving large databases, position-invariant 2D shape-recognition and
pose-estimation have to be performed by fast algorithms providing robust accurate estimates
subject to noise, missing data (perhaps due to partial occlusion) and local deformations. There
is a sizeable literature on alignment and invariants based on moments [1], B-splines [2], superquadrics
[3], conics [4], combinations of straight lines and conics, bitangents [5], differential
invariants [6, 7, 8, 9], and Fourier descriptors. Two observations can be made: these two problems (pose
estimation and pose-independent recognition) are often studied independently; and though the preceding
approaches have their own significant strengths and handle certain situations very well,
both problems remain unsolved when
large noise and large shape deformation are present, data is missing, and maximum estimation
speed and estimation accuracy are important. This paper presents an approach based on
algebraic (also referred to as implicit polynomial) curve models which meets these requirements
for pose estimation and for pose-invariant object recognition.
2D algebraic curves of degrees 4 or 6 are able to capture the global shape of curve data of
interest (see Fig. 1). However, our primary interest in algebraic curves in this paper is that
they have unparalleled features crucial to fundamental computer vision applications. First, we
derive a complete set of invariants for fast pose-invariant shape recognition. By a complete set
of independent invariants, we mean that it is possible to reconstruct, without ambiguity, the
algebraic curve shape from the set of invariants only. Since this set specifies the shape in a
unique way, these invariants can be used as "optimal" shape descriptors. (Of conceptual interest
is that this set of invariants, defined in the paper, is not necessarily complete algebraically).
Second, algorithms are given in the paper which permit single-computation pose estimation,
and slightly slower but more accurate iterative pose estimation based on all the polynomial
coefficients. These features are due to the following contributions.
1. A complex basis is introduced for the space of coefficient vectors leading to the complex
representation of algebraic curves of degree n, where n is arbitrary. The components of
the basis vectors are complex numbers, even though the resulting polynomial is still real.
This provides a representation from which we derive a complete set of rotation-invariants.
We fully describe how real and complex vector representations are related.
The complex basis arises not from consideration of the geometry of the algebraic curve
but rather from consideration of the geometry of the transformation of its coefficients and
is built on the fact that when the (x, y) data set is rotated, the resulting coefficient vector
undergoes an orthogonal transformation [1].
2. A new accurate estimate of an "intrinsic center" for an algebraic curve, which is based
on all of the polynomial coefficients. The algebraic curve can then be centered by moving
its intrinsic center to the origin of the data coordinate system. This centering is invariant
to any prior translations a shape may have undergone. Computing the center requires a
single computation followed by a few iterations.
3. Pose-invariant shape recognition is realized by centering an algebraic curve, as in 2., and
then basing shape recognition on the complete set of rotation-invariant shape descriptors
indicated above in 1.
4. Fast pose-estimation. Estimating the Euclidean transformation, that has taken one shape
data-set into another using all the polynomial coe-cients is realized by: initial translation
estimation as the dierence in the estimated intrinsic centers, based on 2., of the two
curves; this is followed by rotation estimation, based on 1.; and is completed by one or
two iterations of translation estimation followed by rotation estimation, where coe-cients
for the two polynomials are compared using the representation in 1.
What is most important in the preceding methodology is that estimators used are linear
or slightly nonlinear functions, which are iterated a few times, of the original polynomial co-
e-cients, thus being stable. Highly nonlinear functions of polynomial coe-cients, which have
been used previously, usually are not as robust and repeatable.
Note, the invariant representations we use with algebraic curves are global. At this stage of
the research, we do not know if these representations are well suited to highly-accurate, finely-
discriminating shape recognition, and therefore look upon the recognition, when dealing with
very large image databases, to be used for the purpose of indexing into these databases.
The idea is to reduce the number of images that must be considered in the database by a large
factor using our invariant recognition, and then do more careful comparison on the shapes that
remain. This more careful comparison would involve pose estimation for alignment followed
by careful comparison of aligned shape data. This careful comparison of aligned shapes could
then be done through our PIMs measure [10] or through other measures.
How do Fourier descriptors compare with the algebraic curve model? Fourier descriptors,
like algebraic curves, provide a global description for shapes from which pose and recognition
can be processed. But the Fourier approach has difficulty in general with open patches or is
restricted to star shapes, depending on the parameterization used. In particular, when dealing
with missing data, small extra components, and random perturbations, heuristic preprocessing
must be applied to the curve data in order to close and clean it, and then arc-length normalization
problems arise in the comparison between shapes. This is also the case with curvature
descriptors [11]. In matching open curves having inaccurately known end points, both of these
approaches require extensive computation for aligning starting and stopping points.
For algebraic 2D curves and 3D surfaces, the most basic approach to comparison of two
shapes is iterative estimation of the transformation of one algebraic model to the other followed
by recognition based on comparison of their coefficients or based on comparing the data set
for one with the algebraic model for the other [12, 1, 10]. But the problem of initializing this
iterative process still remains. A major advance was the introduction of intrinsic coordinate
systems for pose estimation and Euclidean algebraic invariants for algebraic 2D curves and 3D
surfaces [1, 6]. These are effective and useful, but as published do not use all the information
in the coefficients.
The present paper is an expansion of the complex representation first presented in [13]. A
later paper [14] presented a partial complex representation for algebraic curves for obtaining
some recognition invariants related to our complete set of invariants, and for pose estimation
based on only a few polynomial coefficients. It also uses a different center for an algebraic
curve. Moreover, the authors are not concerned with concepts developed in this paper such as
complex bases and invariant subspaces, complete sets of invariants, and pose estimation using
and combining information available in all the polynomial coefficients.
In Sec. 2, we introduce the decomposition of the coefficient space with two examples: conics
and cubics under rotation. This leads to the complex representation of algebraic curves. Then, in
Sec. 3, the proposed pose estimation technique is described with validation experiments. Sec. 4
is dedicated to recognition with invariants. The proposed recognition algorithm is applied in
the context of indexing into a database of silhouettes, where algebraic representations allow us
to easily handle missing parts along the contour.
2 Algebraic Curve Model
2.1 Definition
An algebraic curve is defined as the zero set of a polynomial in 2 variables. More formally, a
2D implicit polynomial (IP) curve is specified by the following polynomial of degree n:

$$f_n(x,y) = \sum_{0\le j,k;\ j+k\le n} a_{jk}\,x^j y^k
= \underbrace{a_{00}}_{H_0} + \underbrace{a_{10}x + a_{01}y}_{H_1}
+ \underbrace{a_{20}x^2 + a_{11}xy + a_{02}y^2}_{H_2} + \cdots
+ \underbrace{a_{n0}x^n + a_{n-1,1}x^{n-1}y + \cdots + a_{0n}y^n}_{H_n} = 0. \qquad (1)$$

Here $H_r(x,y)$ is a homogeneous binary polynomial (or form) of degree r in x and y. Usually,
we denote by $H_n(x,y)$ the leading form. An algebraic curve of degree 2 is a conic, degree 3 a
cubic, degree 4 a quartic, and so on.
Polynomial $f_n$ is conveniently represented by its coefficient vector A, having components $(a_{jk})$
(the number of coefficients is $\frac{1}{2}(n+1)(n+2)$), where

$$A = [a_{00}\ \ a_{10}\ \ a_{01}\ \ a_{20}\ \ a_{11}\ \ a_{02}\ \ \cdots\ \ a_{0n}]^T.$$
2.2 A Useful Basis for Conics and Cubics under Rotation
We first consider the representation of conics and cubics under rotation in order to exhibit
properties we want to exploit. A cubic curve is defined by 10 coefficients:

$$f_3(x,y) = a_{00} + a_{10}x + a_{01}y + a_{20}x^2 + a_{11}xy + a_{02}y^2
+ a_{30}x^3 + a_{21}x^2y + a_{12}xy^2 + a_{03}y^3 = 0. \qquad (2)$$

When a cubic is rotated through angle $\theta$, the coefficients $(a_{jk})$ are transformed
as a messy function of $\theta$. The rotation matrix $R(\theta)$ for the data is:

$$\begin{bmatrix} x' \\ y' \end{bmatrix} =
\begin{bmatrix} \cos\theta & -\sin\theta \\ \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} x \\ y \end{bmatrix} \qquad (3)$$

which specifies the counter-clockwise rotation of the curve by $\theta$ radians, equivalently the clockwise
rotation of the coordinate system by $\theta$ radians. The original cubic coefficients form vector A
and the transformed ones form A'. We denote with a prime the representation after transformation.
By substituting (3) in (2), and after expansion, we obtain the linear relation between the two
vectors, $A' = M(\theta)A$, where $M(\theta)$ is a function of the rotation angle $\theta$ only. This
matrix can be put into a block diagonal form, $M(\theta) = \mathrm{diag}(L_0, L_1, L_2, L_3)$,
where the block $L_j$ transforms the coefficients of the homogeneous polynomial of degree j, i.e.,
the j-th form. Therefore, the size of the block $L_j$ is $(j+1)\times(j+1)$. We have $L_0 = 1$ and $L_1 = R(\theta)$.
The elements of these blocks are non-linear functions of $\cos\theta$ and $\sin\theta$. For a second
degree form, to put things into a form exhibiting invariance and simple dependence on angle $\theta$,
we define a new parameterization, $\mu_{20}$, $\nu_{20}$, $\mu_{11}$, of the coefficients $a_{20}$, $a_{11}$, $a_{02}$ of the polynomial,
by applying a fixed matrix transformation $N_2$:

$$[\mu_{20}\ \ \nu_{20}\ \ \mu_{11}]^T = N_2\,[a_{20}\ \ a_{11}\ \ a_{02}]^T.$$

These new parameters $\mu_{20}$, $\nu_{20}$ and $\mu_{11}$ are linear functions of the original polynomial
coefficients. With this new representation, the matrix $L_2$ is mapped into a matrix where $2\theta$ appears:

$$\begin{bmatrix} \mu'_{20} \\ \nu'_{20} \\ \mu'_{11} \end{bmatrix} =
\begin{bmatrix} \cos 2\theta & -\sin 2\theta & 0 \\ \sin 2\theta & \cos 2\theta & 0 \\ 0 & 0 & 1 \end{bmatrix}
\begin{bmatrix} \mu_{20} \\ \nu_{20} \\ \mu_{11} \end{bmatrix}. \qquad (4)$$

The reason for this $\mu$, $\nu$ notation is that $\mu_{jk}$ and $\nu_{jk}$ are the real and imaginary parts,
respectively, of the complex coefficient $c_{jk}2^{j+k}$
introduced in the next section. When $j = k$, the complex coefficient $c_{jj}$ is always real, and
we have $\nu_{jj} = 0$. For $L_3$, it turns out that a similar simplification is possible with the
transformation $N_3$ applied to the coefficients $a_{30}$, $a_{21}$, $a_{12}$, $a_{03}$,
and $L_3$ is mapped into:

$$\begin{bmatrix} \mu'_{30} \\ \nu'_{30} \\ \mu'_{21} \\ \nu'_{21} \end{bmatrix} =
\begin{bmatrix} \cos 3\theta & -\sin 3\theta & 0 & 0 \\ \sin 3\theta & \cos 3\theta & 0 & 0 \\
0 & 0 & \cos\theta & -\sin\theta \\ 0 & 0 & \sin\theta & \cos\theta \end{bmatrix}
\begin{bmatrix} \mu_{30} \\ \nu_{30} \\ \mu_{21} \\ \nu_{21} \end{bmatrix}.$$
In summary, when a cubic is rotated, there exists a natural basis, determined by the square
matrices $N_j$, in which $M(\theta)$ is mapped into diagonal $2\times2$ and $1\times1$ sub-block form:
$M(\theta) \mapsto \mathrm{diag}\bigl(1,\ R(\theta),\ R(2\theta),\ 1,\ R(3\theta),\ R(\theta)\bigr)$.
The coefficient vector of the cubic in the new basis is B, and B' after rotation $R(\theta)$. It is
clear that in this new basis, the coefficient space is decomposed into a union of orthogonal
one or two dimensional subspaces invariant under rotations. More specifically, the vector B is
decomposed into scalars $\mu_{jj}$, which are unchanged, and 2D vectors $[\mu_{jk}\ \nu_{jk}]^T$, which rotate
with angles $\theta$, $2\theta$, or $3\theta$. This leads directly to a simple and stable way to compute the relative
orientation between cubics, namely, estimate $\theta$ by comparing the angle $\theta_{jk}$ of each pair of
coefficients $[\mu_{jk}\ \nu_{jk}]^T$ in B with its transformation in B'. Moreover, it is easy to compute a
complete set of independent invariants under rotation for a conic:
2 linear invariants (i.e., linear functions of the IP coefficients): the coefficients $a_{00}$ and $\mu_{11} = a_{20} + a_{02}$;
2 quadratic invariants (i.e., second degree functions of the coefficients): the squared radiuses
$\mu_{10}^2 + \nu_{10}^2 = a_{10}^2 + a_{01}^2$ and $\mu_{20}^2 + \nu_{20}^2 = (a_{20} - a_{02})^2 + a_{11}^2$ of the 2D vectors
$[\mu_{10}\ \nu_{10}]^T$ and $[\mu_{20}\ \nu_{20}]^T$;
and 1 relative angle: $2\arg(\mu_{10} + i\nu_{10}) - \arg(\mu_{20} + i\nu_{20})$, the angle between the doubled phase of $[\mu_{10}\ \nu_{10}]^T$ and the phase of $[\mu_{20}\ \nu_{20}]^T$.
Note, to understand the angular invariant, under a coordinate system rotation of $\theta$ we see from (4)
that $\mu_{20} + i\nu_{20}$ transforms to $e^{i2\theta}(\mu_{20} + i\nu_{20})$, while $\mu_{10} + i\nu_{10}$ transforms to $e^{i\theta}(\mu_{10} + i\nu_{10})$. Hence, the angular difference above
is invariant to rotations. For a cubic, we complete the set of independent invariants with:
2 quadratic invariants: the squared radiuses $\mu_{30}^2 + \nu_{30}^2$ and $\mu_{21}^2 + \nu_{21}^2$;
and 2 relative angles: the angles between the phases of $[\mu_{30}\ \nu_{30}]^T$, $[\mu_{21}\ \nu_{21}]^T$ and that of $[\mu_{10}\ \nu_{10}]^T$, after compensating for their different rotation rates.
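To make the conic case concrete, the following Python sketch (our own illustration, not code from the paper; the function name and the particular scaling of the complex coefficients are our assumptions) evaluates the five conic rotation invariants listed above directly from the real conic coefficients.

```python
import numpy as np

def conic_rotation_invariants(a00, a10, a01, a20, a11, a02):
    """Complete set of rotation invariants of the conic
    f(x, y) = a00 + a10*x + a01*y + a20*x^2 + a11*x*y + a02*y^2."""
    # two linear invariants
    lin1 = a00
    lin2 = a20 + a02
    # two quadratic invariants (squared radii of the 2D vectors)
    quad1 = a10**2 + a01**2
    quad2 = (a20 - a02)**2 + a11**2
    # one relative angle: a vector rotating with theta compared against
    # one rotating with 2*theta, via the complex coefficients
    c10 = 0.5 * (a10 - 1j * a01)            # first-degree complex coefficient
    c20 = 0.25 * ((a20 - a02) - 1j * a11)   # second-degree complex coefficient
    rel_angle = np.angle(c10**2 * np.conj(c20))   # 2*arg(c10) - arg(c20)
    return lin1, lin2, quad1, quad2, rel_angle
```

Rotating the input data and refitting the conic should leave these five values essentially unchanged, up to fitting noise.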
In order to generalize this approach to IPs of arbitrary degree, we turn to complex numbers
and thus the complex representation of IPs.
2.3 Complex Representation of Algebraic Curves
Since we are dealing with rotations and translations of 2D curves, complex representation
provides a simplification in the analysis and implementation of pose estimation or pose-invariant
object recognition. Given the polynomial $f_n(x,y) = \sum_{0\le j,k;\ j+k\le n} a_{jk}x^j y^k$, the main idea is to
rewrite $f_n(x,y)$ as a real polynomial of the complex variables $z = x + iy$ and $\bar z = x - iy$:

$$f_n(z,\bar z) = \sum_{0\le j,k;\ j+k\le n} a_{jk}\left(\frac{z+\bar z}{2}\right)^{j}\left(\frac{z-\bar z}{2i}\right)^{k}.$$

Using binomial expansions for $(z+\bar z)^j$ and $(z-\bar z)^k$, we rewrite $f_n(z)$ with new complex coefficients $c_{jk}$:

$$f_n(z,\bar z) = \sum_{0\le j,k;\ j+k\le n} c_{jk}\, z^j \bar z^k. \qquad (5)$$

Notice that the coefficients $(c_{jk})$ are complex linear combinations of the $(a_{jk})$, and that $c_{kj} = \bar c_{jk}$
since the polynomial is real. We call the vector $C = [c_{00}\ c_{10}\ c_{01}\ c_{20}\ c_{11}\ c_{02}\ \cdots]^T$ the complex vector representation of
an algebraic curve, which is defined by a real polynomial in $z$ and $\bar z$:

$$f_n(z,\bar z) = C^T\,[1\ \ z\ \ \bar z\ \ z^2\ \ z\bar z\ \ \bar z^2\ \ z^3\ \ z^2\bar z\ \ \cdots]^T,$$

where $[1\ z\ \bar z\ z^2\ z\bar z\ \bar z^2\ z^3\ \cdots]^T$ is the vector of complex monomials. With
this notation, $j + k$ specifies the degree of the multinomial in $z$ and $\bar z$ associated with $c_{jk}$. Notice
that the sub-set of polynomials in $z$ only is the well-known set of harmonic polynomials.
For example, the complex representation of a conic is $f_2(z,\bar z) = c_{00} + 2\,\mathrm{Re}(c_{10}z) + 2\,\mathrm{Re}(c_{20}z^2) + c_{11}z\bar z$,
where $c_{00}$ and $c_{11}$ are real numbers. From the previous section, it is easy to
show that $\mu_{jk}$ and $\nu_{jk}$ are the real and imaginary parts of the expression within
parentheses. The complex representation of a cubic is $f_3(z,\bar z) = f_2(z,\bar z) + 2\,\mathrm{Re}(c_{30}z^3) + 2\,\mathrm{Re}(c_{21}z^2\bar z)$.
The principal benefit of the vector complex representation is the very simple way in which
complex coefficients transform under a rotation of the polynomial curve. Indeed, we see that if
the IP shape is rotated through angle $\theta$ (see (3)), $z$ transforms as $z' = z e^{i\theta}$,
and by substituting in (5):

$$f_n'(z,\bar z) = \sum_{0\le j,k;\ j+k\le n} c_{jk}\, e^{i(j-k)\theta}\, z^j \bar z^k.$$

Hence, the coefficients of the transformed polynomial are

$$c'_{jk} = c_{jk}\, e^{i(j-k)\theta}. \qquad (6)$$

Moreover, as presented in the appendix, there is a recursive and thus fast way to compute the
matrix providing the transformation of a given polynomial coefficient vector A to the new basis
C for any degree (it is the $N_j$ matrices introduced in the previous section).
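As a concrete illustration of the change of representation and of the transformation rule (6), the sketch below builds the complex coefficients c_jk from the real coefficients a_jk by substituting x = (z + z̄)/2 and y = (z − z̄)/(2i), and then rotates the curve by multiplying each c_jk by e^{i(j−k)θ}. This is our own illustrative code (the dictionary-based coefficient layout and the function names are assumptions), not the authors' implementation.

```python
import numpy as np
from collections import defaultdict
from math import comb

def real_to_complex_coeffs(a):
    """a: dict {(j, k): a_jk} of the real polynomial sum a_jk x^j y^k.
    Returns dict {(p, q): c_pq} with f = sum c_pq z^p zbar^q, z = x + iy."""
    c = defaultdict(complex)
    for (j, k), ajk in a.items():
        # x^j = ((z + zbar)/2)^j ,  y^k = ((z - zbar)/(2i))^k
        for p in range(j + 1):
            for q in range(k + 1):
                coeff = (comb(j, p) * comb(k, q) * (-1) ** (k - q)
                         / (2 ** j * (2 * 1j) ** k))
                c[(p + q, (j - p) + (k - q))] += ajk * coeff
    return dict(c)

def rotate_complex_coeffs(c, theta):
    """Rotate the curve by theta: c'_jk = c_jk * exp(i (j - k) theta), cf. eq. (6)."""
    return {(j, k): cjk * np.exp(1j * (j - k) * theta) for (j, k), cjk in c.items()}
```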
3 Pose Estimation
As described in the previous section, the relation between the coefficients C of a polynomial
and C' of the rotated polynomial is particularly simple when using the complex representation,
allowing us to compute the difference of orientation between two given polynomial curves, C
and C' (see (6)). It turns out that the complex representation also has nice properties under
translation, allowing pose estimation under Euclidean transformation in a very fast way and
using all the polynomial coefficients. At present, we view the most computationally-attractive
approach to pose estimation to consist of two steps. 1) Compute an intrinsic center for each
algebraic curve based on the coefficients of its IP representation. This generalizes [1] to
optimally use all the information in the coefficients. It is an iterative process with only a few
iterations. 2) Center each algebraic curve at the origin of the coordinate system (i.e., move the
intrinsic center to the origin of the coordinate system), and then compute the rotation of one
algebraic curve with respect to the other based on the coefficients of their IP representations.
This optimally uses all the information about the rotation in the coefficients. It is not an
iterative process. Note, the translation of one curve with respect to another is then given in
terms of the difference in their intrinsic centers. Hence, we are treating location and rotation
estimation separately in different ways. When processing speed is less important, maximum
accuracy under Euclidean transformations is achieved by following the preceding by a few
iterations as discussed in Sec. 3.4.
Figure 1: 3L fits of 4th degree polynomials to a butterfly, a guitar body, a mig 29 and a sky-hawk
airplane. These are followed by two 6th degree polynomial fits.
3.1 Implicit Polynomial Fitting
Since object pose estimation and recognition are realized in terms of coefficients of shape-modeling
algebraic curves, the process begins by fitting a 2D implicit polynomial to a data
set representing the 2D shape of interest. For this purpose, we use the gradient-one fitting [15],
which is a least squares linear fitting of a 2D explicit polynomial to the data set where the
gradient of the polynomial at any data point is soft-constrained to be perpendicular to the
data curve and to have magnitude equal to 1. This fitting is of lower computational cost and
Figure 2: In (a), the original data set is perturbed with a colored Gaussian noise along the
normal with a standard deviation of 0.1 for a shape size of 3 (equivalently, the data is contained
in a box having side length 375 pixels and the noise has 12.5 pixels standard deviation). In (b),
10% of the curve is removed. In (c), five 4th degree fits are superimposed with associated noisy
data sets each having standard deviation of 0.1. In (d), five fits are superimposed when 10% of
the curve is removed at random starting points.
has better polynomial estimated-coefficient repeatability than all previously existing IP fitting
methods [15]. This algorithm is an improved version of the 3L fitting [16], taking advantage of
the ridge regression approach. The algebraic curve is the zero set of this explicit polynomial.
A side benefit of use of the gradient-one soft-constraint is that the fitted polynomial is then
normalized in the sense that the polynomial multiplicative constant is uniquely determined.
Fig. 1 shows measured curve data and the fits obtained. This fitting is numerically invariant
with respect to Euclidean transformations of the data set, and stable with respect to noise and
a moderate percentage of missing data, as shown in Fig. 2. 4th degree fits allow us to robustly
capture the global shape, and higher degree polynomials provide more accurate fits, as shown
in Fig. 1 with 6th degree polynomials.
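The sketch below conveys only the flavor of such a fit: a ridge-regularized linear least squares in the monomial coefficients, with the gradient at each data point softly constrained to equal a supplied unit normal. It is a simplified stand-in, not the authors' 3L or gradient-one implementation; all names, the normal-based constraint form, and the parameter values are our assumptions.

```python
import numpy as np

def design_matrices(x, y, degree):
    """Monomial matrix M and its x- and y-derivative matrices Mx, My,
    with columns ordered by total degree: (j, k), j + k = d, d = 0..degree."""
    idx = [(d - k, k) for d in range(degree + 1) for k in range(d + 1)]
    M  = np.stack([x**j * y**k for j, k in idx], axis=1)
    Mx = np.stack([j * x**max(j - 1, 0) * y**k for j, k in idx], axis=1)
    My = np.stack([k * x**j * y**max(k - 1, 0) for j, k in idx], axis=1)
    return M, Mx, My, idx

def fit_implicit_polynomial(points, normals, degree=4, mu=1.0, ridge=1e-6):
    """Least-squares fit of f(x, y) = 0 to the data, softly asking the
    gradient at each data point to equal the supplied unit normal."""
    x, y = points[:, 0], points[:, 1]
    M, Mx, My, idx = design_matrices(x, y, degree)
    A = np.vstack([M, np.sqrt(mu) * Mx, np.sqrt(mu) * My])
    b = np.concatenate([np.zeros(len(x)),
                        np.sqrt(mu) * normals[:, 0],
                        np.sqrt(mu) * normals[:, 1]])
    # the ridge term keeps the normal equations well conditioned
    coeffs = np.linalg.solve(A.T @ A + ridge * np.eye(A.shape[1]), A.T @ b)
    return dict(zip(idx, coeffs))
```

The gradient constraint also fixes the overall scale of the polynomial, which is one role the gradient-one soft-constraint plays in the text above.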
3.2 Translation
It is well known that a non-degenerate conic has a center. Given two conics, the two centers are
very useful for estimating the relative pose, since each conic can be centered before computing
the relative orientation. The goal of this section is to compute, in the complex representation,
a stable center with similar properties for polynomials of any degree.
From (5), if z is transformed as $z \to e^{i\theta}z + t$, i.e., rotated through angle $\theta$ and translated by t,
then after expansion we obtain the transformed leading form $H'_n$ with coefficients

$$c'_{jk} = c_{jk}\,e^{i(j-k)\theta}, \qquad j + k = n.$$

Consequently, the complex coefficients $(c'_{jk})$ of the transformed leading form $H'_n$ are unaffected by
the translation in the Euclidean transformation. Continuing the expansion, we obtain the transformed
next-highest degree form $H'_{n-1}$, whose coefficients $(c'_{jk})$, $j + k = n - 1$, are linear functions of the translation component t:

$$c'_{jk} = e^{i(j-k)\theta}\bigl(c_{jk} + (j+1)\,c_{j+1,k}\,t + (k+1)\,c_{j,k+1}\,\bar t\bigr), \qquad j + k = n - 1. \qquad (7)$$

The first interesting property of (7) is that the term depending on the angle $\theta$ is a multiplicative
factor in this set of equations. This means that, given any polynomial, the translation t
which minimizes the linear least squares problem

$$\sum_{j+k=n-1} \bigl| c_{jk} + (j+1)\,c_{j+1,k}\,t + (k+1)\,c_{j,k+1}\,\bar t \bigr|^2 \qquad (8)$$

does not depend on the rotation applied to the polynomial. Therefore, the algebraic curve can
be translated by $t_{linearcenter}$ to center it at the origin of the coordinate system. This centering is
invariant to any Euclidean transformation the algebraic curve may have originally undergone.
This center is no different than the Euclidean center of a polynomial derived with the real
representation of IPs in [1].
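Under the reconstructed form of (7)-(8) above, t_linearcenter is the solution of a small real linear least-squares problem in (Re t, Im t). The sketch below (illustrative code with hypothetical names; the exact sign and conjugation conventions depend on the transformation convention chosen for (7)) sets up and solves that system.

```python
import numpy as np

def linear_center(c, n):
    """Solve the linear least-squares problem (8) for the translation t.
    c: dict {(j, k): c_jk} of complex coefficients, n: polynomial degree."""
    rows, rhs = [], []
    for k in range(n):                          # all (j, k) with j + k = n - 1
        j = n - 1 - k
        A = (j + 1) * c.get((j + 1, k), 0.0)    # multiplies t
        B = (k + 1) * c.get((j, k + 1), 0.0)    # multiplies conj(t)
        c0 = c.get((j, k), 0.0)
        # residual c0 + A t + B conj(t); split into real and imaginary parts
        rows.append([(A + B).real, -(A - B).imag])
        rows.append([(A + B).imag,  (A - B).real])
        rhs.append(-c0.real)
        rhs.append(-c0.imag)
    u, v = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)[0]
    return complex(u, v)                        # t_linearcenter
```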
Even if $t_{linearcenter}$ is computed by solving a linear system as above, it is not using all the
information available about the curve location; in particular, coefficients $c_{jk}$, $j + k < n - 1$, are not
involved. To use these, we proceed as follows. Compute $t_{linearcenter}$. Translate the polynomial,
having coefficient vector C, by $t_{linearcenter}$. The resulting polynomial coefficient vector $\tilde C$ will
be independent of any previous translations the original data set may have undergone (in
practice approximately independent due to measurement noise and other deviations from the
ideal). Now recenter the $\tilde C$ polynomial to obtain $\tilde t(\tilde C)$, the coefficient vector of $\tilde C$ translated by $\tilde t$,
by computing and using the translation $\tilde t$ for which $\|\tilde t(\tilde C)\|^2$ is minimum. Note, $\tilde t(\tilde C)$ is an nth degree polynomial in $\tilde t$, so we
compute $\tilde t$ iteratively by using a first order Taylor series approximation and thus only the linear
$\tilde t$ monomials in $\tilde t(\tilde C)$. All the $\tilde c_{jk}$ are used in this computation, which is why $t_{center}$ has smaller
variance than does $t_{linearcenter}$. Generally, the optimum $\|\tilde t\|$ is small and only 2 or 3 iterations
are needed to converge to the $\tilde t$ minimizing $\|\tilde t(\tilde C)\|^2$.
This defines the center $t_{center}$ which we use.
Note, this $t_{center}$, determined solely by the curve coefficients C, is of sub-optimal accuracy because
we do not weight the components of C in an optimal way to achieve maximum likelihood or
minimum mean square error estimates.
The $t_{center}$ of a high degree polynomial has the same property as the conic center, namely,
it is covariant with the Euclidean transformation applied to the data. Consequently, it can
be used in the same way for comparing polynomial coefficients: each polynomial C and C' is
centered by computing $t_{center}$ and $t'_{center}$, before computing the relative orientation using all the
transformed coefficients.
For illustration, consider the simple case of a conic. For a conic
$f_2(z,\bar z) = c_{00} + 2\,\mathrm{Re}(c_{10}z) + 2\,\mathrm{Re}(c_{20}z^2) + c_{11}z\bar z$,
the set of equations (7) is reduced to $c'_{10} = e^{i\theta}(c_{10} + 2c_{20}t + c_{11}\bar t)$ and
its conjugate. The well known center of a conic is defined as the point for which the linear
terms vanish, i.e., its position $t_{linearcenter}$ satisfies $c_{10} + 2c_{20}\,t_{linearcenter} + c_{11}\,\bar t_{linearcenter} = 0$. It
is different from the more stable $t_{center}$ introduced before, which is the one minimizing
$2\,|c_{10} + 2c_{20}\,t_{center} + c_{11}\,\bar t_{center}|^2$ together with the squared changes of the remaining coefficients; note
that this criterion involves all the IP coefficients of the conic.
The second advantage of the complex representation is that we can derive not only one center
but several, a different one for each summand in (8), with the same covariance to Euclidean
transformations. The property used here is that $c'_{n-1-k,k}$ depends only on $c_{n-1-k,k}$, $c_{n-k,k}$, and
$c_{n-1-k,k+1}$, i.e., the complex representation decouples the rotation and the translation, which
is not the case with the coefficients $a_{jk}$. For every equation in (7), we are able to define a
new Euclidean center for the IP as the $t_k$ for which the right side is 0, as done in the conic case.
Consequently, an IP of degree n has [n/2] extra centers $t_k$ ([u] denotes the greatest integer not
exceeding u), defined by:

$$c_{n-1-k,k} + (n-k)\,c_{n-k,k}\,t_k + (k+1)\,c_{n-1-k,k+1}\,\bar t_k = 0. \qquad (9)$$

A nice property of these centers for two curves is that they match one to another without
a matching search problem, since we know the degree k associated with the center $t_k$. Hence,
given two polynomials C and C' of degree n, after the computation of the [n/2] centers for each
polynomial, approximate but simple pose estimation is determined by [n/2] matched points.
Pose estimation of 4th degree algebraic curves can be solved by doing the pose estimation
between two centers, pose estimation of 6th degree polynomials by doing the pose estimation
between two triangles, and so on. Of course, these centers do not use all the pose information
contained in the polynomial coefficients, so this pose estimation can be used either for fast
computation or for a first approximation to maximum accuracy Euclidean pose estimation.
3.3 Rotation
Experimentally, it turns out that the center is very stable in the presence of noise and small
perturbations (see Fig. 3). The computation of the rotation is in practice less accurate. Therefore,
it is important to take advantage of all the information available in the polynomial to
obtain the most accurate orientation estimation possible. Assume that the two polynomials
are each centered by using the Euclidean center $t_{center}$ defined in the previous section.
For a cubic, under rotation $R(\theta)$, C transforms to the vector C' with $c'_{jk} = c_{jk}e^{i(j-k)\theta}$.
In this equation, as seen in Sec. 2.2, the complex coefficients $c_{10}$ and $c_{21}$
are rotated by angle $\theta$, $c_{20}$ by angle $2\theta$, and $c_{30}$ by angle $3\theta$. We use all of
this information to estimate $\theta$.
In the general case, from (6), C' as a function of C under a rotation is given by $c'_{jk} = c_{jk}e^{i(j-k)\theta}$.
Given C and C', we simply use least squares to estimate $\theta$:

$$\min_\theta \sum_{j>k} \bigl| c'_{jk} - c_{jk}\,e^{i(j-k)\theta} \bigr|^2 \qquad (10)$$

which leads to maximization of
$\sum_{j>k} |c'_{jk}||c_{jk}|\cos\bigl((j-k)\theta - \arg(c'_{jk}) + \arg(c_{jk}) + 2\pi l_{jk}\bigr)$, where $l_{jk}$ is an
unknown integer when $j - k \ne 1$, and $\frac{2\pi l_{jk}}{j-k}$ is the unknown phase. Integer $l_{jk}$ is between 0 and
$j - k - 1$. It is inserted to make the argument of the cosine close to 0, thus permitting the cosine
to be well approximated by its second order Taylor expansion. Then an explicit approximated
solution is derived:

$$\hat\theta = \frac{\displaystyle\sum_{j>k} w_{jk}\,\frac{\arg(c'_{jk}) - \arg(c_{jk}) + 2\pi l_{jk}}{j-k}}{\displaystyle\sum_{j>k} w_{jk}} \qquad (11)$$

where the weights $w_{jk}$ are $(j-k)^2|c'_{jk}||c_{jk}|$. (These weights can be easily used to test if an IP has rotational
symmetries of angle $\pi$, $2\pi/3$, or any other kind. Consequently, we are not limited to non-symmetric shapes
in computing the pose estimate.) The obtained solution is a good approximate solution, and a
few iterations may be used to obtain the closest sub-optimal solution. An optimal estimate can
be obtained using a Bayesian formulation and iterative globally optimizing techniques. In this
case, the summands in (10) would be weighted by an appropriate inverse covariance matrix.
Since the computational cost is very small, the best estimated angle can be obtained by
computing the estimate for all possible integers $l_{jk}$ and choosing the one which minimizes the
weighted standard deviation of the individual angle estimates $\frac{\arg(c'_{jk}) - \arg(c_{jk}) + 2\pi l_{jk}}{j-k}$. An even faster alternative is to get
a good first estimate of $\theta$ by combining $\arg(c'_{jk}) - \arg(c_{jk})$ for the lowest values of $j - k$, or by using the
centers defined with (9).
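A simple way to realize this estimator numerically is to maximize the objective behind (10) by dense sampling of θ and then apply one weighted refinement in the spirit of (11). The sketch below is our own brute-force variant (the function names, the sampling resolution, and the refinement step are assumptions), not the authors' implementation.

```python
import numpy as np

def estimate_rotation(c, c_rot, n_samples=3600):
    """Estimate the rotation angle between two centred curves from their
    complex coefficient dictionaries, by maximizing the objective behind (10)."""
    pairs = [(j - k, c[(j, k)], c_rot[(j, k)])
             for (j, k) in c if j > k and (j, k) in c_rot]
    thetas = np.linspace(-np.pi, np.pi, n_samples, endpoint=False)
    score = np.zeros_like(thetas)
    for m, cjk, cpjk in pairs:
        score += np.abs(cjk) * np.abs(cpjk) * np.cos(
            m * thetas - (np.angle(cpjk) - np.angle(cjk)))
    theta0 = thetas[np.argmax(score)]
    # one weighted refinement step around theta0 (cf. the weights in (11))
    num = den = 0.0
    for m, cjk, cpjk in pairs:
        w = m**2 * np.abs(cjk) * np.abs(cpjk)
        delta = np.angle(cpjk * np.conj(cjk) * np.exp(-1j * m * theta0)) / m
        num += w * (theta0 + delta)
        den += w
    return num / den if den > 0 else theta0
```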
3.4 Estimation of Euclidean transformations
To estimate the Euclidean transformation between two shapes:
                 0.1 noise   0.2 noise   10% missing data   20% missing data
angle               5.7%       72.1%           2.9%              37.8%
translation         5.9%       13.4%           3.0%               6.0%

Table 1: Standard deviation in percentage of the average of the angle and the norm of one
translation component, with various perturbations for the object in Fig. 2. Added colored Gaussian
data noise has standard deviations 0.1 and 0.2 (12.5 and 25 pixels respectively). Occlusions are
10% and 20% of the curve at random starting points. Statistics are for 200 different random
perturbations of each kind on the original shape data. As in Fig. 3, true rotation is 1 radian,
true translation is 1. (Data lies in a box having side-length 375 pixels.)
First, each polynomial is centered by computing its center $t_{center}$ using information in all
the coefficients of $f_n$, as discussed in Sec. 3.2.
Then the rotation alignment is performed by using information in all the coefficients of
$f_n$, using (11) and the discussion in Sec. 3.3.
A first estimate of the translation and rotation are the displacement from one center to
the other, and the rotation alignment.
To remove remaining small translation and rotation estimation errors, the translation and
the orientation alignment are iterated one or two times by minimizing the sum of squared
errors between the two sides of (7) and for all the other $c_{jk}$, most of which involve higher
degree monomials in t and $\bar t$. This joint estimation of translation and rotation results
in maximum accuracy; a minimal sketch of the overall pipeline is given below.
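The sketch below assembles this pipeline from the linear_center and estimate_rotation helpers sketched earlier. It is illustrative only: the translation composition and the sign conventions follow the reconstructed equations above (flip the sign of theta for the opposite convention), the helper for translating coefficients is our own, and the final joint refinement step is omitted.

```python
import numpy as np
from math import comb

def translate_coeffs(c, t):
    """Coefficients of f(z + t, zbar + conj(t)); assumes c contains an entry
    for every (j, k) with j + k <= degree."""
    out = {}
    for (p, q) in c:
        out[(p, q)] = sum(c[(j, k)] * comb(j, p) * comb(k, q)
                          * t**(j - p) * np.conj(t)**(k - q)
                          for (j, k) in c if j >= p and k >= q)
    return out

def estimate_pose(c_a, c_b, n):
    """Coarse Euclidean pose taking curve A onto curve B: centre both curves
    with their linear centers, estimate the rotation between the centred
    coefficient vectors, then compose the translation from the two centers."""
    t_a, t_b = linear_center(c_a, n), linear_center(c_b, n)
    ca0 = translate_coeffs(c_a, t_a)
    cb0 = translate_coeffs(c_b, t_b)
    theta = estimate_rotation(ca0, cb0)
    translation = t_b - np.exp(1j * theta) * t_a   # centers are covariant
    return translation, theta
```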
The proposed pose estimation is numerically stable to noise and a moderate percentage of
missing data, as illustrated in Fig. 3. The pose estimation error due to missing data increases
gracefully in the range from 0 to 15% (19 pixels). Similar results are obtained in the range [0, 0.12]
for the standard deviation of the noise. The added noise is a colored noise [15], i.e., a Gaussian
noise in the direction normal to the shape curve at every point, then averaged along 10
consecutive points. We want to emphasize the fact that even if a noise standard deviation of 0.1
(equivalently, 12.5 pixels) is only 3% of the size of the butterfly shape, this value of the noise
represents a very large perturbation of the shape, as shown in Fig. 2(a). For greater amounts
of noise or missing data, we have the well known threshold effect [17] in estimation problems, as
shown in Table 1. It arises in our problem because of the nonlinear computations in the angle
estimation.
Figure 3: Left, variation of the standard deviation of the angle and the x component of the
translation as a function of increasing colored noise standard deviation. Right, variation as a
function of increasing percentage of missing data. Plotted values are the standard deviation of the error
as a percentage of the average values of the pose components. Measurements are based on 200
realizations. True rotation is 1 radian, translation is 1.
4 Recognition Using Invariants
In this section we solve the pose-independent shape recognition problem based on invariants
arising from the complex representation.
4.1 Stable Euclidean Invariants
Figure 4: Left, variation of the standard deviations of the invariants $|c_{40}|$, $c_{11}$, and two relative-angle
invariants as a function of increasing colored Gaussian noise standard deviation ($\theta_{jk}$ denotes
$\arg(c_{jk})$). Right, variation of the standard deviation of the same invariants as a function of an
increasing percentage of missing data at random starting points. Values are the standard deviation of the
error as a percentage of the average value of the invariant. 200 realizations of the shape data
were used.
When the IP is centered with the computation of the Euclidean center as described previously
in Sec. 3.2, we have canceled the dependence of the polynomial on translation, and the
only remaining unknown transformation is the rotation.
             0.1 noise   0.2 noise   10% missing data   20% missing data
$c_{11}$        14.0%       26.8%           8.2%              16.7%

Table 2: Standard deviations as a percentage of the means of a few invariants in response
to various data perturbations. Gaussian colored noise has standard deviations 0.1 or 0.2.
Occlusions are 10% or 20% of the curve at random starting points. Statistics for each case are
computed from 200 different random realizations.
Since the number of coefficients of $f_n$ is $\frac{1}{2}(n+1)(n+2)$ and the number of degrees of freedom
of a rotation is 1, the counting argument indicates that the number of independent geometric
invariants [18] is $\frac{1}{2}(n+1)(n+2) - 1$. We directly have $\lfloor n/2\rfloor + 1$ linear invariants, which are the $c_{jj}$.
From (6), we deduce that all other $|c_{jk}|^2$ are invariants under rotations. The number of these
independent quadratic invariants (2nd degree functions of the $c_{jk}$) is the number of complex
coefficients $c_{jk}$ with $j > k$. This number is $n(n+2)/4$ for even degrees and $(n+1)^2/4$
for odd degrees. The invariants $|c_{jk}|$ are geometric distances, but there are angles which also are
Euclidean invariants for an IP. Indeed, relative angles of the form $(l-m)\arg(c_{jk}) - (j-k)\arg(c_{lm})$
are preserved under rotations. We can choose a maximal independent subset of these relative
angles, and these along with the preceding linear and quadratic invariants provide a complete
set of independent rotation invariants for an IP of degree n, as for the cubic case in Sec. 2.2. We
want to emphasize the fact that the obtained invariants are linear, quadratic, or arctan functions
of ratios of linear combinations of coefficients, even for high degree polynomials. This leads to
invariants less sensitive to noise than others, such as algebraic invariants, which are rational
functions, perhaps of high degrees, of the polynomial coefficients. Moreover, this is the first
complete set of Euclidean invariants for high degree IP curves appearing in the computer vision
literature.
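The sketch below assembles such a set from the complex coefficients of a centred curve: the real diagonal coefficients, the magnitudes of the off-diagonal coefficients, and relative angles taken with respect to one reference coefficient. Referencing all angles to a single coefficient is our own way of picking a maximal independent subset; the code is illustrative, not the authors' implementation.

```python
import numpy as np

def rotation_invariants(c):
    """Rotation invariants of a centred curve from its complex coefficients:
    real diagonal coefficients c_jj, magnitudes |c_jk| for j > k, and
    relative angles referenced to the first off-diagonal coefficient."""
    diag = {(j, j): c[(j, j)].real for (j, j2) in c if j == j2}
    off = sorted((j, k) for (j, k) in c if j > k)
    mags = {(j, k): abs(c[(j, k)]) for (j, k) in off}
    ref_j, ref_k = off[0]
    ref_arg, ref_m = np.angle(c[(ref_j, ref_k)]), ref_j - ref_k
    # (l - m) arg(c_jk) - (j - k) arg(c_lm) with (l, m) fixed to the reference
    angles = {(j, k): np.angle(np.exp(1j * (ref_m * np.angle(c[(j, k)])
                                            - (j - k) * ref_arg)))
              for (j, k) in off[1:]}
    return diag, mags, angles
```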
As shown in Fig. 4 and Table 2, invariants are individually less stable than pose parameters.
In particular, angle invariants are more sensitive to curve-data perturbations. Nevertheless,
these angular invariants are useful for discriminating between shapes as illustrated in Fig. 5.
Moreover, as shown in Fig. 3, the better stability of the translation estimation in comparison
to the angle estimation allows rotation invariants to be computed out of the range of stability
of the angle estimation (0.2 noise std. dev. and 20% missing data). We observed that a few
angular invariants have a standard deviation several times larger than the others. It turns out
that, for particular shapes, a few angular invariants become bimodal up to a particular amount
of noise, such as the relative angle involving $\theta_{20}$ and $\theta_{31}$, as shown in Fig. 5 for the sky-hawk.

Figure 5: Left, scatter of a vector of two magnitude invariants for perturbed data sets (colored noise
with 0.05 standard deviation) of the 4 IPs of degree 4 in Fig. 1. Right, scatter, in radians, of
a vector of two relative-angle invariants for the same perturbed data sets.
4.2 Invariant Recognition
              guitar   butterfly   sky-hawk    mig
guitar          100%        0%         0%       0%
butterfly         0%      100%         0%       0%
sky-hawk          0%        0%       100%       0%
mig               0%        0%         0%     100%

guitar           95%      1.5%       3.5%       0%
butterfly      27.5%     72.5%         0%       0%
sky-hawk        2.5%        0%      97.5%       0%
mig             9.5%        0%        52%     38.5%

guitar          100%        0%         0%       0%
butterfly         0%      100%         0%       0%
sky-hawk        0.5%        0%        96%     3.5%
mig               4%        0%         0%      96%

Table 3: Percentage recognition on 3 sets of 200 perturbed shapes, for colored noise of standard
deviations 0.05 and 0.1, and for 10% missing data, respectively.
Fig. 5 shows scatter plots of vectors of pairs of invariants for the 4 shapes of degree 4 of Fig. 1.
Though the scatters of individual components of the invariant vectors are not always well separated,
the use of the complete set of invariants appears to yield highly accurate recognition. The
recognizer used is Bayesian recognition based on a multivariate Gaussian distribution
for each object, having a diagonal covariance matrix estimated from 200 noisy perturbed
shapes for each object with noise perturbation standard deviation 0.05 in the normal direction.
This model is used to do recognition on two other noisy sets having standard deviation 0.05
and 0.1 (the latter is 12.5 pixels and is at the limit of the stability for pose) and on one set with
10% missing data. Results are quite good (see Table 3). For large noise perturbations (0.1 std.
dev. colored noise), the sky-hawk becomes difficult to distinguish from the other airplane, since
details are lost in noise, but is still different from the guitar or the butterfly shapes. Better
accuracy would be achieved by using 6th degree IP curves [15].
4.3 Indexing in a silhouette database
In the previous section, recognition is applied to datasets deformed by synthetic perturbations:
colored noise and missing data. To test the proposed recognition algorithm on real data, we
used a database of 1100 boundary contours of fish images obtained from a web site [11]. The
number of data points in each silhouette in this database varies from 400 to 1600. This database
contains not only fishes but more generally sea animals, i.e., the diversity of shapes is large.
So as not to use size for easy discrimination between shapes, all shapes are normalized to the
same size. To prepare the database, every shape is fit by a 4th degree polynomial and then
the polynomial curves are centered by setting $t_{center}$ at the origin. The last step is to compute
the rotation invariants as in the previous section. We first run queries by example to test the
rotation invariants. Two examples are shown in Fig. 6, where the rotation invariance is clear.
Figure 6: Queries by example, invariant to Euclidean transformations and reflections.
Then, we test the stability to small perturbations, such as removing data on a few shapes and
running queries with these modified shapes. In Fig. 7, small parts are removed in the query. The
capability of the IPs to handle missing data (especially if small patches are removed at many
locations throughout a silhouette) and to handle open (non-closed) curves is one of the main
advantages of this description in comparison to descriptions using arc length parameterization,
such as Fourier descriptors or B-splines.
Figure 7: Queries by example where relatively small parts, such as fins, are removed.

With our approach, query by sketch is also possible. For every query, a Bayesian recognizer
is used, since the variability of each invariant can have very different standard deviations. This
variability is estimated from a training set. This training set is synthetically generated from the
given sketch by adding perturbations: sample functions of a colored noise. The obtained standard
deviations are used to weight each invariant during the comparison between shapes in the
database, i.e., the Mahalanobis distance is used. (The optimal weighting is to use a full inverse
covariance matrix in the quadratic-form recognizer, rather than the diagonal covariance matrix
approximation used in these experiments.) It turns out that, depending on the talent of the
drawer and on the number of occurrences of, and variability in, the target shape in the database,
the user may want to control the similarity measure used between the query and the searched-for
database shapes. To handle this, the standard deviation used in generating training sets
is decreased or increased depending on whether more or less similarity is needed. Therefore,
in addition to the query, the user has to specify what degree of similarity he/she wants to
use: very similar (std. dev. is 0.02), similar (std. dev. is 0.1), weakly similar (std. dev. is
0.2). Fig. 8 illustrates the database shapes found for the same query but using three different
similarity criteria.
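A minimal sketch of this diagonal-covariance (Mahalanobis) ranking is given below; the array layout and names are hypothetical, and sigma stands for the per-invariant standard deviations estimated from the synthetic training set described above.

```python
import numpy as np

def rank_database(query_vec, db_vecs, sigma):
    """Rank database invariant vectors by Mahalanobis distance to the query,
    using a diagonal covariance (per-invariant standard deviations sigma)."""
    d = (db_vecs - query_vec) / sigma      # broadcast over database rows
    scores = np.sum(d * d, axis=1)
    return np.argsort(scores)              # indices of shapes, best match first
```

Decreasing or increasing sigma corresponds directly to the "very similar / similar / weakly similar" settings described in the text.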
Obviously, a 4 th degree polynomial is not able to discriminate shapes with only small scale
dissimilarities. Better discrimination power can be achieved by using 6 th degree polynomials,
see [15].
Figure 8: Queries using the same query example but with a recognizer trained on different
standard deviations of the colored noise (0.02, 0.1, and 0.2, respectively). The first three closest
shapes are always retrieved, but increasing variability can be observed in the other retrieved
shapes.
5 Conclusions
Though the shape-representing IPs that we use may be of high degree, we have introduced fast,
accurate pose estimation, and fast, accurate pose-independent shape recognition based on geometric
invariants. Approximate initial single-computation estimates are computed, and these
are iterated 2 or 3 times to achieve the closest local minimum of the performance functionals
used. The pose estimation uses all the IP coefficients, but is not optimal because it does not use
optimal weightings. The pose-independent recognition uses estimated centering based on all of
the IP coefficients, followed by rotation-invariant recognition based on a complete set of geometric
rotation invariants. However, optimal weightings were not used here either. Nevertheless,
the effectiveness of the pose-invariant recognition is illustrated by the indexing application in a
database of 1,100 silhouettes. Though some of the invariants may not be effective discriminators,
the complete set is. If put into a Bayesian or Maximum Likelihood framework, we can achieve
fully optimal pose estimation and pose-independent shape recognition.
Extensions to 3D based on tensors are undergoing further development [19], as is the
handling of local deformations. Of great importance is to extend the pose estimation and
transformation-invariant shape recognition to handle two situations. First is that for which
considerable portions of a silhouette are missing, perhaps due to partial occlusion, or where the
silhouette is much more complicated. Then, pose estimation and recognition can be based on
"invariant patches". These invariant patches are discussed in [10], and the ideas in the present
paper should be applicable. The second extension is to handle affine rather than just Euclidean
transformations of shapes. The intermediate transformation, scaled Euclidean, should be easy
to handle, since an isotropic scaling of the data set by $\lambda$ simply multiplies every monomial of
degree d in the polynomial by the factor $\lambda^d$. The full affine transformation is more challenging,
and we are studying it. One approach is to convert the Affine Transformation Problem to a
Euclidean Transformation Problem through a normalization based on the coefficients of the
polynomials fit to the data [1]. The challenge here is to develop a normalization that uses much
of the information contained in the polynomial coefficients [18] and which is highly stable.
Another subject of interest is to consider small locally affine deformations along a silhouette.
6 Appendix
From Complex to Real Representation
What is the transformation relating the coefficients in the real and complex polynomial representations?
Computing the coefficients of the complex representation given a real IP appears to
be complicated. The transformation from the complex C to the real A vector representation,
i.e., the reverse way, is easier to compute. Vector C duplicates information since $c_{kj} = \bar c_{jk}$.
Therefore, it is in practice more efficient to use the vector representation B, with components $\mu_{jk}$, $\nu_{jk}$,
introduced in Sec. 2.2. Indeed, B is a minimal description of C.
The transformation matrix T between B and A is block diagonal, since the coefficients
for a form transform independently of the coefficients for each other form. Thus,

$$T = \mathrm{diag}(T_0, T_1, \ldots, T_n), \qquad (12)$$

where $T_l$ is the transformation matrix of the homogeneous polynomial $H_l$ of degree l.
The goal of this section is to find a recursive way to compute $T_l$. Consider a family of formal real
homogeneous polynomials $D_l(z)$ of degree l, written in complex representation with parameters $d^j_l$.
We deduce a second order recursive formula for $D_l(z)$, obtained from the expansion of $\mathrm{Re}(d_l z^l)$.
From this recursion, we deduce a recursive computation for $T_l$, in which $\binom{l}{k}$ denotes the binomial
coefficient $\frac{l!}{(l-k)!\,k!}$. As an illustration, the first three iterations give $T_1$, $T_2$, and $T_3$,
and one can check that, from Sec. 2.2, $N_2 = T_2^{-1}$ and $N_3 = T_3^{-1}$.
These matrices $T_l$ specify T (see (12)), which describes how a 2D polynomial is transformed
from its complex C to its real A representation. In practice, to obtain the real and imaginary parts
of the complex polynomial coefficients that represent a data set, we first fit a real polynomial
to the data set to estimate the coefficient vector A, and then obtain B by $B = T^{-1}A$.
Acknowledgments
This work was supported in part by a postdoctoral grant from INRIA, Domaine de Voluceau,
Rocquencourt, France.
--R
Estimation of planar curves
Recovery of Parametric Models from Range Images: The Case for Superquadrics with Global Deformations.
Recognizing Planar Object Using Invariant Image Features.
Geometric Invariance in Computer Vision.
resistant invariants of curves.
Pims and invariant parts for shape recognition.
Robust and efficient shape indexing through curvature scale space
On using CAD models to compute the pose of curved 3D objects.
A new complex basis for implicit polynomial curves and its simple exploitation for pose estimation and invariant recognition.
Complex representations of algebraic curves.
Improving the stability of algebraic curves for applications.
Information Transmission
Covariant conics decomposition of quartics for 2D object recognition and affine alignment
Pose estimation of free-form 3D objects without point matching using algebraic surface models
--TR
--CTR
Amir Helzer , Meir Barzohar , David Malah, Stable Fitting of 2D Curves and 3D Surfaces by Implicit Polynomials, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.10, p.1283-1294, October 2004
Cem Ünsalan, A model based approach for pose estimation and rotation invariant object matching, Pattern Recognition Letters, v.28 n.1, p.49-57, January, 2007
Jean-Philippe Tarel , William A. Wolovich , David B. Cooper, Covariant-Conics Decomposition of Quartics for 2D Shape Recognition and Alignment, Journal of Mathematical Imaging and Vision, v.19 n.3, p.255-273, November
Thomas B. Sebastian , Benjamin B. Kimia, Curves vs. skeletons in object recognition, Signal Processing, v.85 n.2, p.247-263, February 2005
Michael J. Black , Benjamin B. Kimia, Guest Editorial: Computational Vision at Brown, International Journal of Computer Vision, v.54 n.1-3, p.5-11, August-September | implicit polynomial curves;algebraic curves;shape recognition;complete-sets of rotation invariants;complex polynomials;euclidean invariants;pose estimation;shape representation;curve centers;pose-independent curve recognition |
628780 | A Cooperative Algorithm for Stereo Matching and Occlusion Detection. | AbstractThis paper presents a stereo algorithm for obtaining disparity maps with occlusion explicitly detected. To produce smooth and detailed disparity maps, two assumptions that were originally proposed by Marr and Poggio are adopted: uniqueness and continuity. That is, the disparity maps have a unique value per pixel and are continuous almost everywhere. These assumptions are enforced within a three-dimensional array of match values in disparity space. Each match value corresponds to a pixel in an image and a disparity relative to another image. An iterative algorithm updates the match values by diffusing support among neighboring values and inhibiting others along similar lines of sight. By applying the uniqueness assumption, occluded regions can be explicitly identified. To demonstrate the effectiveness of the algorithm, we present the processing results from synthetic and real image pairs, including ones with ground-truth values for quantitative comparison with other methods. | Introduction
Stereo vision can produce a dense disparity map. The resultant disparity map
should be smooth and detailed; continuous and even surfaces should produce a region of
smooth disparity values with their boundary precisely delineated, while small surface
elements should be detected as separately distinguishable regions. Though obviously
desirable, it is not easy for a stereo algorithm to satisfy these two requirements at the
same time. Algorithms that can produce a smooth disparity map tend to miss the details
and those that can produce a detailed map tend to be noisy.
For area-based stereo methods [13], [18], [29], [7], [2], [12], which match
neighboring pixel values within a window between images, the selection of an
appropriate window size is critical to achieving a smooth and detailed disparity map.
The optimal choice of window size depends on the local amount of variation in texture
and disparity [20], [2], [6], [21], [12]. In general, a smaller window is desirable to avoid
unwanted smoothing. In areas of low texture, however, a larger window is needed so
that the window contains intensity variation enough to achieve reliable matching. On the
other hand, when the disparity varies within the window (i.e., the corresponding surface
is not fronto-parallel), intensity values within the window may not correspond due to
projective distortion. In addition to unwanted smoothing in the resultant disparity map,
this fact creates the phenomena of so-called fattening and shrinkage of a surface. That
is, a surface with high intensity variation extends into neighboring less-textured surfaces
across occluding boundaries.
Many attempts have been made to remedy these serious problems in window-based
stereo methods. One earlier method is to warp the window according to the
estimated orientation of the surface to reduce the effect of projective distortion [23]. A
more recent and sophisticated method is an adaptive window method [12]. The window
size and shape are iteratively changed based on the local variation of the intensity and
current depth estimates. While these methods showed improved results, the first method
does not deal with the difficulty at the occluding boundary, and the second method is
extremely computationally expensive. A typical method to deal with occlusion is bi-directional
matching. For example, in the paper by Fua [6] two disparity maps are
created relative to each image: one for left to right and another for right to left. Matches
which are consistent between the two disparity maps are kept. Inconsistent matches
create holes, which are filled in by using interpolation.
The fundamental problem of these stereo methods is that they make decisions
very locally; they do not take into account the fact that a match at one point restricts
others due to global constraints resulting from stereo geometry and scene consistency.
One constraint commonly used by feature-based stereo methods is edge consistency [22],
[14]; that is, all matches along a continuous edge must be consistent. While constraining
matches using edge consistency improves upon local feature-based methods [1], [26],
they produce only sparse depth maps.
The work by Marr and Poggio [15], [16] is one of the first to apply global
constraints or assumptions while producing a dense depth map. Two assumptions about
stereo were stated explicitly: uniqueness and continuity of a disparity map. That is, the
disparity maps have unique values and are continuous almost everywhere. They devised
a simple cooperative algorithm for diffusing support among disparity estimates to take
advantage of the two assumptions. They demonstrated the algorithm on synthetic
random-dot images. The application of similar methods to real stereo images has been
left largely unexplored probably due to memory and processing constraints at that time.
Recently, Scharstein and Szeliski [27] proposed a Bayesian model of stereo matching. In
creating continuity within the disparity map, support among disparity estimates is non-linearly
diffused. The derived method has results similar to that of adaptive window
methods [12]. Several other methods [3], [8], [11] have attempted to find occlusions and
disparity values simultaneously using the ordering constraint along with dynamic
programming techniques.
In this paper we present a cooperative stereo algorithm using global constraints to
find a dense depth map. The uniqueness and continuity assumptions by Marr and Poggio
are adopted. A three-dimensional array of match values is constructed in disparity space;
each element of the array corresponds to a pixel in the reference image and a disparity,
relative to another image. An update function of match values is constructed for use with
real images. The update function generates continuous and unique values by diffusing
support among neighboring match values and by inhibiting values along similar lines of
sight. Initial match values, possibly obtained by pixel-wise correlation, are used to retain
details during each iteration. After the match values have converged, occluded areas are
explicitly identified.
To demonstrate the effectiveness of the algorithm we provide experimental data
from several synthetic and real scenes. The resulting disparity maps are smooth and
detailed with occlusions detected. Disparity maps using real stereo images with ground-truth
disparities (University of Tsukuba's Multiview Image Database) are used for
quantitative comparison with other methods. A comparison with the multi-baseline
method and the multi-baseline plus adaptive window method is also made.
2. A Cooperative Stereo Algorithm
Marr and Poggio [15], [16] presented two basic assumptions for a stereo vision
algorithm. The first assumption states that at most a single unique match exists for each
pixel; that is, each pixel corresponds to a single surface point. When using intensity
values for matching this uniqueness assumption may be violated if surfaces are not
opaque. A classic example is a pixel receiving contribution from both a fish and a fish
bowl. The second assumption states that disparity values are generally continuous, i.e.
smooth within a local neighborhood. In most scenes the continuity assumption is valid
since surfaces are relatively smooth and discontinuities occur only at object boundaries.
We propose a cooperative approach using disparity space to utilize these two
assumptions. The 3D disparity space has dimensions row r, column c and disparity d.
This parameterization is different from 3D volumetric methods [5], [17], [27], [28] that
use x, y, and z world coordinates as dimensions. Assuming (without loss of generality)
that the images have been rectified, each element (r, c, d) of the disparity space projects
to the pixel (r, c) in the left image and to the pixel (r, c+d) in the right image, as
illustrated in figure 1. Within each element, the estimated value of a match between the
pixels is held.
To obtain a smooth and detailed disparity map an iterative update function is used
to refine the match values. Let $L_n(r, c, d)$ denote the match value assigned to element (r,
c, d) at iteration n. The initial values $L_0(r, c, d)$ may be computed from the images using:

$$L_0(r, c, d) = \phi\bigl(I_L(r, c),\, I_R(r, c + d)\bigr) \qquad (1)$$

where $\phi$ is an image similarity function such as squared differences or normalized
correlation. The image similarity function should produce high values for correct
matches; however, the opposite does not need to be true, i.e., many incorrect matches
might also have high initial match values.
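For instance, initial match values over the whole disparity space can be filled in with a pixelwise similarity such as a Gaussian of the squared intensity difference, as in the sketch below (our own illustration; the similarity choice, the sigma value, and the zero-padding of out-of-range disparities are assumptions, not the paper's exact choice).

```python
import numpy as np

def initial_match_values(left, right, d_max, sigma=16.0):
    """L0(r, c, d) from pixelwise similarity between left(r, c) and
    right(r, c + d); here a Gaussian of the squared intensity difference."""
    rows, cols = left.shape
    L0 = np.zeros((rows, cols, d_max + 1))
    for d in range(d_max + 1):
        shifted = np.zeros_like(right)
        shifted[:, :cols - d] = right[:, d:]   # right(r, c + d)
        # out-of-range disparities are left at zero intensity (low similarity)
        diff = left.astype(float) - shifted.astype(float)
        L0[:, :, d] = np.exp(-(diff * diff) / (2.0 * sigma**2))
    return L0
```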
The continuity assumption implies neighboring elements have consistent match
values. We propose iteratively averaging their values to increase consistency. When
averaging neighboring match values we need a concept of local support. The local
support area for an element determines which and to what extent neighboring elements
should contribute to averaging. Ideally, the local support area should include all and only
those neighboring elements that correspond to a correct match if the current element
corresponds to a correct match. Since the correct match is not known beforehand, some
assumption is required on deciding the extent of the local support. Marr and Poggio, for
example, used elements having equal disparity values for averaging - that is, their local
support area spans a 2D area (d=const.) in the r-c-d space. This 2D local support area
corresponds to the fronto-parallel plane assumption. However, sloping and more general
surfaces require using a 3D area in the disparity space for local support. Many 3D local
support assumptions have been proposed [9], [25], [26], [12]; Kanade and Okutomi [12]
present a detailed analysis of the relationship and differences among them. For simplicity
we use a box-shaped 3D local support area with a fixed width, height and depth, but a
different local support area could be used as well.
Figure 1: Illustration of the inhibitory and support regions between elements for a 2D slice of the 3D
disparity space with the row number held constant.
Let us define $S_n(r, c, d)$ to be the amount of local support for (r, c, d), i.e., the sum
of all match values within a 3D local support area F:

$$S_n(r, c, d) = \sum_{(r', c', d') \in F} L_n(r + r',\, c + c',\, d + d'). \qquad (2)$$
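With a box-shaped support area F, the sum in (2) is a 3D box filter over the match-value array, as in the following sketch (illustrative; the radius values and the use of scipy's uniform_filter are our choices).

```python
import numpy as np
from scipy.ndimage import uniform_filter

def local_support(L, radius=(2, 2, 1)):
    """S_n: sum of match values inside a box-shaped 3D support area F
    of size (2*radius + 1) along (row, column, disparity)."""
    size = tuple(2 * r + 1 for r in radius)
    # uniform_filter returns the local mean; multiply by the box volume for the sum
    return uniform_filter(L, size=size, mode='constant') * float(np.prod(size))
```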
The uniqueness assumption implies there can exist only one match within a set of
elements that project to the same pixel in an image. As illustrated in figure 1 by dark
squares, let Y(r, c, d) denote the set of elements which overlap element (r, c, d) when
projected to an image. That is, each element in Y(r, c, d) projects to pixel (r, c) in the left
image or to pixel (r, c+d) in the right image. With the uniqueness assumption, Y(r, c, d)
represents the inhibition area to a match at (r, c, d).
Let $R_n(r, c, d)$ denote the amount of inhibition $S_n(r, c, d)$ receives from the
elements in Y(r, c, d). Many possible inhibition functions are conceivable; we have
chosen the following for its computational simplicity:

$$R_n(r, c, d) = \left( \frac{S_n(r, c, d)}{\sum_{(r', c', d') \in Y(r, c, d)} S_n(r', c', d')} \right)^{\alpha} \qquad (3)$$

The match value is inhibited by the sum of the match values within Y(r, c, d). The
exponent $\alpha$ controls the amount of inhibition per iteration. To guarantee a single element
within Y(r, c, d) will converge to 1, $\alpha$ must be greater than 1. The inhibition constant $\alpha$
should be chosen to allow elements within the local support F to affect the match values
for several iterations while also maintaining a reasonable convergence rate.
Summing the match values within a local support in equation (2) can result in
oversmoothing and thus a loss of details. We propose restricting the match values
relative to the image similarity between pixel (r, c) in the left image and pixel (r, c + d) in
the right image. In this way, we allow only elements that project to pixels with similar
intensities to have high match values (though pixels with similar intensity do not
necessarily end up with high match values). The initial match values $L_0$, which are
computed using a measure of intensity similarity, can be used for restricting the current
match values $L_n$. Let $T_n(r, c, d)$ denote the value $R_n(r, c, d)$ restricted by $L_0(r, c, d)$:

$$T_n(r, c, d) = L_0(r, c, d)\, R_n(r, c, d). \qquad (4)$$
Our update function is constructed by combining equations (2), (3) and (4) in their
respective order:

$$L_{n+1}(r, c, d) = L_0(r, c, d)\left( \frac{S_n(r, c, d)}{\sum_{(r', c', d') \in Y(r, c, d)} S_n(r', c', d')} \right)^{\alpha}. \qquad (5)$$
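One iteration of (5) can then be written as in the sketch below, reusing the local_support helper above: compute S_n, sum it over the two lines of sight forming Y(r, c, d) (counting the element itself once), apply the exponent alpha, and restrict by L_0. This is our own straightforward implementation of the reconstructed update, with illustrative parameter values, not the authors' optimized code.

```python
import numpy as np

def update_once(L, L0, alpha=2.0, radius=(2, 2, 1), eps=1e-12):
    """One iteration of update (5): support, inhibition, restriction by L0."""
    S = local_support(L, radius)                    # eq. (2)
    rows, cols, dmax1 = L.shape
    # sum of S over the inhibition area Y(r, c, d): elements sharing the
    # left-image pixel (r, c) or the right-image pixel (r, c + d)
    sum_left = S.sum(axis=2, keepdims=True)         # same (r, c), all disparities
    sum_right = np.zeros_like(S)
    for d in range(dmax1):
        for d2 in range(dmax1):
            shift = d - d2                          # (r, c + d) == (r, c' + d2), c' = c + shift
            if shift >= 0:
                sum_right[:, :cols - shift, d] += S[:, shift:, d2]
            else:
                sum_right[:, -shift:, d] += S[:, :cols + shift, d2]
    inhibition = sum_left + sum_right - S           # the element itself counted once
    R = (S / (inhibition + eps)) ** alpha           # eq. (3)
    return L0 * R                                   # eqs. (4) and (5)
```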
While our method uses the same assumptions as Marr and Poggio's, this update
function differs substantially. Using the current notation, the update function that Marr
and Poggio proposed is:
$$L_{n+1}(r, c, d) = \sigma\!\left( \sum_{(r', c', d') \in F} L_n(r + r',\, c + c',\, d + d')
\;-\; \epsilon \sum_{(r', c', d') \in Y(r, c, d)} L_n(r', c', d') \;+\; L_0(r, c, d) \right) \qquad (6)$$

where $\sigma$ is a sigmoid function and $\epsilon$ is the inhibition constant.
Marr and Poggio [15], [16] used discrete match values and a 2D local support for
F, possibly due to memory and processing constraints. Their results using synthetic
random-dot images with step function disparities were excellent. Since real stereo image
pairs have multiple intensity levels and sloping disparities, continuous match values and a
3D local support for F are needed. Marr and Poggio did not address the steps needed to
apply equation (6) to real stereo image pairs [24]. Equation (5) possesses two main
advantages over (6), in addition to supporting the use of real images. First, the values in (5)
are restricted by the initial match values to maintain details. In (6) the initial values
are added to the current values to bias the results towards values which were initially
high. Since (6) does not restrict values that were initially low, oversmoothing and a loss
of details may still occur. Second, the inhibition function in (5) is simpler so a costly
sigmoid function does not need to be computed; for their experiments Marr and Poggio
actually used a threshold function instead of a sigmoid function due to processing
constraints.
3. Explicit Detection of Occluded Areas
Occlusion is a critical and difficult phenomena to be dealt with by stereo
algorithms. With any reasonably complex scene there exists occluded pixels that have no
correct match. Unfortunately most stereo algorithms do not consider this important case
explicitly, and therefore they produce gross errors in areas of occlusion or find disparity
values similar to the foreground or background. Several methods have attempted to
explicitly detect occlusions, including methods using intensity edges [10], multiple
cameras with camera masking [19] and bi-directional (left-to-right and right-to-left)
matching [6]. Recently, several stereo algorithms, Belhumeur and Mumford [3], Geiger,
Ladendorf and Yuille [8] and Intille and Bobick [11] have proposed finding occlusions
and matches simultaneously to help in identifying disparity discontinuities. By imposing
an additional assumption called the ordering constraint these methods have been able to
successfully detect occlusions. The ordering constraint states that if an object a is left of
an object b in the left image then object a will also appear to the left of object b in the
right image. While powerful, the ordering constraint assumption is not always true, and
is violated when pole-like objects are in the foreground.
In our algorithm we try to identify occlusions by examining the magnitude of the
converged match values in conjunction with the uniqueness constraint. Since no correct
match exists in areas of occlusion, all match values corresponding to occluded pixels
should be small. Consider a pixel p in the left image, whose correct corresponding point
is not visible in the right image. Referring to figure 2, for an element v of the array along
the line of sight of p, there are two cases that occur for its projection q on the right image.
The first case, depicted in figure 2(a), is when q's correct corresponding point is visible in
the left image. Then there exists an element v' which corresponds to the correct match
between a pixel p' in the left image and q. Since elements v and v' both project to pixel q,
their match values will inhibit each other due to the uniqueness assumption. Generally,
the correct element v' will have a higher match value, causing the value for element v to
decrease. The second case, depicted in figure 2(b), is a more difficult case. This occurs
when q's true corresponding element is occluded in the left image. Since neither p nor q
has a correct match, the value of a match between p and q will receive no inhibition from
elements corresponding to correct matches, and false matches could have high values. In
such cases additional assumptions must be made to correctly find occluded areas. The
ordering constraint could be one such assumption that may label these correctly as
occluded. However, a tradeoff exists; enforcing the ordering constraint could in turn lead
to other pixels being mislabeled as occluded. Due to this tradeoff we have chosen not to
enforce the ordering constraint.
In general, provided mutually occluded areas within the disparity range do not
have similar intensities, all match values corresponding to occluded pixels will be small.
After the match values have converged, we can determine if a pixel is occluded by
finding the element with the greatest match value along its line of sight. If the maximum
match value is below a threshold the pixel is labeled as occluded.
Figure 2: (a) If q is not occluded there is a correct match with p' which will inhibit the false match with p; (b) If q is occluded it
is possible for a false match to occur with p.
4. Summary of Algorithm
The cooperative algorithm is now summarized as follows:
1. Prepare a 3D array of match values L(r, c, d): (r, c) for each pixel in the reference image
and d for the range of disparity.
2. Set initial match values L_0 using a function of image intensities, such
as normalized correlation or squared differences.
3. Iteratively update match values L n using (5), until the match values
converge.
4. For each pixel (r, c), find the element (r, c, d) with the maximum
match value.
5. If the maximum match value is higher than a threshold, output the
disparity d, otherwise classify it as occluded.
The running time for steps 1 through 5 is on the order of N²·D·I, where N² is the
size of the image, D is the range of disparities, and I is the number of iterations. The
amount of memory needed is on the order of N²·D. In practice the algorithm takes about
8 seconds per iteration with 256x256 images on a SGI Indigo 2ex.
5. Experimental Results
To demonstrate the effectiveness of our algorithm we have applied it to several
real and synthetic images. The input images are rectified. Initial match values are set by
using the squared difference of image intensities for each pixel. The squared difference
values were linearly adjusted so that their values distribute between 0 and 1. The
threshold for detecting occlusions was set constant for all image pairs at 0.005.
5.1 Random Dot Stereogram
Figure
3(a) and (b) present a synthetic random dot image pair with random noise.
A sinusoidal repetitive pattern is also inserted for part of the image to make it more
difficult. The disparity map shown in figure 3(c) has step-function as well as curved
disparities. The algorithm was run with three different sizes of local support (3x3x3,
5x5x3 and 7x7x3.) Table 1 shows the performance summary after 10 iterations.
Approximately 99% of the disparity values were found correctly for each size of local
support area. Pixels labeled occluded in the true disparity map are not used in computing
the disparity errors. A disparity is labeled as correct if it is within one pixel of the correct
disparity. It is worth noting that at the beginning of iteration one, only 35% of the
maximum initial matches L 0 that were computed using a local image intensity similarity
measure were correct. As observed in Figure 3(d), the disparity errors mainly occur
within the repetitive texture and at disparity discontinuities. We found, however, that if
enough iterations are completed, incorrect disparities due to repetitive textures are
completely removed [30]. Of the detected occlusions, 81% to 97% were indeed
occlusions and 58% to 80% of the true occlusions were found depending on local support
area size. Occlusions created by the three vertical bars, which violate the ordering
constraint, were found correctly. The inhibition constant a controls the convergence
properties of the algorithm. Figure 4 illustrates the convergence properties for different
values of a. Higher values for the inhibition constant lead to slightly faster convergence
with a minimal loss of accuracy.
5.2 U. of Tsukuba Data with Ground Truth
The University of Tsukuba's Multiview Image Database provides real stereo
image pairs with ground truth data. The ground-truth data allows us to do a quantitative
comparison between our method and others. Figure 5(a), (b) and (c) show a stereo
image pair from the University of Tsukuba data with a ground-truth disparity map. In
this stereo pair, 59% of the maximum initial match values L 0 were correct. We tested our
algorithm using three different sizes of local support (3x3x3, 5x5x3, and 7x7x3) with the
inhibition constant set to 2. After 15 iterations, as shown in Table 2, at least 97% of the
disparities were found correctly over the range of local support area sizes. The best result
is a 1.98% disparity error for a 5x5x3 local support area. Most errors occurred around
less-textured object boundaries. Approximately 60% of the occlusions detected were
correct with 50% of the true occlusions found.
We allowed the match values to completely converge using 80 iterations. The
resulting disparity map is shown in Figure 5(d). Table 3 shows a detailed analysis of
correct and erroneous matches in the obtained disparity map. Of the 84,003 pixels
labeled non-occluded in the ground truth data, 82,597 pixels had the correct disparity,
1,121 had incorrect disparities and 285 were labeled as occluded using our algorithm. Of
the 1,902 pixels labeled occluded in the ground truth data, our algorithm labeled 860
correctly as occluded and 1,042 incorrectly as non-occluded. Ignoring the occlusion
labeling, of the 84,003 pixels labeled non-occluded in the ground truth data 1.44% had
incorrect disparity values of greater than one pixel using our algorithm. Table 4 shows a
comparison of various stereo algorithms on the U. of Tsukuba data. The GPM-MRF
algorithm [4] had approximately twice as many errors. The results of more standard
algorithms also provided by [4], had an error rate of 9.0% for LOG filtered L 1 and 10.0%
for normalized correlation. The University of Tsukuba group has obtained the best
results so far using multiple images (more than two) and camera masking [19] with errors
of 0.3% for 25 images and 0.9% for 9 images. The error results for the camera masking
method are evaluated on fewer pixels since the chance of a pixel being occluded
increases with the number of camera angles used.
5.3 CMU Coal Mine Scene
Figure
6 presents the stereo image pair of the "Coal Mine" scene and the
processing results. For comparison, the multi-baseline method [21] using sums of
squared differences and the adaptive window method [12] are applied to the image set.
The multi-baseline result (Figure 6 (f)) that uses three input images is clearly the noisiest
of the three. The result of the adaptive window approach (Figure 6 (g)) also using three
images is smooth in general, but a few errors remain. In particular, the small building
attached to the tower in the center of the image is not well delineated and the slanted roof
in the upper corner of the scene is overly smoothed. For our approach we used
normalized correlation within a 3x3 window to create the initial match values instead of
squared differences, since intensity values varied between the input images. The results
in Figure 6(c) are smooth while recovering several details at the same time. The slanted
roof of the lower building and the water tower on the rooftop are clearly visible. Depth
discontinuities around the small building attached to the tower are preserved. 15
iterations were used and the inhibition constant was set to 2.
Figure 3: Synthetic scene, 50% density; (a) Reference (left) image; (b) Right image; (c) True disparity map,
black areas are occluded; (d) Disparity map found using 3x3x3 local support area, black areas are
detected occlusions.
Random Dot Stereogram
Area (RxCxD)   Disparities Correct (%)   Detected Occlusions Correct (%)   True Occlusions Found (%)
3x3x3          99.44                     97.11                             79.61
5x5x3          99.29                     95.41                             71.05
7x7x3          98.73                     81.10                             58.42
Table
1: The percentage of disparities found correctly, the percentage of the detected occlusions that are
correct and the percentage of the true occlusions found for three different local support area sizes using the
random dot stereo pair.
Figure
4: Convergence rate for inhibition constant a of 1.5, 2 and 4 over 20 iterations using the random dot
stereogram.
Figure
5: Head scene provided by University of Tsukuba: (a) Reference (left) image; (b) Right image; (c)
Ground truth disparity map with black areas occluded, provided courtesy of U. of Tsukuba; (d) Disparity
map found using our algorithm with a 5x5x3 local support area, black areas are detected occlusions. The
match values were allowed to completely converge. Disparity values for narrow objects such as the lamp
stem are found correctly.
U. of Tsukuba Stereo Image Pair
Area (RxCxD)   Disparities Correct (%)   Detected Occlusions Correct (%)   True Occlusions Found (%)
5x5x3          98.02                     66.58                             51.84
7x7x3          97.73                     63.23                             44.85
Table
2: The percentage of disparities found correctly, the percentage of the detected occlusions that are
correct and the percentage of the true occlusions found for three different local support area sizes using the
U. of Tsukuba stereo pair.
Confusion matrix for the disparity map obtained from U. of Tsukuba data.
                                   Ground Truth Occluded   Ground Truth Non-occluded   Total
Labelled Occluded                          860                       285               1,145
Labelled Non-occluded                    1,042
    correct disparity                                              82,597
    incorrect disparity                                             1,121
Total                                    1,902                    84,003              85,905
Table
3: The number of occluded and non-occluded pixels found using our algorithm compared to the
ground truth data provided by University of Tsukuba. A 5x5x3 area was used for the local support and the
disparity values were allowed to completely converge.
Algorithm                            Error rate (%) (disparity error > 1 pixel, non-occluded pixels)
Zitnick and Kanade                   1.44
GPM-MRF [4]                          approx. twice our error rate
LOG-filtered L1 [4]                  9.0
Normalized correlation [4]           10.0
Nakamura et al. [19] (25 images)     0.3
Nakamura et al. [19] (9 images)      0.9
Table
4: Comparison of various algorithms using the ground truth data supplied by University of Tsukuba.
Error rates of greater than one pixel in disparity are for pixels labeled non-occluded in the ground truth
data. GPM-MRF [4] has approximately twice the error rate of our algorithm. LOG-filtered L 1 and
Normalized correlation are supplied for comparison to more conventional algorithms. The University of
Tsukuba group provides their results using a 3x3 and 5x5 camera array. The error results for their method
use fewer pixels since the chance of a pixel being occluded increases with the number of camera angles
used.
Figure 6: Coal mine scene; (a) Reference (left) image; (b) Right image; (c) Disparity map obtained by using
proposed method with a 3x3x3 local support area, black areas are detected occlusions; (d) Real oblique
view of the coal mine model; (e) Isometric plot of the disparity map of Figure 6(c); (f) Isometric plot of the
disparity map using multi-baseline stereo with three images as presented in [21]; (g) Isometric plot of the
disparity map using multi-baseline stereo with adaptive window with three images as presented in [12].
6. Conclusion
One of the important contributions of Marr and Poggio [15], [16], in addition to
the cooperative algorithm itself, is that they insisted the explicitly stated assumptions be
directly reflected in their algorithm. Many other stereo algorithms in contrast do not state
assumptions explicitly or the relationship between the assumptions and the algorithm is
unclear. In following Marr and Poggio's positive example we have attempted to directly
reflect the continuity and uniqueness assumptions in our algorithm. To find a continuous
surface, support is diffused among neighboring match values within a 3D area of the
disparity space. A unique match is found by inhibition between match values along
similar lines of sight. Additionally, after the values have converged, occlusions can be
explicitly identified by examining match value magnitudes.
As demonstrated by using several synthetic and real image examples, the resulting
disparity map is smooth and detailed with occlusions detected. The quantitative results
obtained using the ground truth data supplied by the University of Tsukuba demonstrate the
improvement of our algorithm over other current algorithms.
7. Acknowledgements
We would like to thank Dr. Y. Ohta and Dr. Y. Nakamura for supplying the
ground truth data from the University of Tsukuba.
--R
"Depth from edge and intensity based stereo,"
"Stereo Vision,"
"A bayesian treatment of the stereo correspondence problem using half-occluded regions,"
"Markov Random Fields with Efficient Approximations,"
"A space-sweep approach to true multi-image matching,"
"A parallel stereo algorithm that produces dense depth maps and preserves image features,"
Photogrammetric Standard Methods and Digital Image Matching Techniques for High Precision Surface Measurements.
"Occlusions and Binocular Stereo,"
"Computational experiments with a feature based stereo algorithm,"
"Incorporating intensity edges in the recovery of occlusion regions,"
"Disparity-space images and large occlusion stereo,"
"A stereo matching algorithm with an adaptive window: Theory and experiment,"
"Computer determination of depth maps,"
"A parallel binocular stereo algorithm utilizing dynamic programming and relaxation labelling,"
"Cooperative computation of stereo disparity,"
"A computational theory of human stereo vision,"
"Robot spatial perception by stereoscopic vision and 3D evidence grids,"
"An iterative prediction and correction method for automatic stereo comparison,"
"Occlusion detectable stereo - - Occlusion patterns in camera matrix,"
"Stereo vision for robotics,"
"A multiple-baseline stereo,"
"Stereo by intra- and inter-scanline search using dynamic programming,"
"A flexible approach to digital stereo mapping,"
"The Marr and Poggio algorithm for real scenes was not defined so any implementation will be a change; however the algorithm worked well on random-dot stereograms and there are several papers to support this."
"Pmf: A stereo correspondence algorithm using a disparity gradient limit,"
"Detection of binocular disparities,"
"Stereo matching with nonlinear diffusion,"
"Stereo matching with transparency and matting,"
"Realities of automatic correlation problem,"
"A volumetric iterative approach to stereo matching and occlusion detection,"
--TR
--CTR
Zheng-dong Liu , Ying-nan Zhao , Jing-yu Yang, Fast stereo matching method using edge traction, Intelligent information processing II, Springer-Verlag, London, 2004
S. Someya , K. Okamoto , G. Tanaka, 3D Shape Reconstruction from Synthetic Images Captured by a Rotating Periscope System with a Single Focal Direction, Journal of Visualization, v.6 n.2, p.155-164, April
Ayoub K. Al-Hamadi , Robert Niese , Axel Panning , Bernd Michaelis, Toward robust face analysis method of non-cooperative persons in stereo color image sequences, Machine Graphics & Vision International Journal, v.15 n.3, p.245-254, January 2006
Omni-Directional Stereoscopic Images from One Omni-Directional Camera, Journal of VLSI Signal Processing Systems, v.42 n.1, p.91-101, January 2006
Philip Kelly , Noel E. O'Connor , Alan F. Smeaton, Pedestrian detection in uncontrolled environments using stereo and biometric information, Proceedings of the 4th ACM international workshop on Video surveillance and sensor networks, October 27-27, 2006, Santa Barbara, California, USA
Heiko Hirschmller , Peter R. Innocent , Jon Garibaldi, Real-Time Correlation-Based Stereo Vision with Reduced Border Errors, International Journal of Computer Vision, v.47 n.1-3, p.229-246, April-June 2002
Qiuming Luo , Jingli Zhou , Shengsheng Yu , Degui Xiao, Stereo matching and occlusion detection with integrity and illusion sensitivity, Pattern Recognition Letters, v.24 n.9-10, p.1143-1149, 01 June
Steven M. Seitz , Jiwon Kim, The Space of All Stereo Images, International Journal of Computer Vision, v.48 n.1, p.21-38, June 2002
Minglun Gong , Yee-Hong Yang, Genetic-Based Stereo Algorithm and Disparity Map Evaluation, International Journal of Computer Vision, v.47 n.1-3, p.63-77, April-June 2002
L.-Q. Xu , B. Lei , E. Hendriks, Computer Vision for a 3-D Visualisation and Telepresence Collaborative Working Environment, BT Technology Journal, v.20 n.1, p.64-74, January 2002
Michael Bleyer , Margrit Gelautz, Graph-cut-based stereo matching using image segmentation with symmetrical treatment of occlusions, Image Communication, v.22 n.2, p.127-143, February, 2007
Gustavo Olague , Francisco Fernndez , Cynthia B. Prez , Evelyne Lutton, The Infection Algorithm: An Artificial Epidemic Approach for Dense Stereo Correspondence, Artificial Life, v.12 n.4, p.593-615, October 2006
Olga Veksler, Dense Features for Semi-Dense Stereo Correspondence, International Journal of Computer Vision, v.47 n.1-3, p.247-260, April-June 2002
Tim Burkert , Jan Leupold , Georg Passig, A photorealistic predictive display, Presence: Teleoperators and Virtual Environments, v.13 n.1, p.22-43, February 2004
Gang Li , Steven W. Zucker, Contextual Inference in Contour-Based Stereo Correspondence, International Journal of Computer Vision, v.69 n.1, p.59-75, August 2006
Sang Hwa Lee , Yasuaki Kanatsugu , Jong-Il Park, MAP-Based Stochastic Diffusion for Stereo Matching and Line Fields Estimation, International Journal of Computer Vision, v.47 n.1-3, p.195-218, April-June 2002
C. Lawrence Zitnick , Sing Bing Kang, Stereo for Image-Based Rendering using Image Over-Segmentation, International Journal of Computer Vision, v.75 n.1, p.49-65, October 2007
Changming Sun, Fast Stereo Matching Using Rectangular Subregioning and 3D Maximum-Surface Techniques, International Journal of Computer Vision, v.47 n.1-3, p.99-117, April-June 2002
Sing Bing Kang , Richard Szeliski, Extracting View-Dependent Depth Maps from a Collection of Images, International Journal of Computer Vision, v.58 n.2, p.139-163, July 2004
Myron Z. Brown , Darius Burschka , Gregory D. Hager, Advances in Computational Stereo, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.8, p.993-1008, August
Daniel Scharstein , Richard Szeliski, A Taxonomy and Evaluation of Dense Two-Frame Stereo Correspondence Algorithms, International Journal of Computer Vision, v.47 n.1-3, p.7-42, April-June 2002 | stereo vision;3D vision;occlusion detection |
628789 | Combining Belief Networks and Neural Networks for Scene Segmentation. | We are concerned with the problem of image segmentation, in which each pixel is assigned to one of a predefined finite number of labels. In Bayesian image analysis, this requires fusing together local predictions for the class labels with a prior model of label images. Following the work of, we consider the use of tree-structured belief networks (TSBNs) as prior models. The parameters in the TSBN are trained using a maximum-likelihood objective function with the EM algorithm and the resulting model is evaluated by calculating how efficiently it codes label images. A number of authors have used Gaussian mixture models to connect the label field to the image data. In this paper, we compare this approach to the scaled-likelihood method of where local predictions of pixel classification from neural networks are fused with the TSBN prior. Our results show a higher performance is obtained with the neural networks. We evaluate the classification results obtained and emphasize not only the maximum a posteriori segmentation, but also the uncertainty, as evidenced e.g., by the pixelwise posterior marginal entropies. We also investigate the use of conditional maximum-likelihood training for the TSBN and find that this gives rise to improved classification performance over the ML-trained TSBN. | Introduction
We are concerned with the problem of image segmentation in which each pixel will be assigned to one out of
a finite number of classes. Our work is applied to images of outdoor scenes, with class labels such as "sky",
"road", "vegetation", etc. Such scenes are typically complex, involving many different objects, and some of
these objects can be highly variable (e.g. trees). This means that model-based approaches are not readily
applicable.
Much work on scene segmentation has been based on approaches which first segment the whole image
into regions, and then classify each region. To carry out this task successfully it is important to classify
each region using not only its own attributes, but also to take account of the context of the other regions.
Taking account of context can be handled in two ways; either by searching for a consistent interpretation of
the whole scene, or by taking account of the local context in which a region finds itself [16]. Examples of the
whole-scene method are [34, 29], while the local-context method is used in [50, 51]. A major problem for these
approaches is that the process of region creation can be unreliable, leading to under- or over-segmentations.
An alternative approach allows the segmentation to emerge along with the classification process. This
can be formulated in the Bayesian framework, using a prior model which represents our knowledge about
the likely patterns of labels in the image, and a likelihood function which describes the relationship of
the observations to the class labels 1 . Two main types of prior model have been investigated, called non-causal
Markov random fields (MRFs) and causal MRFs in the statistical image modelling literature. In
the graphical models community these two types of models are known as undirected and directed graphical
models respectively [21].
Early work on Bayesian image modelling concentrated on non-causal MRFs, see e.g. [3, 13, 27]. One
disadvantage of such models is that they suffer from high computational complexity, for example the problem
of finding the maximum a posteriori (MAP) interpretation given an image is (in general) NP-hard.
The alternative causal MRF formulation uses a directed graph. The most commonly used form of these
models is a tree-structured belief network (TSBN) structure, as illustrated in Figure 1. For image modelling
the standard dependency structure is that of a quadtree. One attractive feature of the TSBN model is its
hierarchical multiscale nature, so that long-range correlations can readily be induced. By contrast, non-causal
MRFs typically have a "flat", non-hierarchical structure. Also, we shall see that inference in TSBNs
can be carried out in a time linear in the number of pixels, using a sweep up the tree from the leaves to the
root, and back down again. In the graphical models literature this inference procedure is known as Pearl's
message-passing scheme [35]. This algorithm is also known as the upward-downward algorithm [40, 24],
being a generalisation to trees of the standard Baum-Welch forward-backward algorithm for HMMs (see e.g.
[37]). One disadvantage of TSBNs is that the random field is non-stationary. For example, in Figure 1, the
common parent of the fourth and fifth pixels from the left in X^L is the root node, whilst the third and
fourth pixels share a parent in the layer above. This can give rise to "blocky" segmentations.
TSBN models have been used by a number of authors for image analysis tasks. Bouman and Shapiro [5]
introduced such a model using discrete labels in the nodes for an image segmentation task. Perez et al [36]
have discussed MAP and MPM (maximum posterior marginal) inference on TSBNs for image processing
1 Some early work in this direction was carried out by Feldman and Yakimovsky [10].
tasks. Laferte et al [19, 20] have extended this model using a multiscale feature pyramid image decomposition
and by using the EM algorithm for parameter estimation. Cheng and Bouman [6] have investigated trainable
multiscale models, using decision trees to compactly represent conditional probability tables (CPTs) in their
model.
TSBN models can also be used for continuously-valued Gaussian processes in one and two dimensions.
These are the generalisation of Kalman lter models from chains to trees. They have been studied by a
number of groups, notably Prof. Willsky's group at MIT, who have developed the theory and derived fast
algorithms for problems such as optical flow estimation, surface reconstruction and texture segmentation
[2, 26, 11, 25]; see also [17]. Also, Crouse et al [7] have used a multiscale TSBN to model wavelet coefficients,
and DeBonet and Viola [9] have used an interesting tree-structured network for image synthesis using non-Gaussian
densities. However, we require a prior over class-label images, and so the work on TSBNs with
discrete-valued nodes is the most germane for our purposes.
As mentioned above, exact inference procedures for non-causal MRFs are, in general, NP-hard. However,
we note that there has been important recent work on approximate inference procedures for such graphs,
using local probability propagation schemes [46]. These local schemes (which are guaranteed to give the
correct answer on graphs without loops, see [35]) give only approximate answers for the grid-structured
graphs typically used in image analysis. Other work on message-passing schemes in \loopy" graphs includes
the decoding of error-correcting codes [12]. The work in [46] can be criticised on the basis that "flat" (i.e.
non-hierarchical) non-causal MRF models are used, although it should be possible to apply similar message
passing schemes on loopy hierarchical graphs (see section 7 for further discussion).
Our work is carried out on a database of images of outdoor scenes (see Section 4 for details) for which both
colour images and the corresponding label images are available. This paper makes a number of contributions:
We investigate the effects of the adaptation (or training) of the parameters of the tree in response to
data. Two methods are considered. In the first the tree is trained to maximise the probability of the
labelled images; we call this maximum likelihood (ML) training. This is similar to the work of Laferte
et al in [20], but we allow a non-stationary parameterisation of the tree which reflects regularities in
the image database. In the second method the tree is trained to maximise the probability of the correct
label image given the raw input image; we call this conditional maximum likelihood (CML) training.
The quality of the ML-trained TSBN model has been evaluated by comparing how well it codes a test
set of label images. This performance is compared with a number of other coding schemes including the
JPEG-LS lossless codec. We also provide an analysis of TSBN coding which allows us to quantify the
benefits of using higher levels of the tree (which correspond to longer-range correlations).
The TSBN comprises the prior aspect of the Bayesian model, and we also require a likelihood term
whereby the image data influences the segmentation. The direct approach is to produce a generative
model of the probability density of pixel features given the image class. For example in [5] the class
conditional densities were modelled using Gaussian mixture models. We compare this approach with
an alternative one, where neural networks are used to make local predictions given the pixel features.
These predictions can then be combined with the prior in a principled manner using the scaled-likelihood
method (see Section 2.2 for further details). The effects of combining these methods with
the ML and CML trained trees are also investigated. We evaluate the performance of the segmentation
algorithms not only using pixelwise classification rates, but also through the analysis of posterior
pixelwise entropies and through the conditional probabilities of the label images given the colour
images.
The remainder of this paper is organised as follows: In Section 2 we describe the TSBN model and the
scaled-likelihood method in more detail, and explain how the inference can be carried out. In Section 3 we
derive equations for training the TSBN using maximum likelihood (ML) and conditional maximum likelihood
(CML) methods, and image coding using TSBNs. In Section 4 we give details on the data and the training
of various models. In Section 5 results are presented concerning image coding and the estimation of the
CPTs, and in Section 6 we analyse the classification results.
2 Belief network and neural network models for image segmentation
2.1 Generative model
Our model for the data is illustrated in Figure 1. The observed data Y is assumed to have been generated
from an underlying process X, where X is a TSBN. The network is arranged in layers. At the highest level
(level 0) there is one node X^0, which has children in level 1. The lower levels are denoted X^1, X^2 and so
on, down to X^L in level L. The fundamental property of belief networks is their encoding of conditional
independences [35, 15]. The layered structure means that the distribution of X^n, for 1 ≤ n ≤ L, given all
coarser scale nodes is only dependent on X^{n-1}. Indeed, the tree-structure of the network means that any
node in X^n is only dependent on a single node in X^{n-1}. Typically in our experiments each parent node has
four children, giving rise to a quadtree-type architecture.
Each node is a multinomial variable, taking on one of C class labels. These labels are those used for the
segmentation, e.g. road, sky, vehicle etc.² The links between the nodes are defined by conditional probability
tables (CPTs). As it does not have any parent, the root node has an unconditional prior distribution instead
of a CPT.
The nodes in X L have a one-to-one correspondence with the observed data Y . (The observed data
in our case will be features derived from blocks of pixels, rather than individual pixels in the raw image.)
The model for the observations illustrated in Figure 1 is that the observation Y_i is dependent only on the
corresponding variable X^L_i, so that

    P(Y | X^L) = Π_i P(Y_i | X^L_i).
2 Note that it is not necessary for the hidden nodes to be C-class multinomial variables. We have used this for convenience,
and because it gives rise to a simple initialisation of the TSBN-training, as described in Section 4.5.
Figure 1: A 1-D graphical model illustrating a small tree-structured belief network. The layers in X are
denoted X^0, X^1 and so on, down to X^L; Y denotes the raw image information.
2.2 Likelihood model
Above we have described a fully generative model for Y, defined by CPTs from the root downwards. Since
each pixel is composed of a number of components in feature space, P(Y_i | X^L_i) is a density function over
this feature space. One method is to use a Gaussian mixture density, as used in [5] and described in section 4.4.
However, the database we are using provides both the raw data Y and labelled segmentations X^L.
Given this information it is natural to train classifiers (e.g. neural networks) to predict X^L_i from Y_i. It
is well known that neural networks trained with various error functions (including cross-entropy and mean-squared
error) approximate the posterior probabilities P(X^L_i | Y_i). Fusion of these predictions into the
belief network for X cannot then be achieved immediately, as this requires P(Y_i | X^L_i)
terms. However, by using Bayes' theorem we obtain

    P(Y_i | X^L_i) = P(X^L_i | Y_i) P(Y_i) / P(X^L_i).

For inference on X, the data Y is fixed and so the factors P(Y_i) need not be considered further. Defining
the scaled likelihood L(X^L_i | Y_i) at location i as

    L(X^L_i | Y_i) = P̂(X^L_i | Y_i) / P̂(X^L_i),

we obtain

    P(X^L | Y) ∝ P(X^L) Π_i L(X^L_i | Y_i).

By replacing the exact scaled likelihood by its estimate P̂(X^L_i | Y_i)/P̂(X^L_i) we obtain a principled method for the fusion of
the local predictions P̂(X^L_i | Y_i) with the global prior model P(X). (The notation P̂ denotes that
these quantities are estimates of the desired probabilities.) This method of combining neural networks
with belief networks has been suggested (for HMMs) in Smyth [42] and Morgan and Bourlard [31]. There is
an interesting connection between the scaled likelihood and the mutual information I(X_i; Y_i) between the
random variables X_i and Y_i, where I(X_i; Y_i) = E[log(P(X_i | Y_i)/P(X_i))]. The mutual information is the
expected value of the log scaled likelihood.
As described above, we would need to have and train a separate neural network to predict P(X^L_i | Y_i)
for each pixel. This is clearly undesirable, both in terms of the computational effort and in the amount of
data required. The solution we have adopted is to train just one neural network, but to make the position
coefficients of the pixel be a part of the inputs to the neural network.
To use the scaled likelihood, we require not only P̂(X^L_i | Y_i) but also P̂(X^L_i). In our images it will turn
out that P(X_i) should depend on the position of pixel i, as we know that in the ensemble of images
there are regularities such as the sky appearing at the top. There are a number of ways to approach the
estimation of P(X_i). One is to train a network to predict the class labels given the position of the pixel. An
alternative would be to use the relationship P(X_i) = ∫ P(X_i | Y_i) p(Y_i) dY_i and to approximate the integral
with an average over appropriate feature vectors. In our experiments (see Section 6) we compared a spatially
uniform estimate of P(X_i) with one derived from the pixelwise marginals of the ML-trained TSBN; the results obtained
were very similar.
A potential advantage of the scaled-likelihood method is that the generative model for P (Y i jX i ) may be
quite complex, although the predictive distribution P (X i jY i ) is actually quite simple. This means that the
generative approach may spend a lot of resources on modelling details of P (Y i jX i ) which are not particularly
relevant to the task of inferring X.
2.3 Inference
Given a new image we wish to carry out inference on X^L, given the probabilistic model. Computing
the full posterior P(X^L | Y) would be highly expensive, as it would require enumerating all possible
C^K states in X^L (where K is the number of leaf nodes). There are two alternatives that are computationally
feasible, (i) the computation of the posterior marginals P(X^L_i | Y), giving rise to a segmentation based on
maximum posterior marginals (MPM) and (ii) the MAP interpretation of the data x* = arg max_x P(x | Y)³. These can
be achieved by Pearl's λ-π message passing schemes, as described in [35]. These schemes are non-iterative
and involve one upward and one downward pass through the tree. Details are given of the MPM computation
in Appendix A, along with a method for scaling the calculation to avoid underflow, and the computation of
the marginal likelihood P(x^L).
3 Training the TSBN
Above it was assumed that the parameters used to define P(X) are known. In fact we estimated these from
training data. Let θ denote these parameters, which are the prior probabilities of the root node and the
CPTs in the tree. Let x_il, l = 1, …, C, denote the possible values of X_i, and let pa_ik, k = 1, …, C, denote
the set of possible values taken on by Pa_i, the parent of X_i. The parameter θ_ikl denotes the CPT entry
P(X_i = x_il | Pa_i = pa_ik), with Σ_l θ_ikl = 1. For simplicity the symbols X_i and Pa_i are dropped, and
the probability is written as P(x_il | pa_ik).
For training the prior model it is assumed that a number of observation images y^m and associated labelled
images x^{Lm} are available, where m = 1, …, M is the index to the images in the training set. Let φ denote
the parameters in the likelihood model P(y | x; φ).
We discuss in turn maximum likelihood training (§3.1) and conditional maximum likelihood training
(§3.2).
3 Note that finding the configuration x* is not likely to be equivalent to finding the configuration that maximises the posterior marginals P(X^L_i | Y) at each node.
3.1 Maximum likelihood
In maximum likelihood training the parameter vector (θ, φ) is estimated so that

    (θ, φ)_ML = arg max_{θ, φ} Π_m P(x^{Lm}, y^m | θ, φ) = arg max_{θ, φ} Π_m P(y^m | x^{Lm}; φ) P(x^{Lm} | θ).

We can see that the likelihood model parameters φ and the TSBN model parameters θ can be estimated
separately by choosing the likelihood model parameters to maximise Π_m P(y^m | x^{Lm}; φ) and the prior model
parameters to maximise Π_m P(x^{Lm} | θ). Assuming that the likelihood model is fixed, we obtain

    θ̂_ML = arg max_θ Π_m P(x^{Lm} | θ).

The optimisation can be carried out using the EM algorithm. It uses bottom-up
and top-down message passing to infer the posterior probabilities of the hidden nodes in the E-step, and
then uses the expected counts of the transitions to re-estimate the CPTs [40, 24, 21].
The re-estimation formulas can be derived directly by maximising Baum's auxiliary function Q(θ, θ̃) (θ̃ is the
new estimated parameter vector), where x^m_h denotes the "hidden" variables x^m \ x^{Lm} on pattern m. The update for each entry in a CPT is given by

    θ̃_ikl = Σ_m P(x_il, pa_ik | x^{Lm}, θ) / Σ_m Σ_{l'} P(x_il', pa_ik | x^{Lm}, θ).

The joint probability P(x_il, pa_ik | x^{Lm}, θ) can be obtained locally using the λ-π message passing scheme (see
Appendix A for details).
This update gives a separate update for each link in the tree. Given limited training data this is un-
desirable. If the set of variables sharing a CPT is denoted as X_I, then the EM parameter update is given
by

    θ̃_Ikl = Σ_m Σ_{i: X_i ∈ X_I} P(x_il, pa_ik | x^{Lm}, θ) / Σ_m Σ_{i: X_i ∈ X_I} Σ_{l'} P(x_il', pa_ik | x^{Lm}, θ).
If only the Y information is available, one can still carry out maximum likelihood training of the model;
both parameter sets θ and φ would be adapted in this case. This is known as unsupervised learning, and is
described for TSBNs in [20]. A disadvantage of the scaled-likelihood method is that it cannot be used for
unsupervised learning as P (Y ) is not available.
In [5] (Section III C), it appears that the parameters are re-estimated on the test image, which is a departure
from the standard pattern recognition methodology, where model parameters are estimated from training
data and then fixed when applied to test images. We follow the standard methodology.
3.2 Conditional maximum likelihood
In the CML procedure, the objective is to predict correctly the labels x^L associated with the evidence y.
The parameters are then estimated by maximising the probability of the correct labelling given the evidence
y,

    (θ, φ)_CML = arg max_{θ, φ} Π_m P(x^{Lm} | y^m; θ, φ) = arg max_{θ, φ} Π_m P(x^{Lm}, y^m | θ, φ) / P(y^m | θ, φ).     (8)

By analogy to the Boltzmann machine, we observe that computing the conditional probability requires
computation of (1) the probability P(x^{Lm}, y^m | θ, φ) in the clamped phase (i.e. with x^{Lm} and y^m fixed), and
(2) the probability P(y^m | θ, φ) in the free-running phase (with only y^m fixed)⁴.
Below we assume that the likelihood model P(y | x; φ) is fixed, so that the objective function is viewed
as a function of θ only. To carry out the optimisation in Equation 8 we take logarithms and define

    L_c(θ) = Σ_m log P(x^{Lm}, y^m | θ),     L_f(θ) = Σ_m log P(y^m | θ).

Here we have used the subscripts c and f to mean "clamped" and "free". Using this decomposition, the
objective can be further simplified as

    L(θ) = Σ_m log[ P(x^{Lm}, y^m | θ) / P(y^m | θ) ] = L_c(θ) − L_f(θ).     (11)

Then, to find θ̂_CML in Equation 8 we need to maximise L(θ) = L_c(θ) − L_f(θ).
4 The use of the terminology clamped and free-running here follows that in [18].
Unfortunately the EM algorithm is not applicable to the CML estimation, because the CML criterion is
expressed as a rational function [14]. However, maximisation of Equation 11 can be carried out in various
ways based on the gradient of L(θ). In speech analysis [18, 39], methods based on gradient ascent have been
used. The scaled conjugate gradient optimisation algorithm [30, 4] was used in our work. To use this search
method we need to calculate the gradient of L(θ) w.r.t. θ.
Letting m_ikl = Σ_m P(x_il, pa_ik | y^m; θ) and n_ikl = Σ_m P(x_il, pa_ik | x^{Lm}; θ), it can be shown (see Appendix B
for details) that

    ∂L(θ)/∂θ_ikl = (n_ikl − m_ikl) / θ_ikl,

where m_ikl and n_ikl can be obtained by propagating y^m and x^{Lm} respectively, see Equation 23.
When maximising L(θ) it must be ensured that the probability parameters remain positive and properly
normalised. The softmax function is used to meet these constraints. We define

    θ_ikl = exp(z_ikl) / Σ_{l'} exp(z_ikl'),

where the z_ikl's are the new unconstrained auxiliary variables and θ_ikl always sums to one over the l index by
construction. The gradients w.r.t. z_ikl can be expressed entirely in terms of θ_ikl, m_ikl and n_ikl,

    ∂L(θ)/∂z_ikl = (n_ikl − m_ikl) − θ_ikl Σ_{l'} (n_ikl' − m_ikl').
3.3 Image coding
A TSBN provides a generative, probabilistic model of label images. We can evaluate the quality of a
model for the label process by evaluating the likelihood of a test set of label images under the model.
By calculating −log₂ P(x^L)/(#labelled pixels in the image) we obtain the coding cost in bits/pixel. The
minimum attainable coding cost is the entropy (in bits/pixel) of the generating process. As the computation
of P(x^L) is intractable in MRF models, we compare the TSBN results to those from the lossless JPEG-LS
codec [44, 45], available from http://www.hpl.hp.com/loco/.
3.3.1 Coding cost under a TSBN model
Using a TSBN to model the distribution of images, the marginal likelihood of a label image x^L can be
calculated efficiently at the root node X^0 of the tree (see Appendix A.2).
Below we will also consider the effect of truncating the tree at a level below the root of the tree. In this
case, instead of having one large tree, the image model consists of a number of smaller trees (and correlations
between the different trees are then ignored). This allows us to quantify the benefits of using the higher
levels in the tree, which correspond to longer-range correlations. The priors for each of the smaller trees are
calculated by propagating the root prior through the downward CPTs to obtain a prior for each root. The
likelihood of an image under the truncated model is simply the product of the likelihoods of the subimages
as computed in each of the smaller trees.
4 Experimental details
4.1 Data
Colour images of out-door scenes from the Sowerby Image Database 5 of British Aerospace are used in our
experiments. The database contains both urban and rural scenes. They feature a variety of every-day
objects from roads, cars and houses to lanes and fields in various places near Bristol, UK. All the scenes
were photographed using small-grain 35mm transparency film, under carefully controlled conditions. Each
image of the database has been digitised with a calibrated scanner, generating a high quality 24-bit colour
representation.
Both colour images and their corresponding label images are provided in the database. The label images
were created by over-segmenting the images, and then hand labelling each region produced. There were 92
possible labels, organised in a hierarchical system. We combined labels to produce seven classes, namely
\sky", \vegetation", \road markings", \road surface", \building", \street furniture" and \mobile object".
For instance, the class \street furniture" is a combination of many types such as road sign, telegraph pole
5 This database can be made available to other researchers. Please contact Dr. Andy Wright or Dr. Gareth Rees, Advanced
Information Processing Department, Advanced Technology Centre - Sowerby, BAE SYSTEMS Ltd, PO Box 5 Filton, Bristol,
BS34 7QW, UK, email: [email protected] for details.
[Figure 2 key: Undefined, Vegetation, Road Marking, Road Surface, Building, Furniture, Mobile Object]
Figure 2: A rural and an urban scene and their hand-labelled classification. (a) Original images. (b) Hand-labelled
classification. On the right is a key describing the labels used.
and bounding object.
Figure 2a shows two scenes from the test image set of the database, one rural and one urban. Figure
2b shows their hand-labelled classifications; different grey-levels in the label image correspond to the seven
different possible classes. The original 104 images were divided randomly into independent training and
test sets of size 61 and 43 respectively. The full-resolution colour images of size 512 by 768 pixels were
downsampled into 128 by 192 regions of size 4 by 4 pixels. The label of the reduced region was chosen by
majority vote within the region, with ties being resolved by an ordering on the label categories. From now
on we will refer to the reduced label images as label images because the original label images will no longer
be used.
4.2 Feature extraction
An important step in classication is that of feature selection. Initially forty features were extracted from
each region. Among them, six features were based on the R, G, B colour components, i.e., the mean and
variance of overall intensity in the region, the colour hue angle (sine and cosine) [1], (R-B) and (2G-R-B)/2
(as used by [34]), where R, G and B indicate the means of the red, green and blue components respectively.
The texture features are the grey-level difference vectors (GLDV) textural features [48, 47] of contrast,
entropy, local homogeneity, angular second moment, mean, standard deviation, cluster shade and cluster
prominence. The GLDV features were extracted based on the absolute difference between pairs of grey
levels a given distance apart at four angles. The (x, y) location of the region was
also included in the feature space, as described in Section 2.2. Each feature was normalised by using a linear
transformation to have zero mean and unit variance over the training set.
It is useful to limit the number of features used because increasing the number of features increases the
free parameters which need to be optimised in the neural network training phase. A generalised linear model
(GLM) using the normalised features as inputs and softmax outputs [4] was used in feature selection. For
each input, the sum of the absolute values of the weights coming out of that input in the trained GLM
was calculated, and the twenty-one features for which this sum was larger than unity were retained. This
selection procedure is based on the idea that more important features will tend to give rise to larger weights
(cf the Automatic Relevance Determination idea of MacKay and Neal [32]).
4.3 MLP training
Multi-Layer Perceptrons (MLPs) were used for the task of predicting P(X^L_i | Y_i). As explained in Section 2.2,
these probabilities were estimated by an MLP that takes as input both the non-positional feature vector and
position of pixel i. The retained features produced a feature vector for each region and were fed to a MLP
with 21 input nodes, 7 output nodes and one hidden layer which was trained to classify each region into
one of the seven classes. The activation functions of the output nodes and hidden nodes were the softmax
function and tanh sigmoid functions respectively. The error function used in the training process was cross-entropy
for multiple classes (see [4]). A scaled conjugate gradient algorithm was used to minimise the error
function. The training is performed using over 51,000 regions extracted from the training image dataset,
and validating on an independent validation dataset of over 15,000 regions. The validation dataset was used
in order to choose the optimal number of hidden nodes in the MLP; eventually the best performance on the
validation set is obtained for a MLP with nodes.
The training dataset for MLP training was formed by choosing randomly up to 150 regions for each class
from each single image. By doing so we tried to use equal numbers of regions from each class in the training
set of the MLP. The aim of this rebalancing of the training set is to give the net a better chance to learn about
the infrequent classes (see [4], p 224). The probabilities for each class in the training set of the MLP, denoted
by P̃(c_k), were estimated simply by evaluating the fraction of the training set data points in
each class. The corresponding probabilities for all pixels in the whole of the training set images are denoted by
P(c_k). These turned out to be (… and 0.0070). The ordering of the classes is "sky",
"vegetation", "road markings", "road surface", "building", "street furniture" and "mobile object". These
two sets of prior probabilities are very different; P̃(c_k) is almost uniformly distributed over all classes, but
P(c_k) is biased towards classes two and four, corresponding to "vegetation" and "road surface" respectively.
Since the training set for the MLP reweights the classes according to P̃(c_k), it is necessary to consider
what effect this will have on the scaled likelihood. In fact, following [4] (p. 223) we find that

    P(c_k | y_i) = (1/Z) P̃(c_k | y_i) P(c_k) / P̃(c_k),     (14)

where y_i is the input to the network at pixel i, P̃(c_k | y_i) is the network output for class k, P(c_k | y_i) is the
compensated network output and Z is the normalising factor used to make Σ_k P(c_k | y_i) sum to one. Hence
we see that the scaled likelihood P(c_k | y_i)/P(c_k) is equal to P̃(c_k | y_i)/P̃(c_k) up to an unimportant constant.
We call P̃(c_k | y_i) the raw MLP prediction, and P(c_k | y_i) (as given by Equation 14) the compensated MLP
prediction. A segmentation can be obtained by choosing the most probable class at each pixel independently.
We call these the raw MLP and compensated MLP segmentations, when using the uncompensated and
compensated predictions respectively.
4.4 Gaussian mixture model training
In section 4.3 we have described how MLPs were trained to relate the image features to labels. The alternative
approach is to build class-conditional density estimators for each class, and to use this along with Bayes'
rule to make predictions. Following [5, 20] we have used Gaussian mixture models (GMMs) for this task.
Specifically the cluster program available from http://www.ece.purdue.edu/~bouman was used. The
same training set was used as for the MLP. We considered three different feature sets: (i) the average R, G, B
values in a region, (ii) the 21 features used to train the MLP and (iii) all 40 features. In addition two
different settings of the cluster program were used, allowing for either diagonal or full covariance matrices
for the Gaussians. The program selects the number of mixture components automatically using a MDL
criterion; the recommended initialisation of starting with three times as many components as features was
used.
The GMMs for each class were combined with the prior probabilities for each class (the P (c k )'s given in
section 4.3) to produce pixelwise classifications. The overall classification accuracies were 68.72%, 49.76%,
75.95%, 71.38%, 77.45%, 71.92% for the 3-full, 3-diag, 21-full, 21-diag, 40-full, 40-diag models respectively
on the test set. The GMM model with highest pixelwise performance (namely 40-full) was then used in
further experiments (see section 6 for further details). The number of mixture components in the 40-full
model for each of the seven classes was 9, 9, 4, 9, 8, 7, 6 respectively.
We have found that the trained GMM sometimes makes very confident misclassifications and that this
can cause underflow problems when evaluating the conditional probability P_TSBN(x^L | y), the conditional
probability of the "ground truth" labelling x^L given the image y (see section 6.4). For this reason we
replaced the likelihood term P(Y_i | X^L_i) by max(P(Y_i | X^L_i), …), where the floor is the minimum
value needed to avoid underflow in the ML-TSBN.
4.5 TSBN training
The TSBN used was basically a quadtree, except that there were six children of the root node to take into
account the 2:3 aspect ratio of the images. For a down-sampled image with a total of 128 × 192 pixels, we
took each pixel in the down-sampled image as a leaf node in a belief network and built up an eight-level
TSBN with a total of 32756 links between nodes at adjacent levels. If each link had a separate CPT, a very
large training set would be needed to ensure the CPTs were well determined, and this in turn implies that
huge computational resources could be needed in order to find a suitable minimum of the CML objective
function. In practice such an approach is clearly impractical. One technique for dimensionality reduction in
this case is to tie CPTs. In our experiments all of the CPTs on each level were constrained to be equal, except
for the transition from level 0 to level 1, where each table was separate. This flexibility allows knowledge
about the broad nature of scenes (e.g. sky occurs near to the top of images) to be learned by the network,
as is indeed reflected in the learned CPTs (see section 5).
For training the ML-TSBN, the network parameters were initialised in a number of different ways. It
was found that the highest marginal likelihood in the training data was obtained when the initial values of θ
were computed using probabilities derived from downsampled versions of the images. The sparse-data problem
appeared in the initial values of the CPTs because some parent-child pairings do not occur in the training data⁶. We
dealt with the problem by adding a small quantity to each conditional probability p(c_k | c_i) and
then normalising the modified probabilities. We have used a small value for the case that at least
one pairing occurs, and 1/C otherwise. The plot of likelihood against iteration number levelled off after …
iterations. In the database some pixels are unlabelled; assuming these values are "missing at random", we
treated them as uninstantiated nodes, which can easily be handled in a belief network framework.
The CML training was initialised at the ML-TSBN solution. The plots of conditional likelihood against
iteration number had levelled off after 44 iterations of CML training by scaled conjugate gradient optimisation
for both the GMM and MLP predictors.
The upward (λ) propagation in the tree takes around 10s, and the downward (π) propagation around
40s on a SGI R10000 processor; the tree has over 30,000 nodes.
We have made available the C++ code for TSBN training and inference, along with a MATLAB demonstration
which calls these functions at http://www.dai.ed.ac.uk/daidb/people/homes/ckiw/code/cbn.html.
4.6 Combining pixelwise predictions with trees
We now have GMM and MLP local predictors, and ML and CML trained TSBNs. This gives rise to a large
number of possible combinations of pixelwise predictors with trees. The ones that we have investigated are
6 The reason why it is important to consider this is that if a CPT entry is set to zero, the EM algorithm will not move it
away from zero during training.
1. raw GMM pixelwise predictions
2. compensated GMM pixelwise predictions (spatially-uniform compensation)
3. compensated GMM pixelwise predictions (using marginals of ML-TSBN)
4. GMM likelihood combined with the ML-trained TSBN
5. GMM likelihood combined with the CML-trained TSBN
6. raw MLP pixelwise predictions
7. compensated MLP pixelwise predictions (spatially-uniform compensation)
8. compensated MLP pixelwise predictions (using marginals of ML-TSBN)
9. MLP scaled likelihood combined with the ML- and CML-trained TSBNs
The TSBN methods calculated the scaled likelihoods as described in Section 4.3, and MAP inference was
used for the pixelwise predictions. For entries 3 and 8 (compensation using the marginals of the ML-TSBN)
note that different compensation probabilities are used in the six regions of the image defined by the six
CPTs from the root to level 1. The performance of these methods is investigated in sections 5 and 6.
5 Results from TSBN training
In this section we describe the results from training the TSBN using both ML and CML training. We first
discuss label-image coding results using the ML-trained tree, and then inspect the learned CPTs from the
ML-trained TSBN and the CML-trained TSBN.
5.1 Image coding results
In this section we present results comparing the ML-trained TSBN and lossless JPEG coding. The relevant
theory has been described in Section 3.3 and the details of the TSBN training are given in Section 4.5.
Figure 3: Bit rate (bits/pixel) as a function of the truncation level in the TSBN.
The average bit rate for the TSBN model was 0.2307 bits/pixel (bpp). For comparison purposes the
JPEG-LS codec gave an average bit rate of 0.2420 bpp. We also tried compressing the label images using
Lempel-Ziv coding using the Unix utility gzip; this gave 0.3779 bpp. The fact that a similar level of
compression performance is obtained by JPEG-LS and the TSBN suggests that the TSBN is a reasonably
good model of the label images.
Using the "truncated tree" scheme discussed in Section 3.3, we can analyse the TSBN results further.
Figure 3 shows the bit rate (in bits/pixel) evaluated as a function of truncating the tree at levels 0 to 7. By
the time level 4 has been reached (corresponding to an 8 x 8 block size), almost all of the benefit has been
attained.
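These bit rates follow from the ideal code-length relation: a model that assigns probability P(x_L) to a label image can, in principle via arithmetic coding, encode it in about -log2 P(x_L) bits. A minimal Python sketch of this bookkeeping, assuming a hypothetical model_logprob function that returns the natural-log probability of one label image (e.g. computed by the propagation scheme of Appendix A):

import numpy as np

def bits_per_pixel(label_images, model_logprob):
    # Ideal code length: -log2 P(x_L) bits for an image with model probability P(x_L).
    total_bits = 0.0
    total_pixels = 0
    for x in label_images:
        log_p = model_logprob(x)            # natural-log probability of the label image
        total_bits += -log_p / np.log(2.0)  # convert nats to bits
        total_pixels += x.size
    return total_bits / total_pixels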
5.2 The learned CPTs
The CPTs derived using ML training are shown in Figure 4. Note that six separate CPTs were used for the
transition from the root node to level 1, as explained in Section 4.5.
We can also calculate the prior marginals at each node in the tree, by simply taking the prior from the root
node and passing it through the relevant CPTs on the path from the root to the node under consideration 7 .
The fact that there are six CPTs for the root to level 1 transition means that there are, in eect, six dierent
prior marginals in levels 1 to 7, dened by the 2:3 aspect ratio of the image. These prior marginals are shown
in
Figure
5. It may not be easy to interpret the CPTs/marginals as a permutation of the state labels at
7 This can also be achieved using Pearl's propagation scheme outlined in Appendix A, with every leaf node uninstantiated.
a node and corresponding permutations of the incoming and outgoing CPTs would leave the overall model
unchanged; however, it appears that the downsampling initialisation means that this is not a large problem.
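As a concrete illustration of the computation described above, here is a minimal Python sketch (all names hypothetical) that chains CPTs to obtain the prior marginal at a node, given the root prior and the CPTs on the path from the root to that node:

import numpy as np

def prior_marginal(root_prior, cpts_on_path):
    # root_prior: length-C vector; cpts_on_path: list of C x C matrices, ordered root -> node,
    # with entry [k, l] = P(child = l | parent = k).
    m = np.asarray(root_prior, dtype=float)
    for cpt in cpts_on_path:
        m = m @ cpt          # marginalise over the parent at each step
    return m                 # prior marginal over the node's C classes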
Analysing Figures 4 and 5, we see that: (1) The prior marginals at level 7 reflect the overall statistics of
the images. The sky, vegetation and road surface classes are the most frequently occurring; the sky class is
more likely to be found in the top half of the images and road surface in the bottom half. Similar patterns
are detectable at level 1 of Figure 5, although the vegetation label is less prevalent in the upper half at this
level. (2) The trained CPTs in levels 1 to 7 exhibit a strong diagonal structure, implying that the children
are most likely to inherit their parent's class. (3) The level 0 to level 1 CPTs need to be read in conjunction
with the root's prior distribution to provide a good explanation of the level 1 prior marginals.
Although Laferte et al [20] have carried out EM training of a TSBN, we note that they only estimated
CPTs that were tied on a layer-by-layer basis. For our data Figures 4 and 5 show that relaxing this constraint
can be useful.
The CPTs and prior marginals obtained with CML training were similar to those shown in Figures 4 and
5 respectively. This is probably due to the fact that CML training was initialised at the ML-TSBN solution
for both GMM and MLP predictors.
6 Segmentation results and performance evaluation
We now turn to the classification of the testing images. Often classification performance is evaluated on
pixelwise accuracies. However, for a complex real-world classification task such as ours, this does not tell
the whole story. There are a number of other factors that concern us, most notably the fact that we are
predicting the labels of pixels in an image, and that spatial coherence is important. We also note that the
fractions of pixels from different classes are very different, and that the ground-truth labels used
in assessing performance are not 100 percent correct, both because of the downsampling process and
because of inaccuracies in the hand-labelling process. Therefore, it is a difficult task to assess the quality
of classification derived from the various methods, which may also depend on the uses to which the classification
will be put. An early reference to assessing the quality of segmentations is [22]. More recently, there has
Figure 4: Estimated prior for the root and CPTs (ML training) of an eight-level belief network after being trained on the
training images. The area of each black square is proportional to the value of the relevant probability. (a) The prior
probabilities at the root node. (b) The six independent CPTs on the links from the root node to its six children on the first
level. (c)-(h) CPTs for the links between adjacent levels from Level 1 to Level 7 respectively. The seven labels are 1-Sky,
2-Vegetation, 3-Road marking, 4-Road surface, 5-Building, 6-Street furniture, 7-Mobile object. The CPTs have entry (1,1)
in the top left-hand corner, and are read with "from level l" indexing the rows and "to level l+1" indexing the columns.
Figure 5: The prior marginals at Levels 0 to 7 after training with the ML algorithm. The area of each black square is
proportional to the value of the relevant probability. The class key is: Sky, Vegetation, Road Marking, Road Surface,
Building, Street Furniture, Mobile Object. See text for further details.
been some realisation that the aim of segmentation may not be to return just a single segmentation, but
multiple solutions [33], or a probability distribution over segmentations P(x_L | y). This posterior distribution
can be explored in many ways; below we describe two, namely (i) posterior marginal entropies and (ii) the
evaluation of the conditional probability P(x_L* | y), where x_L* is the "ground truth" labelling for given input
data y.
In this section we compare the performance of classification based on the smoothness of the segmented
image in Section 6.1, the pixelwise prediction accuracies in Section 6.2, the marginal entropies in Section 6.3
and the conditional probability in Section 6.4.
6.1 Smoothness
For the rural scene in Figure 2, Figure 6 shows classications using most of the combinations outlined in
section 4.6. The classications obtained from the single-pixel methods typically have a lot of high-frequency
noise, due to locally ambiguous regions. Both the ML- and CML-trained trees tend to smooth out this noise.
A similar smoothing can be obtained using a majority lter [23], where one simply chooses the most
common class within a window centered on the pixel of interest. However, one drawback with this is that
majority-lter smoothing with a reasonably-sized window tends to remove ne detail, such as road markings;
in contrast it seems that the TSBN methods yield something like an adaptive smoothing, depending on the
strength of the local evidence. Also, note that majority-ltering does not return a probability distribution
over segmentations.
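For reference, a minimal Python sketch of such a majority filter over a label image; this illustrates the baseline discussed above, not the method proposed in the paper:

import numpy as np

def majority_filter(labels, num_classes, radius=2):
    # labels: 2-D integer array of class indices; returns the majority-filtered image.
    H, W = labels.shape
    out = labels.copy()
    for i in range(H):
        for j in range(W):
            i0, i1 = max(0, i - radius), min(H, i + radius + 1)
            j0, j1 = max(0, j - radius), min(W, j + radius + 1)
            window = labels[i0:i1, j0:j1].ravel()
            counts = np.bincount(window, minlength=num_classes)
            out[i, j] = np.argmax(counts)   # most common class in the window
    return out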
6.2 Pixelwise classification accuracy
Table 1 shows the pixelwise classification accuracy for each class, and the overall accuracy, for each of the ten
methods listed in Section 4.6. For the TSBN methods the MAP segmentation result is reported; the MPM
results were similar, although they were generally worse by a few tenths of one percent. The most noticeable
feature is that the performance obtained with the MLP methods is superior to that from the GMM methods.
Looking at the results in detail, we notice that the raw results for both the GMM and MLP (columns 1
and 6) are improved by compensation (columns 2 and 7 respectively). The compensated methods simply give more
Figure 6: Classification of a rural scene. (i) raw GMM pixelwise predictions, (ii) raw MLP pixelwise predictions,
(iii) compensated GMM pixelwise predictions, (iv) compensated MLP pixelwise predictions, (v)-(viii) MAP
segmentations for the GMM and MLP predictors fused with the ML-TSBN and CML-TSBN respectively.
weight to the more frequently occurring classes, as can be seen by comparing columns 1 and 2, and 6 and
7. There are only small differences between the spatially uniform compensation of columns 2 and 7, and the
ML-TSBN marginal compensation scheme of columns 3 and 8.
Columns 4 and 9 combine pixelwise evidence with the ML-TSBN. For both GMM and MLP local models,
it is perhaps surprising that the performance decreases as compared to columns 3 and 8 respectively. (This
is a fair comparison as the methods of columns 3 and 8 use the marginals of the ML-TSBN, but not the
correlation structure.)
Columns 5 and 10 show the performance of the GMM and MLP local models combined with trees trained
using the CML method on the relevant data. In both cases the performance is better than fusion with the
ML-TSBN, and for the MLP this method obtains the best overall performance.
For comparison, we note that McCauley and Engel [28] compared the performance of Bouman and
Shapiro's SMAP algorithm against a pixelwise Gaussian classifier on a remote sensing task, and found that
the overall classification accuracy of SMAP was 3.6% higher (93.4% vs 89.8%).
The reasons for the superior performance of the MLP in our experiments are not entirely clear. However,
we note that the test images are a relatively diverse set (although drawn from the same distribution
as the training images). It may be that features that are important to the MLP classifier are similar in the
training and test images, while other features (whose distribution has to be modelled by the GMMs) do vary
between training and test sets. In contrast, some other evaluations in the literature (e.g. [28]) use only a
single test image with training data drawn from a subset of the pixels; in this case the issue of inter-image
variability does not arise. We also note that the comparison of the GMM and MLP classifiers was carried
out using a training set of a particular size and composition; different results might be obtained if these
factors were varied.
6.3 Pixel-wise entropy
We are interested in understanding the uncertainty described by P(x_L | y). The computation of the joint
entropy of this conditional distribution appears to be intractable; however, posterior marginal entropies are
readily computable from the posterior marginals P(x_L^k | y) at each pixel k 8 .
Table 1: Performance of the 10 methods, showing the percentage correct for each class and overall. The
second column gives the overall percentage of each class in the test images; columns 1-10 correspond to the
methods listed in Section 4.6.

Class              %       1      2      3      4      5      6      7      8      9     10
vegetation       40.12  61.04  78.92  77.46  67.92  75.72  79.32  92.41  90.40  81.67  90.45
road markings     0.17  55.14  42.26  44.33  43.09  27.36  78.61  68.97  67.91  70.04  67.91
road surface     39.50  62.14  78.18  80.45  70.10  73.45  94.52  97.16  96.26  94.99  96.85
building          6.11  44.19  46.98  49.67  63.70  74.92  67.69  44.43  52.73  79.40  64.60
street furniture  1.35  28.58  14.05  13.77  20.97   8.58  24.89   3.96   4.62  10.07   6.63
mobile object     0.57  58.85  43.05  43.10  72.76  76.23  49.13  28.83  32.15  78.89  44.74
overall                 63.71  77.45  77.94  71.00  75.85  85.72  90.16  89.38  87.38  90.68
Images displaying these posterior marginal entropies are shown in Figure 7, pertaining to the original
image shown on the right in Figure 2. As expected, the pixelwise entropy is reduced by the use of the TSBN;
this is particularly effective for the CML trees. Notice that the pixels which have significant posterior
marginal entropy are good indicators of the pixels that are misclassified; this is especially true of the
CML-TSBN combination in Figure 7. This property could well be useful information for a later stage of
processing.
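A minimal Python sketch of this entropy map, assuming a hypothetical array post of shape (H, W, C) holding the posterior marginals P(x_L^k = c | y) at each pixel:

import numpy as np

def entropy_map(post, eps=1e-12):
    # post: (H, W, C) posterior marginals per pixel; returns (H, W) entropies in bits.
    p = np.clip(post, eps, 1.0)
    return -np.sum(p * np.log2(p), axis=-1)

# The average pixelwise posterior entropy reported under each panel of Figure 7 is entropy_map(post).mean().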
6.4 Conditional probability
If the model we have developed is a good one, then P(x_L | y) should ascribe high probability to the "ground
truth" labelling x_L*. Different image models can be compared in terms of the relative values of P(x_L* | y).
In particular we compare the TSBN image models against independent-pixel models.
Ignoring spatial correlations we obtain P_MLP(x_L* | y) = prod_i P(x_L*,i | y_i) for the MLP local prediction, and
similarly for GMM local prediction. For a TSBN, P(x_L* | y) can be calculated as follows:
8 During the revision of this paper we became aware that the calculation of posterior marginal entropies had been proposed
independently by Perez et al [36] to determine "confidence maps".
Figure 7: Posterior marginal entropies for the MLP predictor. (a) compensated pixel-wise predictions, (b)
ML-TSBN, (c) CML-TSBN. The greyscale is such that black denotes zero entropy, white denotes 2.45 bits.
The number underneath each plot is the average pixelwise posterior entropy. (d) Binary image showing
misclassified labels (bright) and correctly-classified labels (dark).
P_TSBN(x_L* | y) is obtained from Bayes' rule, P_TSBN(x_L* | y) = P(y | x_L*) P(x_L*) / P(y) (Equations 15-17).
Equation 16 follows from Equation 15 because P(y_i | x_L) depends only on the label x_L,i of pixel i; the
numerator and denominator can be evaluated by the methods outlined in Appendix A.2.
A complexity arises in this calculation when there are pixels which do not have a label. For P_MLP(x_L* | y),
these pixels were simply ignored. For P_TSBN(x_L* | y), the unlabelled pixels were ignored in both the numerator
and denominator of Equation 17. This is achieved by setting the λ-vector at the appropriate nodes to
be the vector of ones (see Appendix A for details).
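For the independent-pixel baseline this per-pixel average log-probability is straightforward; a small Python sketch, assuming a hypothetical array post of pixelwise posteriors and an integer label image in which -1 marks unlabelled pixels:

import numpy as np

def avg_log_prob_independent(post, labels):
    # post: (H, W, C) pixelwise posteriors; labels: (H, W) ground truth, -1 = unlabelled.
    mask = labels >= 0
    idx = labels[mask]                        # ground-truth classes at labelled pixels
    p_sel = post[mask]                        # (M, C) posteriors at those pixels
    p = p_sel[np.arange(len(idx)), idx]       # P(x_i = ground-truth class | y_i)
    return np.mean(np.log(p))                 # (1/N) log P(x_L* | y) over labelled pixels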
In Figure 8 we plot (1/N) log P(x_L* | y) under various models. Panel (b) shows that in all 43 test images,
the posterior probability under the MLP + CML-TSBN method is larger than that under the compensated
MLP using an independent-pixel model. Panel (a) shows that for a similar comparison using the GMM
predictor, the CML-TSBN method is better in 41 out of 43 cases. Notice also the relative scales of the plots,
and especially that the GMM model makes confident mistakes on some pixels, thereby dragging down the
average posterior probability.
Figure 8: Comparison of (1/N) log P(x_L* | y) for (a) compensated GMM vs GMM + CML-TSBN, and
(b) compensated MLP vs MLP + CML-TSBN.
There are a large number of similar plots that can be made. A comparison of the posterior probabilities
under the ML-trained TSBN and independent models comes up with roughly equal numbers being better
coded under the two models, for both GMM and MLP predictors. The CML-trained TSBN with MLP
prediction is better than all other methods on 39 out of the 43 test images; on the remaining four the
ML-TSBN with MLP wins.
7 Discussion
This paper has made a number of contributions:
We have used the EM algorithm to train the ML-TSBN and have observed that the learned parameters
do reflect the underlying statistics of the training images. The quality of the probabilistic model
has been evaluated in coding terms and found to be comparable to state-of-the-art methods. The
"truncated tree" analysis shows over what scales correlations are important.
We have compared the performance of GMM and MLP pixelwise classifiers on a sizable real-world
image segmentation task. The performance of the discriminatively-trained MLP was found to be
superior to the class-conditional GMM model. We have also shown that the scaled-likelihood method
can be used to fuse the pixelwise MLP predictions with a TSBN prior.
We have compared conditional maximum likelihood (CML) training for the tree against maximum
likelihood (ML) training on a number of dimensions including classification accuracy, pixel-wise entropy
and the conditional probability measure P(x_L | y).
The problem of evaluating segmentations is an old one, and a full answer may well depend on a decision-theoretic
analysis which takes into account the end-use of the segmentation (e.g. for an automated driving
system). However, one attractive feature of the TSBN framework is that some aspects of the posterior
uncertainty can be computed efficiently, e.g. the posterior marginal entropies discussed in Section 6.3.
TSBN architectures are not the ultimate image model, as we know that, run generatively,
they give rise to "blocky" label images. There are a number of interesting research directions which try to
overcome this problem. Bouman and Shapiro [5] suggest making a more complex, cross-linked model. The
problem with this is that inference now becomes much more complex (one needs to use the junction tree
algorithm, see e.g. [21]). One interesting idea (suggested in [5]) is to retain Pearl-style message passing even
though this is now not exact; this idea is analysed in [12] and [46]. Another approach to inference is to use
alternative approximation schemes, such as the "recognition network" used in Helmholtz machines [8], or
"mean-field" theory [41].
An alternative to creating a cross-linked architecture is to retain a TSBN, but to move away from a rigid
quadtree architecture and allow the tree-structure to adapt to the presented image. This can be formulated
in a Bayesian fashion by setting up a prior probability distribution over tree-structures. Initial results of this
approach are reported in [49, 43]. We believe that the general area of creating generative models of (image)
data and finding effective inference schemes for them will be a fruitful area for research.
Appendix A: Pearl's probability propagation procedures
Below we describe Pearl's scheme for probability propagation in trees, the computation of the marginal
likelihood P(e | θ) and a scaling procedure for this algorithm to avoid underflow.
A.1 Pearl's scheme
We first consider the calculation of the probability distribution P(x | e) at node X in a TSBN, given some
instantiated nodes ("evidence") e.
Consider the tree fragment depicted in Figure 9 (based on Figure 4.14 in [35]). P(x | e) depends on two
distinct sets of evidence: evidence from the sub-tree rooted at X, denoted e_X^-, and evidence from the rest
of the tree, denoted e_X^+. We shall assume that each node has a finite number of states C (each node could
have a different number of states, but this adds extra notation and is not necessary in our application).
Bayes' rule, together with the independence properties of the TSBN, yields the product rule

    P(x | e) = α λ(x) π(x),

where λ(x) = P(e_X^- | x), π(x) = P(x | e_X^+) and α is a normalising constant.
Partitioning e_X^- as the union of the evidence below each child, λ(x) is defined recursively by

    λ(x) = prod_{i=1}^{n} λ_{Y_i}(x),                              (19)
    λ_{Y_i}(x) = sum_k P(y_ik | x) λ(y_ik),                        (20)

where we have assumed that node X has n children and y_ik is the kth value of node Y_i. λ_{Y_i}(x) is known as
the λ-message sent to node X from its child node Y_i.
π(x) is given recursively by

    π(x) = sum_k P(x | z_k) π_X(z_k),                              (21)
    π_X(z) = β π(z) prod_{Z' in s(X)} λ_{Z'}(z),

where z_k is the kth state of node Z, s(X) denotes the siblings of X (i.e. the children of Z excluding
X itself) and β is the normalising factor so that the values of π_X(z) sum to 1. In fact
π_X(z) = P(z | e_Z^+, e_s(X)^-), where e_s(X)^- denotes the evidence below the siblings of X. π_X(z) is known as
the π-message sent to node X from its parent Z.
Figure 9: Fragment of a causal network showing the incoming and outgoing messages at node X (λ-messages
shown as solid arrows, π-messages as broken arrows).
The propagation procedure is completed by defining the boundary conditions at the root and leaves of
the tree. The π-vector at the root of the tree is equal to the prior probabilities for each of the classes. At
the leaves of the tree, the λ-vector is the vector of ones if the node is uninstantiated, and equal to the vector
with a single entry of 1 (and all other entries 0) corresponding to the instantiated state. The computation
of P(x | e) at each node in the tree can now be performed using an upward phase of λ-message passing, and
a downward phase of π-message passing.
To find the maximum a posteriori configuration of the hidden variables X given evidence e, we can use
a similar message passing scheme, as described in Section 5.3 of [35].
The posterior marginal required for the EM updates and CML derivatives is given by

    P(x_il, pa_ik | e) ∝ λ(x_il) P(x_il | pa_ik) π(pa_ik) prod_{Y in s(X_i)} λ_Y(pa_ik),

where s(X_i) is the set of nodes that are siblings of node X_i and λ_Y(·) denotes the λ-message sent to node
Pa_i by node Y. To show this, we partition the evidence into the parts lying below X_i, below the siblings of
X_i, and above Pa_i, and first calculate P(x_il, pa_ik, e); using the conditional independences described by the
tree, this factorises into the terms above. P(x_il, pa_ik | e) is then computed by dividing through by P(e),
which can be calculated as described in Section A.2.
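A compact Python sketch of the upward (λ) and downward (π) passes, assuming each node has C states, children[n] lists the children of node n, parent[n] gives its parent (None at the root), cpt[n] is the C x C matrix with entry [k, l] = P(n = l | parent = k), prior is the root prior (a length-C array), and evidence maps instantiated leaves to their observed state; all names are hypothetical:

import numpy as np

def propagate(root, children, parent, cpt, prior, evidence, C):
    lam, lam_msg, pi, pi_msg, posterior = {}, {}, {}, {}, {}

    def up(n):                                   # upward pass: λ-vectors and λ-messages
        if evidence.get(n) is None:
            lam[n] = np.ones(C)
        else:
            lam[n] = np.zeros(C)
            lam[n][evidence[n]] = 1.0
        for ch in children[n]:
            up(ch)
            lam_msg[ch] = cpt[ch] @ lam[ch]      # λ-message from child ch to n, Eq. (20)
            lam[n] = lam[n] * lam_msg[ch]        # Eq. (19)

    def down(n):                                 # downward pass: π-vectors and π-messages
        if parent[n] is None:
            pi[n] = prior.copy()
        else:
            pi[n] = cpt[n].T @ pi_msg[n]         # Eq. (21)
        p = lam[n] * pi[n]
        posterior[n] = p / p.sum()               # P(x_n | e)
        for ch in children[n]:
            m = pi[n].copy()
            for sib in children[n]:              # λ-messages from the siblings of ch
                if sib != ch:
                    m = m * lam_msg[sib]
            pi_msg[ch] = m / m.sum()
            down(ch)

    up(root)
    down(root)
    return posterior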
A.2 Marginal likelihood
Now we consider the procedure for computing P(e | θ). Assuming X_0 is the root node, we have

    P(e | θ) = sum_k λ(x_0k) π(x_0k) = sum_k λ(x_0k) P(x_0k),      (26)

where we have used π(x_0k) = P(x_0k), since X_0 is the root node and e_{X_0}^+ is empty.
A.3 Scaling Pearl's probability propagation
In order to understand where and why scaling is required when implementing the message propagation, we
consider the two distinct message passing schemes separately.
Firstly, consider the definition of π(x) in Equation 21. π(x) is the probability of x given the evidence
e_X^+, so its values sum to 1 as long as the normalising factor is applied each time a π-message is
calculated. In this case, no scaling is needed for π(x).
We now consider the definition of λ(x) in Equation 19 and λ_{Y_i}(x) in Equation 20. The λ-values of node
X are the product of all λ-messages sent to it by its children. Each child node forms a weighted sum of
its λ-values to form the λ-message sent to its parent. The element-wise multiplication of the λ-messages in
Equation 19 and the weighted sum calculation in Equation 20 cause the numerical values of the λ-vectors to
decrease exponentially with distance from the leaves of the tree.
We scale λ(x) with three goals: (1) keeping the scaled λ(x) within the dynamic range of the computer for
all nodes in the tree, (2) maintaining the local propagation mechanisms of Pearl's probability propagation,
and (3) recovering the true values at the end of the computation.
This is achieved by the recursive formulae

    λ̂(x) = (1/d_X) prod_{i=1}^{n} λ̂_{Y_i}(x),    λ̂_{Y_i}(x) = sum_k P(y_ik | x) λ̂(y_ik),

where Y_1, ..., Y_n are the children of X. These equations are initialised with λ̂(x) = λ(x) at the leaves. d_X
can be any value that gives a reasonable scaling. The unscaled value
λ(x) is computed using λ(x) = D_X λ̂(x), where D_X is the product of the scaling coefficients in the subtree
rooted at X (and including d_X).
In fact, if we are only interested in the calculation of P(x | e), then it is not necessary to worry unduly
about the scaling factors; the λ-vector can simply be rescaled at each node as required, and P(x | e) can be
calculated from the scaled λ-vector and the π-vector by requiring that P(x | e) sums to 1. However, scaling is
important if we wish to calculate the marginal likelihood P(e | θ) 9 . Referring back to Equation 26, we find
that

    P(e | θ) = D_{X_0} sum_k λ̂(x_0k) P(x_0k),

where D_{X_0} is the product of all of the scaling factors used in the propagation procedure. Since D_{X_0} could
be outside the machine's dynamic range, we compute log P(e | θ) using log D_{X_0} = sum_X log d_X.
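A minimal Python sketch of this bookkeeping, modifying the upward pass of the earlier sketch so that it also returns log P(e | θ) (same hypothetical data structures as before; the choice d_X = sum of the unscaled vector is just one convenient option):

import numpy as np

def log_marginal_likelihood(root, children, cpt, prior, evidence, C):
    log_D = 0.0                          # accumulates sum_X log d_X
    lam_hat = {}

    def up(n):
        nonlocal log_D
        if evidence.get(n) is None:
            v = np.ones(C)
        else:
            v = np.zeros(C)
            v[evidence[n]] = 1.0
        for ch in children[n]:
            up(ch)
            v = v * (cpt[ch] @ lam_hat[ch])
        d = v.sum()                      # one convenient choice of scaling factor d_X
        lam_hat[n] = v / d
        log_D += np.log(d)

    up(root)
    # log P(e | theta) = log D_{X0} + log sum_k lam_hat(x_0k) P(x_0k)
    return log_D + np.log(np.dot(lam_hat[root], prior))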
Appendix B: Calculation of derivatives for CML optimisation
In this appendix we calculate the gradient of the conditional log likelihood L(θ) = L_c(θ) - L_f(θ) w.r.t. the
CPT parameters θ. As in Section 3.2 we suppress the dependence of P(y | x) on its own parameters for
notational convenience.
First note that P(y^m | θ) can be written as P(y^m | θ) = sum_x P(y^m | x) P(x | θ), where the sum is over all
possible values of x in the TSBN. Using the conditional independence relations, P(x | θ) is easily decomposed
into a product of the transition probabilities on all links.
Following the ideas in Krogh [18] for HMMs, the derivative of L_f(θ) w.r.t. θ_ikl is

    dL_f(θ)/dθ_ikl = sum_m (1/P(y^m | θ)) dP(y^m | θ)/dθ_ikl               (30)
                   = sum_m P(X_i^m = l, Pa_i^m = k | y^m, θ) / θ_ikl.      (31)

The step from Equation 30 to Equation 31 follows from the fact that θ_ikl only appears in the product
decomposition of P(x | θ) when X_i^m is in state l and Pa_i^m is in state k.
9 Scaling issues are discussed in Perez et al [36]. It appears that they addressed the issue of scaling for the computation of
posterior marginals in their paper but not explicitly scaling for the computation of P(e | θ).
The derivative of the other term, L_c(θ), can be calculated in a similar manner, except that the summation
over variables in the tree is now taken only over the hidden variables, the leaf nodes being clamped to the
training labels.
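A small Python sketch of the resulting gradient computation, assuming hypothetical routines pair_posterior_clamped and pair_posterior_free that return, for image m and node i, the pair posteriors P(X_i = l, Pa_i = k | x_L^m, y^m, θ) and P(X_i = l, Pa_i = k | y^m, θ) (e.g. via the propagation scheme of Appendix A):

import numpy as np

def cml_gradient(theta, images, pair_posterior_clamped, pair_posterior_free):
    # theta[i]: C x C CPT on the link into node i (entry [k, l] = parent k, child l).
    # Returns dL/dtheta for L = L_c - L_f.
    grad = {i: np.zeros_like(t) for i, t in theta.items()}
    for m in images:
        for i in theta:
            post_c = pair_posterior_clamped(m, i)     # clamped pair posterior, shape (C, C)
            post_f = pair_posterior_free(m, i)        # free-running pair posterior, shape (C, C)
            grad[i] += (post_c - post_f) / theta[i]
    return grad

In practice the CPT entries would be reparameterised (e.g. via a softmax) so that an unconstrained optimiser such as scaled conjugate gradients keeps them normalised.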
Acknowledgements
This work is funded by EPSRC grant GR/L03088, Combining Spatially Distributed Predictions From Neural
Networks and EPSRC grant GR/L78181 Probabilistic Models for Sequences. The authors gratefully acknowledge
the assistance of British Aerospace in the project and in making the Sowerby Image Database available
to us. We also thank Dr. Andy Wright of BAe for helpful discussions, Dr. Ian Nabney for help with NETLAB
routines for neural networks, Dr. Gareth Rees (BAe) for discussions on segmentation metrics, Dr. John Elgy
for introducing us to the work of [5] and Prof. Kevin Bowyer for pointing out the work of [33]. We also
thank the three anonymous referees and the Associate Editor Prof. Charles Bouman for helpful comments
and advice which have considerably improved the manuscript.
--R
Computer Vision.
Modelling and estimation of multiresolution stochastic processes.
On the statistical analysis of dirty pictures.
Neural Networks for Pattern Recognition.
A Multiscale Random Field Model for Bayesian Image Segmentation.
Trainable Context Model for Multiscale Segmentation.
The Helmholtz Machine.
Decision Theory and Arti
A Revolution: Belief Propagation in Graphs with Cycles.
Stochastic Relaxation
An inequality for rational functions with applications to some statistical estimation problems.
An Introduction to Bayesian Networks.
Statistical pattern recognition in image analysis.
Multiresolution Gauss-Markov random field models for texture segmentation.
Hidden Markov models for labeled sequences.
Graphical Models.
Dynamic measurement of computer generated image segmentations.
Remote Sensing and Image Interpretation.
Bayesian Belief Networks as a tool for stochastic parsing.
Likelihood Calculation for a Class of Multiscale Stochastic Models
Statistical methods for automatic interpretation of digitally scanned
Comparison of Scene Segmentations: SMAP
Neural Networks for Statistical Recognition of Continuous Speech.
Bayesian Learning for Neural Networks.
Textured image segmentation: returning multiple solutions.
Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference.
A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition.
Neural network classi
Hidden neural networks: a framework for HMM/NN hybrids.
Parameter Estimation of Dependence Tree Models Using the EM ALgorithm.
Hidden Markov models for fault detection in dynamic systems.
Dynamic positional trees for structural image analysis.
The LOCO-I Lossless Image Compression Algorithm: Principles and Standardization into JPEG-LS
Correctness of belief propagation in Gaussian graphical models of arbitrary topology.
A comparative study of texture measures for terrain classification.
Dynamic Trees.
Image labelling with a neural network.
The use of neural networks for region labelling and scene un derstanding.
--TR
--CTR
Neil D. Lawrence , Andrew J. Moore, Hierarchical Gaussian process latent variable models, Proceedings of the 24th international conference on Machine learning, p.481-488, June 20-24, 2007, Corvalis, Oregon
Todorovic , Michael C. Nechyba, Dynamic Trees for Unsupervised Segmentation and Matching of Image Regions, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.11, p.1762-1777, November 2005
Amos J. Storkey , Christopher K. I. Williams, Image Modeling with Position-Encoding Dynamic Trees, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.25 n.7, p.859-871, July
Sanjiv Kumar , Martial Hebert, Discriminative Random Fields, International Journal of Computer Vision, v.68 n.2, p.179-201, June 2006
Todorovic , Michael C. Nechyba, Interpretation of complex scenes using dynamic tree-structure Bayesian networks, Computer Vision and Image Understanding, v.106 n.1, p.71-84, April, 2007
Richard J. Howarth, Spatial Models for Wide-Area Visual Surveillance: Computational Approaches and Spatial Building-Blocks, Artificial Intelligence Review, v.23 n.2, p.97-155, April 2005
Simone Marinai , Marco Gori , Giovanni Soda, Artificial Neural Networks for Document Analysis and Recognition, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.1, p.23-35, January 2005 | neural network;scaled-likelihood method;conditional maximum-likelihood training;gaussian mixture model;expectation-maximization EM;Markov random field MRF;tree-structured belief network TSBN;hierarchical modeling |
628814 | Information-Theoretic Bounds on Target Recognition Performance Based on Degraded Image Data. | AbstractThis paper derives bounds on the performance of statistical object recognition systems, wherein an image of a target is observed by a remote sensor. Detection and recognition problems are modeled as composite hypothesis testing problems involving nuisance parameters. We develop information-theoretic performance bounds on target recognition based on statistical models for sensors and data, and examine conditions under which these bounds are tight. In particular, we examine the validity of asymptotic approximations to probability of error in such imaging problems. Problems involving Gaussian, Poisson, and multiplicative noise, and random pixel deletions are considered, as well as least-favorable Gaussian clutter. A sixth application involving compressed sensor image data is considered in some detail. This study provides a systematic and computationally attractive framework for analytically characterizing target recognition performance under complicated, non-Gaussian models and optimizing system parameters. | Introduction
Typical target recognition problems involve detection and classification of targets as well as
estimation of target parameters such as location and orientation. A set of possible targets
comprises objects such as tanks, trucks, and planes. The scene in which targets are present is
acquired by sensors such as coherent laser radar imagers, Synthetic Aperture Radar systems,
forward-looking infrared radar (FLIR) systems, and hyperspectral sensors [1, 2]. These imaging
sensors are used for object recognition in numerous military and civilian applications. To
exploit the capabilities of these sensors in a target-recognition context, image-understanding
algorithms are required to interpret remote observations of complex scenes. In this context,
a pattern-theoretic framework [3, 4] using a deformable template representation of targets
is considered. The targets are modeled as deformations of rigid-body templates, and variability
of pose is introduced via rigid-body motions. The various other unknown parameters
such as target class and thermodynamic profile that characterize within-class variability are
modeled statistically. Statistical models for scene clutter and sensors are also determined.
A statistical approach provides a systematic framework for integrating prior knowledge
about the scene and targets with observational models and for fusing information from
multiple sensors. Once accurate statistical models have been identified, it is in principle
possible to compute optimal solutions to problems of detection, classification and parameter
estimation by application of basic principles of statistical inference [5, 6]. Even when optimal
algorithms are computationally intractable, statistical theory provides fundamental bounds
on the performance of any algorithm. The practical benefits of this approach have been
documented in prior work on target recognition [3, 4] and in related problems [7].
Target classification can often be described as a composite hypothesis-testing problem.
The various hypotheses are the different target types of interest and the null hypothesis
(no target present in the scene). Probabilistic models are formulated under each hypothesis.
These models may be complicated due to dependencies on various unknown parameters, such
as target orientation, motion, and reflectance properties. Such parameters may be viewed
as nuisance parameters, leading to the composite hypothesis-testing formulation.
Additional difficulties are introduced when the sensors are located at a remote location,
e.g., on a mobile platform. In this case, the sensor data are transmitted over a bandwidth-limited
communication channel prior to processing by the computer that hosts the target
recognition algorithm. Lossy compression algorithms are used to efficiently transmit sensor
data to the host computer. This operation degrades recognition performance, so it is important
to select the compression algorithm carefully. While heuristic evaluation methods are
used in current practice [8], it would be preferable to select the parameters of the compression
algorithm (as well as other system parameters) so as to optimize fundamental measures
of recognition performance. Such measures include Bayesian cost or Bayesian probability of
error, but they are typically intractable.
For such complex problems, an attractive alternative is to work with criteria that are
both tractable and meaningful approximations to the "ideal" performance measures above.
Natural candidates include the Chernoff and Kullback-Leibler distances [6], [9]. Chernoff
distances provide upper bounds and asymptotic expressions for the probability of error (P e ) in
detection problems, and Kullback-Leibler distances provide upper bounds on the probability
of miss (P_miss) for a fixed probability of false alarm (P_f). Both Chernoff and Kullback-Leibler
distances belong to the broad category of measures studied by Ali and Silvey, which
quantify the distance or dissimilarity between two distributions [10]. These distances satisfy
certain axioms of statistical inference which makes them particularly attractive in problems
involving quantization and multisensor data fusion.
Our driving application, used throughout to illustrate the theory and its numerous possible
applications, is the detection of a known target with unknown orientation. The sensor
data are corrupted by additive white Gaussian noise and are subjected to lossy compression
using a transform image coder. We show that despite the nonlinearity introduced by
the quantizers, tractable information-theoretic bounds on detection performance can still be
derived.
For composite hypothesis testing involving unknown nuisance parameters, even information-theoretic
distances are difficult to evaluate. In this case, we develop tractable upper
and lower bounds on these distances, as well as precise expressions for asymptotic probabilities
of error. The tightness of these bounds is evaluated through theoretical analysis as well
as Monte-Carlo simulations. Finally, we extend the theory developed for binary hypothesis
testing to M-ary case, which covers recognition problems involving multiple target classes.
The paper is organized as follows. Sec. 2 describes the system model and the likelihood
ratio test (LRT) used by an optimal detector. Sec. 3 introduces Ali-Silvey distances and
the properties that make them attractive in target recognition problems. Sec. 4 briefly
reviews basic asymptotic properties of Cherno# bounds. In Sec. 5, performance bounds
for a simple detection problem involving compressed, noisy data are derived and compared
with actual probability of error using theoretical analysis and numerical simulations. The
analysis is extended to more complex detection problems involving nuisance parameters in
Sec. 6. Bounds on classification of multiple targets are derived in Sec. 7, and conclusions are
presented in Sec. 8.
System Model
To demonstrate the key ideas and concepts behind our methodology, we first consider a
composite binary detection problem. The task of the detection algorithm is to determine
whether a known target is present or not. There may be unknown nuisance parameters
associated with the target. The methods developed in the binary case are extended to the
classification of multiple targets in Sec. 7.
2.1 Image Model
Consider a target depending on an unknown parameter θ taking its values in some set Θ. For
instance, the target may be a known template with unknown orientation. For ground-based
targets, Θ would then be the special orthogonal group SO(2) of rotation matrices [3]. In
the general pattern-theoretic framework of Miller et al. [3], templates are CAD (computer-aided
design) representations of two-dimensional surface manifolds of rigid objects, and θ is
an element of a Lie group Θ whose elements characterize deformations of the template. In
infrared imaging problems, θ may be a vector or a smoothly varying function describing the
unknown thermodynamic state of the target. In each case, the target is denoted I(θ) and is
completely known if θ is known. Target parameterizations reduce the variability of targets
to a low-dimensional, unknown parameter θ.
2.2 Sensor Model
The target I(θ) is seen through a projection map T, and the projected image T I(θ) is
captured by a noisy sensor. Let I_D be the sensor data, an N-pixel image. These data are
related to the true image T I(θ) through some probabilistic map p(I_D | θ). For instance, we
shall apply our analysis to an additive white Gaussian noise model, in which case
I_D = T I(θ) + w, where w is sensor noise with mean zero and variance σ². Here p(I_D | θ) is an
N-variate Gaussian distribution with mean T I(θ) and covariance matrix equal to σ² times
the N x N identity matrix. Figs. 1 and 2 show two examples that will be used to illustrate
the theory: 64 x 64-pixel images of a ground-based T62 tank and of a truck, each at a fixed
orientation, along with noisy sensor data I_D. The signal-to-noise ratio (SNR) is defined in
terms of the energy ||T I(θ)||² of the clean image relative to the noise variance σ², where
||x|| denotes the Euclidean norm of x. The SNR in the example of Figs. 1 and 2 is 15 dB.
2.3 Data Model
In order to account for the need to transmit remote sensor data I D to a central computer
[8], we include a model for lossy compression in our formulation. A special case of this setup
is the conventional detection problem in which the sensor data are directly available to the
target detection algorithm. We restrict our attention to the class of transform-based coders
which are ubiquitous in practice. Transform coding is attractive due to its good compression
performance and low computational complexity [11], and this model simplifies the theoretical
analysis as well.
We use the following simplified mathematical model. Let U be the unitary (orthonormal)
transform used by the coder, and c_D = U I_D be the transformed image data. In practice,
U could be a wavelet transform, or a discrete cosine transform, as in the JPEG image
compression standard. We denote the transform of the projected image T I(θ) by
c(θ) = U T I(θ). Signal energy is preserved under orthonormal transforms, so ||c(θ)|| = ||T I(θ)||
and ||c_D|| = ||I_D||.
The transform coefficients c_D are quantized. The quantized coefficients are denoted by
c̃_D = Q(c_D). We assume that scalar quantizers Q_n are
applied to each individual coefficient: c̃_D,n = Q_n(c_D,n). The standard choice
used in our experiments is uniform scalar quantizers with a dead zone near zero, see Fig. 3.
In other words, the nonlinear operation Q is separable. The resulting decompressed image
Ĩ_D = U^{-1} c̃_D differs from I_D due to quantization errors. Throughout this paper, we will use
the tilde symbol to denote quantities pertaining to quantized data. Fig. 4 summarizes
our model for clean images I(θ), noisy sensor data I_D, and observed (compressed) data Ĩ_D.
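A minimal Python sketch of such a separable dead-zone uniform quantizer, using one common convention in which the zero bin has twice the step size (as in the experiments of Sec. 5.1); index encoding and midpoint reconstruction are simplified here:

import numpy as np

def deadzone_quantize(c, delta):
    # Map each coefficient to an integer bin index; the zero bin ("dead zone") is (-delta, delta).
    return (np.sign(c) * np.floor(np.abs(c) / delta)).astype(int)

def deadzone_dequantize(q, delta):
    # Reconstruct at the midpoint of each non-zero bin; the dead zone decodes to zero.
    return np.where(q == 0, 0.0, np.sign(q) * (np.abs(q) + 0.5) * delta)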
2.4 Conditionally Independent Data Sets
In a variety of target recognition problems, the data -
I D may be partitioned into components
that are statistically independent, conditioned on the target I(#). This property simplifies
our analysis as it suggests the use of asymptotic techniques, and is encountered in applications
such as the following.
1. Sensor noise in various imaging modalities can be modeled as independent and identically
distributed (i.i.d.) Gaussian noise (see Sec. 2.2), multiplicative noise (in coherent
imaging systems), or Poisson noise (in noncoherent optical imaging systems). In each
case, individual pixels of the image data -
I D are independent, conditioned on the illuminating
image field T I(#).
2. When -
I D are multisensor data, the degradations introduced by individual sensors are
often independent, even though individual sensors may involve complicated statistical
models.
3. In our main application of interest, sensor noise is i.i.d. Gaussian, and transform
coefficients are individually quantized. The sensor noise w viewed in the orthonormal
transform domain is still i.i.d. Gaussian noise, so the degradations introduced by the
cascade of sensor noise and coefficient quantization are independent: the observed
transform coefficients c̃_D,n are independent, given θ. (The discrete
distribution of these coefficients is obtained by integration of the distribution of
c_n(θ) + w_n over the quantization cells.)
2.5 Detection Problem
Under the image, sensor and data models above, target detection can be formulated as a
binary statistical hypothesis test. If H 0 and H 1 refer to the hypotheses that the target is
absent or present, respectively, we have

    H_0 : Ĩ_D ~ p̃_0(Ĩ_D),
    H_1 : Ĩ_D ~ p̃_1(Ĩ_D | θ), θ in Θ.                                (2)

In Bayesian detection, the uncertainty about θ is modeled using a prior distribution π(θ). Under
H_1, the data Ĩ_D are distributed according to the mixture distribution

    p̃_1(Ĩ_D) = ∫_Θ π(θ) p̃_1(Ĩ_D | θ) dθ.                             (3)
2.6 Optimal Likelihood Ratio Detector
Under hypotheses H_1 and H_0, Ĩ_D is distributed according to the pdf's p̃_1 and p̃_0. The likelihood
ratio L̃(Ĩ_D) = p̃_1(Ĩ_D)/p̃_0(Ĩ_D) is a sufficient statistic for detection, i.e., all we need to know is
the likelihood ratio for deciding between the hypotheses H_1 and H_0 [5]. The likelihood ratio
is invariant to invertible operations such as the transform U in a transform coder. Under a
variety of optimality criteria, the detection algorithm takes the form of a LRT 1

    L̃(Ĩ_D) = p̃_1(Ĩ_D)/p̃_0(Ĩ_D)  >< (H_1/H_0)  τ,                      (4)

where τ is an appropriate threshold. The value of τ depends on the optimality criterion [5].
In a Neyman-Pearson test, the threshold τ is chosen such that for a given probability of false
alarm P_f, the probability of miss (P_miss) is minimized. Under the minimum-probability-of-error
rule, the optimal decision is the hypothesis H_i that maximizes π_i p̃_i(Ĩ_D), where π_i is the
prior probability of hypothesis H_i. The LRT in (4) is then optimal when τ is equal to the
ratio π_0/π_1 of the prior probabilities. The probability of error, in this case, is 2

    P_e = E_0[ min( π_0, π_1 L̃(Ĩ_D) ) ] = ∫ min( π_0 p̃_0(Ĩ_D), π_1 p̃_1(Ĩ_D) ) dĨ_D,      (5)

where E_0 denotes expectation under hypothesis H_0. Of interest is also the conditional
probability of error P_e|θ, which characterizes detection performance under a specific target
configuration θ.
We consider two detection problems. The first is a simple hypothesis-testing problem:
the set Θ reduces to a singleton. The second problem is a composite hypothesis-testing problem.
1 In (4), we implicitly assume that a randomized decision is made in the case L̃(Ĩ_D) = τ.
2 We use the same notation for integrals such as (5) whether the integration variable Ĩ_D is continuous- or
discrete-valued. The measure dĨ_D used is either a Lebesgue or a counting measure.
Detection problem 1: No nuisance parameters. In this problem, θ is known, so the
target T I(θ) is deterministic, and the detection problem becomes a simple binary hypothesis
testing problem. We assume, as in Sec. 2.4, that the data can be partitioned into independent
components. To make the discussion concrete, we focus on the third case in Sec. 2.4:
i.i.d. Gaussian sensor noise and transform coding. The distributions of the transform coefficients
under H_0 and H_1 are given by p̃_0,n and p̃_1,n respectively. The likelihood ratio L̃(c̃_D)
is the product of the likelihood ratios L̃_n(c̃_D,n) of the individual coefficients:

    L̃(c̃_D) = prod_{n=1}^{N} L̃_n(c̃_D,n) = prod_{n=1}^{N} p̃_1,n(c̃_D,n) / p̃_0,n(c̃_D,n).      (6)

Hence the log likelihood ratio is the sum of the log likelihood ratios of each coefficient.
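A minimal Python/NumPy sketch of this detector for the Gaussian-plus-dead-zone-quantizer model; the per-coefficient discrete distributions are obtained by integrating the Gaussian density over the quantization bins, and the bin edges, bin indexing and clean coefficients c(θ) are assumed given:

import numpy as np
from scipy.stats import norm

def bin_probs(mean, sigma, edges):
    # Probability of each quantization bin for a Gaussian N(mean, sigma^2);
    # edges is the increasing array of bin boundaries (with -inf/+inf at the ends).
    return np.diff(norm.cdf(edges, loc=mean, scale=sigma))

def log_likelihood_ratio(q_indices, clean_coeffs, sigma, edges, eps=1e-300):
    # Sum of per-coefficient log likelihood ratios, Eq. (6):
    # target present (mean = c_n(theta)) vs target absent (mean = 0).
    llr = 0.0
    for q, c in zip(q_indices, clean_coeffs):
        p1 = bin_probs(c, sigma, edges)[q]
        p0 = bin_probs(0.0, sigma, edges)[q]
        llr += np.log(max(p1, eps)) - np.log(max(p0, eps))
    return llr

# Decide H1 if the returned value exceeds log(tau), e.g. tau = pi0/pi1 for minimum probability of error.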
Detection problem 2: Presence of nuisance parameters. In this case, the nuisance
parameter θ is modeled as random with prior π(θ). Under H_1, the distribution of the
compressed data c̃_D is the mixture

    p̃_1(c̃_D) = ∫_Θ π(θ) p̃_1(c̃_D | θ) dθ,                              (7)

where p̃_1(c̃_D | θ) is the product of the marginals p̃_1,n(c̃_D,n | θ). The LRT in (4) can be computed
using the mixture in (7). As we shall see in Sec. 6, the presence of a mixture in the model
introduces significant computational and analytical complications.
3 Information-Theoretic Bounds on Detection
The previous section formulated target detection based on compressed data as a statistical
hypothesis testing problem. The threshold # in the LRT (4) can be chosen to minimize the
probability of error P e or the probability of miss (P miss ), for a given value of P f . Unfor-
tunately, both P e and P miss are intractable functions of the N-variate distributions -
and, in general, can only be evaluated experimentally. Hence, it is not feasible
to optimize the parameters of high-dimensional nonlinear systems such as lossy image
coders with respect to P e or P miss . This motivated us to investigate a general category of
performance measures that provide tractable bounds on P e and P miss . Since the ability to
distinguish between two statistical hypotheses depends on the respective conditional distributions
of the data, measures of distance or dissimilarity between two distributions are
natural performance metrics.
Ali and Silvey studied a generic category of distances that measure the dissimilarity
between two distributions [10]. The Ali-Silvey class of distances is based on an axiomatic
definition and takes the general form

    d(p̃_0, p̃_1) = f( E_0[ C( L̃(Ĩ_D) ) ] ),                             (8)

where f is any increasing function, C is any convex function on [0, ∞), L̃(Ĩ_D) = p̃_1(Ĩ_D)/p̃_0(Ĩ_D)
is the likelihood ratio for the data, and E_0 is the expectation under hypothesis H_0. It
is convenient to also allow pairs (f, C) where f is decreasing and C is concave. Kassam
[12] and Poor and Thomas [13] have shown how these performance metrics can be used for
optimal quantizer design in detection problems. In addition to convexity properties, Ali-Silvey
distances possess two attractive properties: they are invariant under application of
invertible maps to the data, and they are decreased under application of many-to-one maps
such as quantization [12], [13].
Observe that -ln P_e is itself in the Ali-Silvey class (8): from (5), P_e = E_0[ min(π_0, π_1 L̃(Ĩ_D)) ],
so taking f(·) = -ln(·), which is convex decreasing, and C(u) = min(π_0, π_1 u), which is concave,
yields -ln P_e. Even though P_e is not a practical choice for design, there exist two
distances in the Ali-Silvey class that are closely related to P_f and P_miss.
The first are Chernoff distances [5], [6], [9]:

    D_s(p̃_0, p̃_1) = -ln E_0[ L̃^s(Ĩ_D) ] = -ln ∫ p̃_1^s(Ĩ_D) p̃_0^{1-s}(Ĩ_D) dĨ_D,   0 < s < 1,   (11)

where again f is convex decreasing and C is concave. For s = 1/2 this is the same as the
Bhattacharyya distance [6], [9]. Chernoff distances give an upper bound on both P_f and P_miss:

    P_f = P_0[ L̃(Ĩ_D) > τ ] <= τ^{-s} e^{-D_s(p̃_0, p̃_1)},               (12)
    P_miss = P_1[ L̃(Ĩ_D) <= τ ] <= τ^{1-s} e^{-D_s(p̃_0, p̃_1)},           (13)

where τ is the threshold in the LRT of (4), and P_i[·] is the probability under hypothesis H_i.
For the minimum-probability-of-error rule, τ = π_0/π_1, and (12) together with (13) give an
upper bound on P_e. This bound can be tightened by a scale factor [5] to give

    P_e <= π_0^{1-s} π_1^s e^{-D_s(p̃_0, p̃_1)},   0 < s < 1.              (14)
We also use Kullback-Leibler distances [9]:
    D(p̃_0, p̃_1) = E_0[ -ln L̃(Ĩ_D) ] = ∫ p̃_0(Ĩ_D) ln [ p̃_0(Ĩ_D) / p̃_1(Ĩ_D) ] dĨ_D,   (15)

where f is linear increasing and C is convex. The motivation for considering (15) is Stein's
lemma [9]. Under some conditions, this lemma relates the asymptotic probability of a miss
to the Kullback-Leibler distance D(·,·) between the probability distributions without and with
the target, for a fixed small probability of false alarm:

    P_miss ≐ e^{-D(p̃_0, p̃_1)},                                           (16)

where the asymptotic equality symbol f(N) ≐ g(N) means that lim_{N→∞} (1/N) ln [f(N)/g(N)] = 0.
The Kullback-Leibler and Chernoff distances are related by the formulas

    (d/ds) D_s(p̃_0, p̃_1) |_{s=0} = D(p̃_0, p̃_1),    -(d/ds) D_s(p̃_0, p̃_1) |_{s=1} = D(p̃_1, p̃_0).   (17)

The direct relationship to P_f, P_miss and P_e makes the Kullback-Leibler and Chernoff distances
an appropriate choice for obtaining performance bounds. To illustrate these concepts,
consider our simple hypothesis-testing problem based on the Gaussian model in Sec. 2.2, in
the absence of compression. In this case, the distances (11) and (15) take the simple form

    D_s(p̃_0, p̃_1) = s(1-s) SNR / 2,    D(p̃_0, p̃_1) = SNR / 2.           (18)

Hence for Gaussian data, both the Kullback-Leibler and Chernoff distances are proportional
to SNR. For non-Gaussian data (such as our compressed data), there is no direct relationship
between SNR and detection performance. We shall shortly see that the distances (11) and (15)
can still be conveniently evaluated in problems where data compression takes place. We
first examine conditions under which (11) and (15) give tight bounds on target detection.
4 Asymptotic Expressions
The Chernoff bounds (12), (13) and (14) on P_f, P_miss and P_e hold for any distribution
of the data and any sample size N. In many problems, N is large, the data contains
many independent components (see Sec. 2.4), and the central limit theorem applies to the
distribution of the log likelihood ratio. The results in Sec. 3 can then be strengthened
considerably; we refer the reader to Van Trees [17, Ch. 2.7] for a lucid exposition of the main
ideas and results. The first fundamental result is that -ln P_e is in fact asymptotic to
max_{s in (0,1)} D_s(p̃_0, p̃_1). While this gives a precise exponential
rate for the convergence of these probabilities to zero, the results can be further strengthened
using asymptotic integral expansion techniques. This yields exact asymptotic expressions
for P_f, P_miss and P_e. Specifically, define for notational convenience

    μ(s) = ln E_0[ L̃^s(Ĩ_D) ] = -D_s(p̃_0, p̃_1),

and let μ'(s), μ''(s) be the first and second derivatives of μ(s). Then, for large sample
size, there exists s in (0, 1) such that μ'(s) = ln τ and

    P_f ≈ [ s sqrt(2π μ''(s)) ]^{-1} e^{μ(s) - s μ'(s)},                   (19)
    P_miss ≈ [ (1-s) sqrt(2π μ''(s)) ]^{-1} e^{μ(s) + (1-s) μ'(s)}.        (20)

The exponential factors in (19) and (20) are equal to the upper bounds
in (12) and (13), but the central-limit-theorem analysis provides a multiplicative factor that
can be significant. If the prior probabilities of H_0 and H_1 are equal, one can combine (19)
and (20) to obtain the following asymptotic approximation to P_e:

    P_e ≈ [ 2 s(1-s) sqrt(2π μ''(s)) ]^{-1} e^{μ(s)}.                      (21)

For the Gaussian model above, maximization of (18) with respect to s gives the optimal
Chernoff exponent D_{1/2}(p̃_0, p̃_1) = SNR/8, and (21) becomes

    P_e ≈ sqrt(2/(π SNR)) e^{-SNR/8},                                      (22)

which holds for large SNR.
The applicability of these asymptotic conditions to target recognition is examined next.
5 Bounds for Target Detection Without Nuisance Parameter
We first consider Detection Problem 1 in Sec. 2.6, and derive performance bounds for the
optimal LRT detector (4). The logarithm of the likelihood ratio (6) is the sum of the
marginal log likelihood ratios ln L̃_n for each transform coefficient. Hence the Chernoff and
Kullback-Leibler distances in (11) and (15) are additive over the N transform coefficients:

    D_s(p̃_0, p̃_1) = sum_{n=1}^{N} D_s(p̃_0,n, p̃_1,n),                    (23)
    D(p̃_0, p̃_1) = sum_{n=1}^{N} D(p̃_0,n, p̃_1,n).                        (24)
This additivity property simplifies the analysis and design of some systems using (23) or
(24) as the optimality criterion. For instance, the paper [14] shows how to optimally design
transform coders subject to bit rate constraints using (23) as the performance measure, and
the thesis [15] compares the performance of wavelet and DCT coders using such performance
measures. The additivity property of Chernoff and Kullback-Leibler distances applies to any
problem in which the data can be partitioned into independent components (see Sec. 2.4).
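A small Python sketch that evaluates the additive Chernoff distance for the quantized-Gaussian model and the resulting bound on P_e; the per-coefficient bin probabilities are computed as in the earlier sketch, and equal priors are assumed:

import numpy as np
from scipy.stats import norm

def bin_probs(mean, sigma, edges):
    return np.diff(norm.cdf(edges, loc=mean, scale=sigma))

def chernoff_distance(clean_coeffs, sigma, edges, s):
    # D_s summed over independent quantized coefficients, Eq. (23).
    D = 0.0
    for c in clean_coeffs:
        p0 = bin_probs(0.0, sigma, edges)
        p1 = bin_probs(c, sigma, edges)
        D += -np.log(np.sum(p1**s * p0**(1.0 - s)))
    return D

def error_bound(clean_coeffs, sigma, edges, s=0.5):
    # Chernoff upper bound on P_e for equal priors, Eq. (14) with pi_0 = pi_1 = 1/2.
    return 0.5 * np.exp(-chernoff_distance(clean_coeffs, sigma, edges, s))

# The bound can be tightened by minimizing over s in (0, 1), e.g. on a grid:
# best = min(error_bound(coeffs, sigma, edges, s) for s in np.linspace(0.05, 0.95, 19))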
5.1 Example
To investigate the applicability of the above theory to target detection, we conducted experiments
using a database of T62 tank images generated using the PRISM 3 simulation package.
The images were corrupted by i.i.d. Gaussian sensor noise, as described in Sec. 2.2. Fig. 1
shows one such image at a given orientation, along with noisy sensor data I_D (SNR = 15 dB).
The noisy image data were compressed using a wavelet coder with Daubechies' length-4
D4 wavelet filter, four decomposition levels, and dead-zone scalar quantizers [16]. The dead
zone of these quantizers was twice the step size. Under this model, the received transform
coefficients c̃_D,n are independent. Their distributions p̃_0,n and p̃_1,n under hypotheses H_0 and
H_1 were computed by numerical integration of the Gaussian distribution for the unquantized
coefficients c_D,n over all quantization bins. Bit rates were estimated using the first-order
entropy

    R = -(1/N) sum_{n=1}^{N} sum_j p̃_1,n(j) log_2 p̃_1,n(j),              (25)

for the data in the presence of the target. (The entropy of the compressed image data in
the absence of a target is slightly lower.) Both hypotheses H_0 and H_1 were assumed to be
equally likely. We used the optimal LRT detector (4), (6) for the compressed data, with
threshold τ = π_0/π_1 = 1.
3 Courtesy Dr. Alvin Curran, Thermo Analytics, Calumet, MI.
We analyzed the effects of compression (measured by the bit rate of the compressed
data) on detection performance. The probability of error P_e is guaranteed to decrease with
increasing bit rate, because -ln P_e is in the Ali-Silvey class. Fig. 5 compares three estimates of the
probability of error P_e. The first estimate was computed using Monte-Carlo simulations with
different noise realizations. The accuracy of this estimate is very high due to the large number
of independent experiments performed. The second estimate, P̃_e,s, is computed using the
Chernoff upper bound (14) evaluated at s = 1/2 (the Bhattacharyya bound). The motivation
for choosing s = 1/2 is that this choice is quasi-optimal; see Fig. 6. For large bit rates,
quantization effects are negligible, and P_e tends to the probability of error for unquantized
data. Since those data are Gaussian-distributed, an exact expression for P_e is available. For
equal priors (π_0 = π_1 = 1/2), P_e = Q( sqrt(SNR)/2 ), where Q(z) = (1/sqrt(2π)) ∫_z^∞ e^{-x²/2} dx is
the Q-function. For large SNR, we have P_e ≈ sqrt(2/(π SNR)) exp(-SNR/8) [5]. From
(18), the Chernoff distance is maximum at s = 1/2, and so the Chernoff bound (14) is
(1/2) exp(-D_{1/2}(p̃_0, p̃_1)) = (1/2) exp(-SNR/8). This bound is approximately four times larger than
the actual P_e at high bit rates. At lower rates, the upper bound is slightly less conservative.
The third estimate of P e in Fig. 5 is discussed next.
5.2 On the Accuracy of Asymptotic Chernoff Approximations
In order to improve the upper bound (14), it is tempting to use the asymptotic expression
(21) for the Chernoff bound. Since this bound was established using central limit theorem
arguments, we expect it to be applicable when the log likelihood ratio is the sum of N independent
components, and N is large. However, in the problem of Sec. 5.1, these components
(the log likelihood ratios of the c̃_D,n) are not identically distributed, so the validity of (21) hinges on whether the central
limit theorem for independent, but not identically distributed components, applies. Roughly
speaking, this requires that any individual component ln L̃_n in the sum of log likelihood ratios
be small relative to the sum; more precisely, it is sufficient that the Lindeberg condition
holds [18, Ch. XV.6].
The Lindeberg condition is approximately satisfied for the application in Sec. 5.1, so the
asymptotic expression (21) is quite accurate, as shown in Fig. 5. In general, the Lindeberg
conditions can be expected to approximately hold for high-resolution imaging sensors, or
when multiple copies of the same scene (with different noise realizations) are available. These
conditions are not likely to be satisfied in applications involving targets with relatively few
pixels on target. Even for a relatively large target like the one in Fig. 1, the Lindeberg
conditions do not hold well at very low bit rates, because most transformed coefficients
are quantized to zero, and the log likelihood ratio is dominated by only a few significant
components.
6 Bounds for Target Detection With Nuisance Parameter
We now consider a more complicated scenario involving nuisance parameters θ modeled as
random, with prior π(θ). Under H_1, the distribution of the data Ĩ_D is the mixture
distribution (7).
As in Sec. 5, the performance of the optimal detector can be evaluated using Chernoff
bounds, which are tight under some conditions. However, here image coefficients are no
longer independent, and the log likelihood is no longer additive over these coefficients. Hence
the distances are given by N-dimensional integrals which in general cannot be evaluated
analytically. Fortunately, it is possible to derive bounds on these information-theoretic
distances that are useful and tractable performance measures.
6.1 Upper Bounds on Ali-Silvey Distances
To circumvent the difficulty of evaluating exact distances, we can compute an average distance,
averaged over θ, which turns out to be an upper bound on the exact Ali-Silvey distance.
From (3), the likelihood ratio is the weighted average of the conditional likelihood
ratios L̃(c̃_D | θ) = p̃_1(c̃_D | θ) / p̃_0(c̃_D):

    L̃(c̃_D) = ∫ π(θ) L̃(c̃_D | θ) dθ.                                      (27)

From Jensen's inequality [9], we have

    C( ∫ π(θ) L̃(c̃_D | θ) dθ ) <= ∫ π(θ) C( L̃(c̃_D | θ) ) dθ

for any convex function C(·) and any pdf π(θ). Hence for any Ali-Silvey measure of the form (8),

    d(p̃_0, p̃_1) <= f( ∫ π(θ) E_0[ C( L̃(c̃_D | θ) ) ] dθ ) = f( ∫ π(θ) f^{-1}( d(p̃_0, p̃_1(· | θ)) ) dθ ),   (28)

where the inequality holds because f is increasing, and the last equality follows from the
definition of Ali-Silvey distances in (8). The result (28) also applies if f is decreasing and C
is concave.
First we apply (28) to -ln P_e, which as discussed in Sec. 3 is an Ali-Silvey distance. In
this case, (28) gives

    P_e >= ∫ π(θ) P_e|θ dθ,                                               (29)

where P_e|θ refers to the probability of error given θ, and has been evaluated in Sec. 5.
According to (29), the probability of error is at least equal to the average of the conditional
probability of error.
For the Chernoff and Kullback-Leibler distances in (11) and (15), (28) yields

    D_s(p̃_0, p̃_1) <= -ln ∫ π(θ) e^{-D_s(p̃_0, p̃_1(· | θ))} dθ.           (30)

There are two important points to be made here. First, even if the performance index
d(p̃_0, p̃_1(· | θ)) were independent of θ, (28) would in general not be satisfied with equality.
A much stronger condition would need to be satisfied, namely, L̃(c̃_D | θ) would have to be
independent of θ for each c̃_D, implying that θ plays no role in the inference. Hence equality is
achieved in (28) only in trivial cases. Second, for nonlinear functions f, the expression (28)
is not the same as the average distance ∫ π(θ) d(p̃_0, p̃_1(· | θ)) dθ. However, when f is increasing
(resp. decreasing) and C is convex (resp. concave), Jensen's inequality implies that the
average distance is an upper bound on (28):

    f( ∫ π(θ) f^{-1}( d(p̃_0, p̃_1(· | θ)) ) dθ ) <= ∫ π(θ) d(p̃_0, p̃_1(· | θ)) dθ.   (31)

In particular, the inequality (31) applies to Chernoff distances. In this case, (28) can be
further upper-bounded by the average distance ∫ π(θ) D_s(p̃_0, p̃_1(· | θ)) dθ. We refer to (28) as
the "average f^{-1} distance" upper bound.
Next, let us see how the average bound on the Chernoff distance relates to P_e. From (14) and (30),

    P_e <= π_0^{1-s} π_1^s e^{-D_s(p̃_0, p̃_1)}   and   e^{-D_s(p̃_0, p̃_1)} >= ∫ π(θ) e^{-D_s(p̃_0, p̃_1(· | θ))} dθ.   (32)

Because the inequalities do not go in the same direction, the right-hand side of (32),
π_0^{1-s} π_1^s ∫ π(θ) e^{-D_s(p̃_0, p̃_1(· | θ))} dθ, can only serve as an approximation to P_e.
6.2 Lower Bounds on Ali-Silvey Distances
We also explore the possibility of having simple lower bounds on Kullback-Leibler and Chernoff
distances that provide upper bounds on P_f and P_e. Lower bounds on Chernoff distances
yield upper bounds on P_e, and hence unbeatable bounds on the performance of any target
detection algorithm. The minimization of a distance d(p̃_0, p̃_1) over all possible mixture distributions
p̃_1 of the form (7) is illustrated in Fig. 10a. One might conjecture that the
distance is lower-bounded by the distance corresponding to the least favorable θ:

    d(p̃_0, p̃_1) >= d(p̃_0, p̃_1(· | θ_worst)),   where   θ_worst = arg min_θ d(p̃_0, p̃_1(· | θ)).   (33)

However, this inequality does not hold in general 4 . Likewise, the inequality P_e <= P_e|θ_worst
does not hold in general. Still, the concept of a least-favorable θ plays a central role in the
asymptotic analysis of Sec. 6.3 below, as well as in minimax detection [21, Ch. 9]. From the
results in Sec. 5, we immediately obtain an upper bound on the probability of error of the
minimax detector:

    P_e|θ_worst <= π_0^{1-s} π_1^s e^{-D_s(p̃_0, p̃_1(· | θ_worst))}.       (34)
6.3 Asymptotic Expressions
Significant simplifications arise in an asymptotic scenario, as the asymptotic expressions for the probability of error are dominated by the least-favorable $\theta$. The classical paper [22] presents similar results in a closely related context. To understand the basic idea, consider the simple case of a prior distribution concentrated at two values $\theta_1$ and $\theta_2$, where $D_s(\tilde{p}_0, \tilde{p}_{1|\theta_1}) < D_s(\tilde{p}_0, \tilde{p}_{1|\theta_2})$. For large $N$, the distributions $\tilde{p}_{1|\theta_1}$ and $\tilde{p}_{1|\theta_2}$ become increasingly well separated, so their support sets become essentially disjoint. Then the Chernoff distance between $\tilde{p}_0$ and the two-component mixture $\tilde{p}_1$ is asymptotically determined by the less favorable of the two values, $\theta_1$. A formal proof of this result is beyond the scope of this paper; see [22] for an example of such analysis. A similar result holds if the prior distribution is concentrated at an arbitrary finite number of points, and even for continuous priors, under some smoothness assumptions. In other words, the inequality (33) holds with asymptotic equality for Chernoff distances:
$$D_s(\tilde{p}_0, \tilde{p}_1) \approx D_s(\tilde{p}_0, \tilde{p}_{1|\theta_{\mathrm{worst}}}), \qquad \text{where } \theta_{\mathrm{worst}} = \arg\min_{\theta\in\Theta} D_s(\tilde{p}_0, \tilde{p}_{1|\theta}). \qquad (35)$$
A tractable asymptotic approximation to $P_e$ is then obtained via (21), with the error exponent given by (35).
6.4 Example
In this section, we compare the bounds in Secs. 6.1 and 6.2 and the asymptotic approximation (35) in Sec. 6.3 with the actual $P_e$. Experiments were performed on the same database as in Sec. 5.1. Again, the images were corrupted by i.i.d. Gaussian sensor noise and compressed using the same wavelet coder. From (4) and (27), the optimal LRT takes the form
$$L(\tilde{c}_D) = \int_\Theta L(\tilde{c}_D|\theta)\, p(\theta)\, d\theta \;\mathop{\gtrless}_{H_0}^{H_1}\; \eta, \qquad (36)$$
where the prior $p(\theta)$ is uniform. In our implementation, we approximated the integral over $\theta$ by a summation over 36 orientations ($0^\circ, 10^\circ, \ldots, 350^\circ$). We used the LRT above and Monte-Carlo simulations to accurately estimate $P_e$. We also evaluated the average approximation (32) to $P_e$, the upper bound (34) on the probability of error of a minimax detector, and the asymptotic expression (21), (35) for $P_e$.
Figs. 7 and 8 show these quantities as a function of bit rate for tank and truck imagery at an average SNR of 14 dB. Bit rate was computed using the first-order entropy (25), which was approximated by an average over the 36 orientations, as described above. We found that the average approximation (32) to $P_e$ is relatively accurate for both tank and truck imagery. On the other hand, the upper bound (34) on $P_e$ is very loose for the tank data as compared to the truck image data. The asymptotic approximation (21), (35) is remarkably accurate for the truck data, but is off by a factor of approximately two for the tank data. This can be explained by examining Fig. 9, which shows SNR (which is proportional to the Kullback-Leibler and Chernoff distances in the case of uncompressed Gaussian data) as a function of the orientation parameter $\theta$. Clean tank images at certain angles have very low energy content. These are seen as negative spikes in the tank image SNR curve. The Chernoff distance for these worst-case angles gives an overly conservative upper bound on $P_e$. Moreover, the spikes are very narrow, so convergence to the asymptotic approximation (35) is slow. On the other hand, the clean image energy content does not vary much with orientation for truck imagery, so the lower- and upper-bound curves are relatively close, and the asymptotic approximation (35) is accurate. The variability of SNR with target orientation is shown in Fig. 9 for tank and truck imagery.
Though the average approximation to $P_e$ in (32) is close to $P_e$ for the tank imagery, this may not be true in general, as discussed in Sec. 6.1. The tightness of these bounds depends on the variation of SNR with orientation (see Fig. 9).
7 M-ary Hypothesis Testing: Multiple-Target Case
Until now, we have considered a binary detection problem in which the receiver decides whether a known target is present or not. This analysis can be extended to the case where the alphabet of possible targets consists of $M \ge 2$ possible targets. This includes our binary detection problem as a special case, in which $M = 2$ and the second hypothesis is a null hypothesis. Using notation similar to that in Sec. 2, we assume that for $1 \le i \le M$ there are nuisance parameters $\theta_i \in \Theta_i$ associated with target type $i$, and let $I_i(\theta_i)$ denote the image of target $i$ with nuisance parameter $\theta_i \in \Theta_i$. Hence, the detection problem in the transform domain can be formulated as an $M$-ary hypothesis test:
$$H_i: \quad \tilde{I}_D \sim \tilde{p}_i(\,\cdot\,|\,\theta_i), \qquad 1 \le i \le M. \qquad (37)$$
The parameters $\theta_i$ are modeled as random with priors $p(\theta_i)$; under $H_i$ the data $\tilde{I}_D$ follow a mixture distribution $\tilde{p}_i(\tilde{I}_D)$. Information-theoretic distances between pairs of such distributions can be derived using methods similar to the binary case.
For the $M$-ary hypothesis test in (37), the optimal decision under the minimum-probability-of-error rule is $\hat{H} = \arg\max_{1\le i\le M} [\pi_i\, \tilde{p}_i(\tilde{I}_D)]$, where $\pi_i$ is the prior probability for hypothesis $H_i$. The probability of error $P_e$, in this case, can be upper bounded using the union-of-events bound [23]:
$$P_e \le \sum_{i=1}^{M} \pi_i \sum_{j\ne i} P(i,j) \qquad (38)$$
$$\phantom{P_e} \le \sum_{i=1}^{M} \sum_{j\ne i} \pi_i^{1-s}\, \pi_j^{s}\, e^{-D_s(\tilde{p}_i, \tilde{p}_j)}, \qquad 0 < s < 1, \qquad (39)$$
where $P(i,j) = \Pr[\pi_j \tilde{p}_j(\tilde{I}_D) > \pi_i \tilde{p}_i(\tilde{I}_D) \mid H_i]$ is the probability of deciding $H_j$ when the correct hypothesis was $H_i$, and
$$\pi_i \Pr\!\left[\pi_j\, \tilde{p}_j(\tilde{I}_D) > \pi_i\, \tilde{p}_i(\tilde{I}_D) \mid H_i\right] + \pi_j \Pr\!\left[\pi_i\, \tilde{p}_i(\tilde{I}_D) > \pi_j\, \tilde{p}_j(\tilde{I}_D) \mid H_j\right] \qquad (40)$$
represents the probability of error for a binary hypothesis test between $H_i$ and $H_j$. The inequality (38) follows from
$$\Pr[\text{error} \mid H_i] \le \sum_{j\ne i} \Pr\!\left[\pi_j \tilde{p}_j(\tilde{I}_D) > \pi_i \tilde{p}_i(\tilde{I}_D) \mid H_i\right], \qquad (41)$$
and (39) follows from the Chernoff bound in (14). The Chernoff distance between distributions $\tilde{p}_i$ and $\tilde{p}_j$ takes the form
$$D_s(\tilde{p}_i, \tilde{p}_j) = -\ln E_i\!\left[L_{ji}^{\,s}(\tilde{I}_D)\right], \qquad 0 < s < 1,$$
where $L_{ji}(\tilde{I}_D) = \tilde{p}_j(\tilde{I}_D)/\tilde{p}_i(\tilde{I}_D)$ is the ratio of the pdf's under hypotheses $H_j$ and $H_i$.
The right-hand side of (39) gives an upper bound on $P_e$ in terms of Chernoff distances. For simple hypotheses, the sets $\Theta_i$ are singletons, and an analysis similar to Sec. 5 applies. The asymptotic tightness of the Chernoff bound on $P_e(i,j)$ for this case holds in the qualified sense discussed in Sec. 4. Likewise, inequalities (38) and (41) are tight if all the hypotheses $H_i$ are "far apart" from each other. Specifically, under some conditions (such as i.i.d. data), one pair of hypotheses $(i^*, j^*)$ dominates the bounds on $P_e$ in (39), asymptotically as $N \to \infty$. The pair $(i^*, j^*)$ is the one that has the least Chernoff distance $D_s(\tilde{p}_{i^*}, \tilde{p}_{j^*})$.
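A small sketch of how the union-of-events bound (38)-(39) is assembled from pairwise Chernoff distances, and how the asymptotically dominant pair is found. The distance matrix and priors below are placeholder values, and the $\pi_i^{1-s}\pi_j^{s}$ weighting follows the reconstructed form of (39) above (an assumption).

```python
import numpy as np

# Hypothetical pairwise Chernoff distances D_s(p_i, p_j) for M = 3 targets (symmetric, zero diagonal)
D = np.array([[0.0, 3.1, 2.4],
              [3.1, 0.0, 1.2],
              [2.4, 1.2, 0.0]])
priors = np.array([0.5, 0.3, 0.2])
s = 0.5

M = len(priors)
Pe_bound = 0.0
for i in range(M):
    for j in range(M):
        if i != j:
            Pe_bound += priors[i]**(1 - s) * priors[j]**s * np.exp(-D[i, j])
print("union/Chernoff upper bound on Pe:", Pe_bound)

# The asymptotically dominant pair (i*, j*) is the one with the smallest pairwise Chernoff distance
mask = ~np.eye(M, dtype=bool)
i_star, j_star = np.unravel_index(np.argmin(np.where(mask, D, np.inf)), D.shape)
print("dominant pair:", (i_star, j_star), "with distance", D[i_star, j_star])
```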
Proposition 7.1 below shows that $d(\tilde{p}_i, \tilde{p}_j)$ satisfies an inequality analogous to (28). The proof of the proposition is given in the appendix.
Proposition 7.1 Let $d(\tilde{p}_i, \tilde{p}_j) = f\!\left(E_i[C(L_{ji}(\tilde{I}_D))]\right)$ be an Ali-Silvey distance between distributions $\tilde{p}_i$ and $\tilde{p}_j$, where $L_{ji}(\tilde{I}_D) = \tilde{p}_j(\tilde{I}_D)/\tilde{p}_i(\tilde{I}_D)$ is the likelihood ratio between hypotheses $H_j$ and $H_i$, which depend on random parameters $\theta_i \in \Theta_i$ and $\theta_j \in \Theta_j$ with respective priors $p(\theta_i)$ and $p(\theta_j)$. If $f$ is increasing and $C$ is convex (or $f$ is decreasing and $C$ is concave), then
$$d(\tilde{p}_i, \tilde{p}_j) \le f\!\left(\int_{\Theta_i}\!\int_{\Theta_j} f^{-1}\!\left(d(\tilde{p}_{i|\theta_i}, \tilde{p}_{j|\theta_j})\right) p(\theta_i)\, p(\theta_j)\, d\theta_j\, d\theta_i\right). \qquad (42)$$
The assumptions of Proposition 7.1 hold for both Chernoff and Kullback-Leibler distances in (11) and (15). Using the average upper bound (42) on the Chernoff distance in (39), we obtain an average approximation to $P_e$ similar to that in Sec. 6. We also note that it is difficult to derive nontrivial lower bounds on $d(\tilde{p}_i, \tilde{p}_j)$, for the reasons described in Sec. 6.2. Finally, under asymptotic conditions, $P_e$ is given by (21), where the exponent takes an asymptotic form similar to (35):
$$D_s(\tilde{p}_i, \tilde{p}_j) \approx \min_{\theta_i\in\Theta_i,\; \theta_j\in\Theta_j} D_s(\tilde{p}_{i|\theta_i}, \tilde{p}_{j|\theta_j}).$$
The distance $d(\tilde{p}_i, \tilde{p}_j)$ corresponding to mixture distributions $\tilde{p}_i$ and $\tilde{p}_j$ is illustrated in Fig. 10b. The probability of error is asymptotically determined by the worst-case pair of angles, corresponding to mass distributions for $\theta_i$ and $\theta_j$.
8 Conclusion
We have developed a systematic framework in which to analytically characterize target recognition performance and facilitate the optimization of system parameters. The probability of error is usually an intractable function of system parameters, but information-theoretic distances such as the Kullback-Leibler and Chernoff distances can be advantageously used as performance measures. In some cases, simple analytical expressions can be obtained. These distances provide asymptotically tight bounds on $P_e$. We have studied and qualified the nature of the gap with respect to asymptotic performance in a practical target recognition problem. We reemphasize that our methodology is directly applicable to a broad class of object recognition problems. In the presence of nuisance parameters such as target pose or thermodynamic state, expressions for information-theoretic distances are often unwieldy, but convexity arguments show that the average distance is an upper bound on the true distance, and asymptotic arguments provide simple asymptotic approximations.
Due to their simplicity, these expressions provide insights into a problem that we have not dwelled upon: the optimal design of target recognition parameters, such as the parameters of a lossy image compression algorithm. This provides a theoretically motivated alternative to the heuristic design techniques used in the target recognition literature [8]. This issue needs to be explored in detail and is a challenging area for future research.
Acknowledgement. The authors thank Dr. Aaron Lanterman and Mark C. Johnson for reading a draft of this paper and making valuable suggestions.
A Proof of Proposition 7.1
We derive the upper bound (42) in two steps. First, we define the partially and fully conditioned likelihood ratios
$$L_{ji}(\tilde{I}_D \mid \theta_j) = \frac{\tilde{p}_{j|\theta_j}(\tilde{I}_D)}{\tilde{p}_i(\tilde{I}_D)}, \qquad (43)$$
$$L_{ji}(\tilde{I}_D \mid \theta_i, \theta_j) = \frac{\tilde{p}_{j|\theta_j}(\tilde{I}_D)}{\tilde{p}_{i|\theta_i}(\tilde{I}_D)}. \qquad (44)$$
Since $\tilde{p}_j(\tilde{I}_D) = \int_{\Theta_j} \tilde{p}_{j|\theta_j}(\tilde{I}_D)\, p(\theta_j)\, d\theta_j$, we have from (43), $L_{ji}(\tilde{I}_D) = \int_{\Theta_j} L_{ji}(\tilde{I}_D|\theta_j)\, p(\theta_j)\, d\theta_j$. Hence the first step follows directly from (28):
$$d(\tilde{p}_i, \tilde{p}_j) \le f\!\left(\int_{\Theta_j} E_i\!\left[C\!\left(L_{ji}(\tilde{I}_D|\theta_j)\right)\right] p(\theta_j)\, d\theta_j\right). \qquad (45)$$
From (43), the expectation in the right-hand side of (45) can be written as
$$E_i\!\left[C\!\left(L_{ji}(\tilde{I}_D|\theta_j)\right)\right] = \int \tilde{p}_i(\tilde{I}_D)\, C\!\left(\frac{\tilde{p}_{j|\theta_j}(\tilde{I}_D)}{\tilde{p}_i(\tilde{I}_D)}\right) d\tilde{I}_D = E_{j|\theta_j}\!\left[g\!\left(L_{ij}(\tilde{I}_D|\theta_j)\right)\right], \qquad (46)$$
where $L_{ij}(\tilde{I}_D|\theta_j) = 1/L_{ji}(\tilde{I}_D|\theta_j)$ and $g(y) = y\, C(1/y)$. It can be shown that if $C(x)$ is convex for $x > 0$, then $g(y) = y\, C(1/y)$ is convex for $y > 0$. Hence the argument of $E_{j|\theta_j}[\cdot]$ in (46) is a convex function of $L_{ij}(\tilde{I}_D|\theta_j)$. Moreover, (43) and (44) imply that $L_{ij}(\tilde{I}_D|\theta_j) = \int_{\Theta_i} L_{ij}(\tilde{I}_D|\theta_i, \theta_j)\, p(\theta_i)\, d\theta_i$. Hence, as in (28), (46) can be upper bounded by averaging over $\theta_i$:
$$E_i\!\left[C\!\left(L_{ji}(\tilde{I}_D|\theta_j)\right)\right] \le \int_{\Theta_i} E_{j|\theta_j}\!\left[g\!\left(L_{ij}(\tilde{I}_D|\theta_i, \theta_j)\right)\right] p(\theta_i)\, d\theta_i = \int_{\Theta_i} E_{i|\theta_i}\!\left[C\!\left(L_{ji}(\tilde{I}_D|\theta_i, \theta_j)\right)\right] p(\theta_i)\, d\theta_i, \qquad (47)$$
where the last equality follows by repeating the steps of (46) in reverse order with $L_{ji}(\tilde{I}_D|\theta_j)$ replaced by $L_{ji}(\tilde{I}_D|\theta_i, \theta_j)$. Equations (45) and (47) lead to (42). $\Box$
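The key technical step in the proof is that the perspective-type transform $g(y) = y\,C(1/y)$ of a convex $C$ is itself convex. The short numeric check below illustrates this for $C(x) = -\ln x$ (the Kullback-Leibler case) by testing midpoint convexity on a grid; it is only a sanity check of the claim, not part of the proof.

```python
import numpy as np

def C(x):           # convex function used in the KL-type Ali-Silvey distance
    return -np.log(x)

def g(y):           # perspective-type transform appearing in (46); here g(y) = y*ln(y)
    return y * C(1.0 / y)

ys = np.linspace(0.05, 5.0, 200)
ok = True
for a in ys[::10]:
    for b in ys[::10]:
        mid = 0.5 * (a + b)
        if g(mid) > 0.5 * (g(a) + g(b)) + 1e-9:   # midpoint convexity must hold
            ok = False
print("midpoint convexity of g(y) = y*C(1/y) holds on the grid:", ok)
```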
--R
"Aided and Automatic Target Recognition Based Upon Sensory Inputs from Image Forming Systems,"
IEEE Transactions on Image Processing
"Automatic target recognition via jump-di#usion algorithms,"
"Hilbert-Schmidt Lower Bounds for Estimators on Matrix Lie Groups for ATR,"
An Introduction to Signal Detection and Estimation.
Boston: Academic Press
"Bounds on Shape Recognition Performance,"
"Data Compression Issues in Automatic Target Recognition and the Measuring of Distortion,"
Elements of Information Theory.
"A general class of coe#cients of divergence of one distribution from another,"
"Optimal quantization for signal detection,"
"Applications of Ali-Silvey distance measures in the design of generalized quantizers for binary decision systems,"
"Optimal Design of Transform Coders for Image Classification,"
"Image recognition from compressed data: performance analysis,"
A Wavelet Tour of Signal Processing
Modulation Theory
An Introduction to Probability Theory and Its Applications
Large Deviation Techniques in Decision
Convex Analysis
" Information-Theoretic Asymptotics of Bayes Meth- ods,"
New York: McGraw-Hill
--TR
--CTR
Lavanya Vasudevan , Antonio Ortega , Urbashi Mitra, Application-specific compression for time delay estimation in sensor networks, Proceedings of the 1st international conference on Embedded networked sensor systems, November 05-07, 2003, Los Angeles, California, USA
M. E. Shaikin, Statistical Estimation and Classification on Commutative Covariance Structures, Automation and Remote Control, v.64 n.8, p.1264-1274, August | object recognition;multisensor data fusion;imaging sensors;performance metrics;data compression;automatic target recognition |
628823 | A Maximum Variance Cluster Algorithm. | AbstractWe present a partitional cluster algorithm that minimizes the sum-of-squared-error criterion while imposing a hard constraint on the cluster variance. Conceptually, hypothesized clusters act in parallel and cooperate with their neighboring clusters in order to minimize the criterion and to satisfy the variance constraint. In order to enable the demarcation of the cluster neighborhood without crucial parameters, we introduce the notion of foreign cluster samples. Finally, we demonstrate a new method for cluster tendency assessment based on varying the variance constraint parameter | Introduction
Data clustering is an extensively investigated problem for which many algorithms have been reported
[18], [26]. Roughly, cluster algorithms can be categorized in hierarchical and partitional algorithms.
Hierarchical algorithms deliver a hierarchy of possible clusterings, while partitional cluster algorithms
divide the data up into a number of subsets. In partitional cluster analysis most algorithms assume the
number of clusters to be known a priori. Because in many cases the number of clusters is not known
in advance, additional validation studies are used to find the optimal partitioning of the data [6], [8],
[11], [16].
In this paper, we propose an algorithm for partitional clustering that minimizes the within cluster
scatter with a constraint on the cluster variance. Accordingly, in contrast to many other cluster
algorithms, this method finds the number of clusters automatically. Clearly, a proper value for the
variance constraint parameter has to be selected. We present a way to discover cluster tendencies to
find significant values for this variance parameter in case this information is not available from the
problem domain. We first formally define the cluster problem.
* The authors are with the Department of Mediamatics, Faculty of Information Technology and Systems, Delft University of Technology, P.O. Box 5031, 2600 GA, Delft, The Netherlands. E-mail: {C.J.Veenman, M.J.T.Reinders,
Let $X = \{x_1, x_2, \ldots, x_N\}$ be a data set of vectors in a $p$-dimensional metric space. Then, the cluster problem is to find a clustering of $X$ in a set of clusters $C = \{C_1, \ldots, C_M\}$, where $M$ is the number of clusters, such that the clusters $C_i$ are homogeneous and the union of clusters is inhomogeneous.
The most widely used criterion to quantify cluster homogeneity is the sum-of-squared-error criterion or simply the square-error criterion
$$J_e = \frac{1}{N} \sum_{i=1}^{M} H(C_i), \qquad (1)$$
where
$$H(Y) = \sum_{x \in Y} \| x - \mu(Y) \|^2 \qquad (2)$$
expresses the cluster homogeneity and
$$\mu(Y) = \frac{1}{|Y|} \sum_{x \in Y} x \qquad (3)$$
is the cluster mean.
Straight minimization of (1) leads to a trivial clustering with $N$ clusters: one for each sample. Therefore, additional constraints are imperative. For instance, one could fix $M$, the number of clusters, to an a priori known number, as in, among others, the widely used K-means model [23]. In the image segmentation domain, a maximum variance per cluster is sometimes used in addition to a spatial connectivity constraint, e.g. [1], [15]. In this paper, we present an algorithm that is based on a model proposed for intensity-based image segmentation [28]. The constraint that is imposed on the square-error criterion (1) within this model states that the variance of the union of two clusters must be higher than a given limit $\sigma^2_{\max}$:
$$\forall\, C_i, C_j,\; i \ne j:\quad \sigma^2(C_i \cup C_j) \ge \sigma^2_{\max}, \qquad \text{where } \sigma^2(Y) = \frac{H(Y)}{|Y|}. \qquad (4)$$
A consequence of this model is that the variance of each resulting cluster is generally below $\sigma^2_{\max}$. This does, however, not imply that we could impose a maximum variance constraint on each individual cluster instead. That is, if we would replace the joint variance constraint (4) with a constraint for individual clusters ($\forall C_i: \sigma^2(C_i) \le \sigma^2_{\max}$), the minimization of (1) would lead to a trivial solution with one sample per cluster.
Clearly, since the model imposes a variance constraint instead of fixing the number of clusters, the resulting optimal clustering can be different from the K-means result, even if the final number of clusters is the same.
1 Usually the sum-of-squared-error criterion is not averaged over the whole data set. As defined here, J e expresses the
average distance to the cluster centroids instead of the total distance.
Figure 1: Illustration of the cluster neighborhood with the 2 nearest-neighbors method. In (a) the neighbor ranking for sample A is shown. (b) and (c) display the neighborhood of the grey colored cluster with respectively 3 and 4 samples, where in (c) the set of expansion candidates is empty.
2 Algorithm
For the optimization of the cluster model, we propose a stochastic optimization algorithm. In the
literature other stochastic clustering algorithms have been reported that generally optimize the K-means
model or fuzzy C-means model either using simulated annealing techniques [7], [20], [25]
or using evolutionary computation techniques [10], [13], [22], [29]. Accordingly, these stochastic
approaches focus on the optimization of known cluster models. The algorithm we propose, however,
shows more resemblance with the distributed genetic algorithm (DGA) for image segmentation as
introduced by Andrey and Tarroux [2], [3]. We also dynamically apply local operators to gradually
improve a set of hypothesized clusters, but, in contrast with the DGA approach, we consider the
statistics of the whole cluster in the optimization process. Before describing the algorithm itself, we
first elaborate on the neighborhood relationships of samples, which play a crucial role in the proposed
algorithm.
Both for effectiveness and efficiency the algorithm exploits locality in the feature space. Namely,
the most promising candidates for cluster expansion are in clusters that are close in the feature space.
Similarly, the most distant cluster members are the least reliable, hence, they are the first candidates
for removal. The computational performance can also profit from feature locality because cluster
update operations can be executed in parallel when the optimization process applies locally. For
these reasons, we consider the optimization process from the individual clusters' point of view, i.e.
each cluster can execute a number of actions in order to contribute to an improvement of the criterion
as well as to satisfy the variance constraint.
In order to collect the expansion candidates of a cluster, we need to find neighboring samples of
that cluster. A common way to define the neighborhood of a sample is to collect its k nearest neighbors
using the Euclidean distance measure, where k is a predefined number. In this way, the neighborhood
of a cluster would be the union of the neighbors of all samples in the cluster. Accordingly, the set
of expansion candidates of a cluster consists of the samples from its neighborhood, excluding the
Figure 2: Illustration of the cluster neighborhood with the 8 nearest-neighbors method. In (a) the neighbor ranking for sample A is shown. (b) and (c) display the neighborhood of the grey colored cluster with respectively 3 and 4 samples.
samples from the cluster itself. The problem with this approach is that the value of k becomes an
integral part of the cluster model, e.g. [12], [19], [24]. If k is set too low, then even for small clusters
all k nearest neighbors are in the cluster itself, so there are no expansion candidates left, see Fig. 1.
On the other hand, if k is set too high, then the neighborhood is always large, so all clusters have a
major part of the samples as expansion candidates, which clearly violates the idea of locality. As a
consequence, the set of expansion candidates will be a mix of good and bad candidate samples without
preference, see Fig. 2.
We take another approach to collect the expansion candidates. First, we call the set of expansion
candidates of cluster C a the outer border B a of cluster C a . Further, we introduce the notion of foreign
samples, which we define as neighboring samples that are not in the cluster itself. Accordingly, the
k-th order outer border B a of cluster C a is the union of the k nearest foreigners of all samples in C a ,
leading to:
$$B_a = \bigcup_{x \in C_a} F(x, k, C_a, X), \qquad (5)$$
where $F(x, k, C_a, Y)$ is the set of $k$ nearest foreigners of $x$ according to:
$$F(x, k, C_a, Y) = \begin{cases} \{\,nf(x, C_a, Y)\,\} \cup F(x, k-1, C_a, Y - \{nf(x, C_a, Y)\}), & \text{if } k > 0 \\ \emptyset, & \text{if } k = 0 \end{cases} \qquad (6)$$
and $nf(x, C_a, Y)$ is the nearest foreigner of sample $x \in C_a$ in $Y$ defined as:
$$nf(x, C_a, Y) = \arg\min_{y \in Y - C_a} d(x, y). \qquad (7)$$
Consequently, the outer border of a cluster always has a limited number of samples and it never
becomes empty (unless there is only one cluster left). In Fig. 3 we illustrate how a second-order outer
border evolves with the growing of a cluster. An appropriate value for the order of the outer border
depends on the constellation of the clusters and the actual data.
Figure 3: The figures illustrate the cluster border construction with respect to sample A and its cluster. In (a) the distance ranking for sample A is shown and (b) and (c) display the second-order border of the grey colored cluster with respectively 3 and 4 samples.
Besides the expansion candidates, we also need to collect candidates for removal from the cluster
in order to impose the variance constraint. To this end, we introduce the q-th order inner border I a
of cluster C a . The inner border I a consists of those samples that are the furthest cluster mates of the
samples in $C_a$. Accordingly, the $q$-th order inner border can be expressed as follows:
$$I_a = \bigcup_{x \in C_a} G(x, q, C_a), \qquad (8)$$
where $G(x, q, Y)$ is the set of $q$ furthest cluster mates of $x$, or, in other words, the $q$ furthest neighbors of $x$ in $C_a$ according to:
$$G(x, q, Y) = \begin{cases} \{\,fn(x, Y)\,\} \cup G(x, q-1, Y - \{fn(x, Y)\}), & \text{if } q > 0 \\ \emptyset, & \text{if } q = 0 \end{cases} \qquad (9)$$
and $fn(x, Y)$ is the furthest neighbor of $x$ in $Y$ as in:
$$fn(x, Y) = \arg\max_{y \in Y - \{x\}} d(x, y). \qquad (10)$$
Since the set of foreigners of a sample changes every time the cluster is updated, for efficiency reasons we introduce a rank list $R_i$ per sample $x_i$, containing indices to all other samples in $X$ in order of their distance to the given sample. The rank list $R_i$ is an $N$-tuple defined as $R_i = R(x_i, X)$ according to:
$$R(x, Y) = \big(\,nn(x, Y)\,\big) \oplus R\big(x, Y - \{nn(x, Y)\}\big), \qquad (11)$$
where $\oplus$ is the concatenate operator and $nn(x, Y)$ is the nearest neighbor of $x$ in $Y$ as in:
$$nn(x, Y) = \arg\min_{y \in Y - \{x\}} d(x, y). \qquad (12)$$
The k nearest foreigners of a cluster sample can now easily be found by scanning the rank list,
starting at the head while skipping those elements that are already in the cluster. To this end, some
bookkeeping is needed for both the clusters and the samples.
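To make the border definitions (5)-(10) concrete, the sketch below computes the k-th order outer border and the q-th order inner border of a cluster, using precomputed rank lists and the scanning procedure described in the text. Variable names and the toy data are our own; the paper does not prescribe a particular implementation.

```python
import numpy as np

def rank_lists(X):
    """R[i] lists the indices of all other samples in order of distance to x_i (Eqs. 11-12)."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    return np.argsort(d, axis=1)[:, :-1]          # drop the sample itself (ranked last)

def outer_border(cluster, R, k):
    """k-th order outer border B_a: union of the k nearest foreigners of all members (Eqs. 5-7)."""
    members = set(cluster)
    border = set()
    for i in cluster:
        foreigners = [j for j in R[i] if j not in members][:k]   # scan rank list, skip cluster mates
        border.update(foreigners)
    return border

def inner_border(cluster, X, q):
    """q-th order inner border I_a: union of the q furthest cluster mates of all members (Eqs. 8-10)."""
    members = list(cluster)
    border = set()
    for i in members:
        d = [(np.linalg.norm(X[i] - X[j]), j) for j in members if j != i]
        border.update(j for _, j in sorted(d, reverse=True)[:q])
    return border

# toy usage
X = np.random.default_rng(1).normal(size=(20, 2))
R = rank_lists(X)
cluster = [0, 3, 5, 7]
print("outer border:", outer_border(cluster, R, k=2))
print("inner border:", inner_border(cluster, X, q=1))
```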
After the definitions of the inner and outer border of a cluster, we now describe the maximum
variance cluster (MVC) algorithm. Since the optimization of the model defined in (1) - (4) is certainly
an intractable problem, exhaustive search of all alternatives is out of the question. In order to prevent
early convergence of the consequent approximate optimization process, we introduce sources of non-determinism
in the algorithm [4].
The algorithm starts with as many clusters as samples. Then in a sequence of epochs every cluster
has the possibility to update its content. Conceptually, in each epoch the clusters act in parallel or
alternatively sequentially in random order. During the update process, cluster C a performs a series of
tests each of which causes a different update action for that cluster.
1. Isolation
First $C_a$ checks whether its variance exceeds the predefined maximum $\sigma^2_{\max}$. If so, it randomly
takes a number of candidates i a from its inner border I a proportional to the total number of
samples in I a . It isolates the candidate that is the furthest from the cluster mean -(C a ). It
takes a restricted number of candidates to control the greed of this operation. Then the isolated
sample forms a new cluster (resulting in an increase of the number of clusters).
2. Union
If, on the other hand, $C_a$ is homogeneous (its variance is below $\sigma^2_{\max}$), it checks whether it can
unite with a neighboring cluster, where a neighboring cluster is a cluster that contains a foreign
sample of C a . To this end, it computes the joint variance with its neighbors. If the lowest joint
variance remains under # 2
, then the corresponding neighbor merges with C a (resulting in a
decrease of the number of clusters).
3. Perturbation
Finally, if none of the other actions applies, the cluster C a attempts to improve the criterion by
randomly collecting a number of candidates b a from its outer border B a . Again, to control the
greed, a restricted number of candidates is selected proportional to the size of the border. Then
C a ranks these candidates with respect to the gain in the square-error criterion when moving
them from the neighboring cluster C b to C a .
We define the criterion gain between $C_a$ and $C_b$ with respect to $x \in C_b$ as the decrease in the within-cluster scatter obtained by moving $x$ from $C_b$ to $C_a$:
$$\Delta J_e(x, C_b \to C_a) = \big[ H(C_b) + H(C_a) \big] - \big[ H(C_b - \{x\}) + H(C_a \cup \{x\}) \big]. \qquad (13)$$
If the best candidate has a positive gain, then this candidate moves from the neighbor to $C_a$. Otherwise, there is a small probability $P_d$ of occasional defect, which forces the best candidate to move to $C_a$ irrespective of the criterion contribution.
Figure 4: In (a) the R15 data set is shown, which is generated as 15 similar 2-D Gaussian distributions. In (b) $J_e$ and $M$ are displayed as a function of $\sigma^2_{\max}$ for the R15 data set.
Because of the occasional defect, no true convergence of the algorithm exists. Therefore, after a certain number of epochs $E_{\max}$, we set $P_d = 0$. Further, since it is possible that at the minimum of the constrained optimization problem (1)-(4) the variance of some clusters exceeds $\sigma^2_{\max}$ (exceptions to the general rule mentioned in Section 1), after $E_{\max}$ epochs isolation is also no longer allowed, in order to prevent algorithm oscillations. With these precautions the algorithm will certainly converge, since the overall homogeneity criterion only decreases and it is always greater than or equal to zero. In case two clusters unite, the criterion may increase, but the number of clusters is finite. Still, we have to wait for a number of epochs in which the clusters have not changed, due to the stochastic sampling of the border.
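The following toy implementation sketches one way to organize the epoch loop with the three actions (isolation, union, perturbation). It simplifies several aspects of the method described above: neighbors are found by brute-force nearest-foreigner search rather than sampled borders, only the single best border sample is considered for perturbation, and isolation is simply disabled together with the defects in the final epochs. Function and parameter names are our own.

```python
import numpy as np
import random

def scatter(X, idx):
    """Sum of squared distances to the cluster mean, H(Y) in Eq. (2)."""
    pts = X[list(idx)]
    return float(np.sum((pts - pts.mean(axis=0)) ** 2))

def variance(X, idx):
    """Cluster variance sigma^2(Y) = H(Y)/|Y| as in Eq. (4)."""
    return scatter(X, idx) / len(idx)

def nearest_foreign_cluster(X, clusters, a):
    """Label of the neighboring cluster containing the nearest foreigner of cluster a."""
    best, best_d = None, np.inf
    for b, C_b in clusters.items():
        if b == a or not C_b:
            continue
        d = min(np.linalg.norm(X[i] - X[j]) for i in clusters[a] for j in C_b)
        if d < best_d:
            best, best_d = b, d
    return best

def mvc(X, sigma2_max, epochs=60, P_d=0.02, seed=0):
    rng = random.Random(seed)
    clusters = {i: {i} for i in range(len(X))}           # start with one sample per cluster
    for e in range(epochs):
        if e >= epochs - 10:                             # final epochs: no defects, no isolation
            P_d = 0.0
        for a in rng.sample(list(clusters.keys()), len(clusters)):
            C_a = clusters.get(a)
            if not C_a:
                continue
            # 1. isolation
            if P_d > 0 and len(C_a) > 1 and variance(X, C_a) > sigma2_max:
                mu = X[list(C_a)].mean(axis=0)
                x = max(C_a, key=lambda i: np.linalg.norm(X[i] - mu))
                C_a.remove(x)
                clusters[max(clusters) + 1] = {x}
                continue
            b = nearest_foreign_cluster(X, clusters, a)
            if b is None:
                continue
            # 2. union
            if variance(X, C_a | clusters[b]) <= sigma2_max:
                C_a |= clusters.pop(b)
                continue
            # 3. perturbation: try to take the neighbor's closest border sample
            x = min(clusters[b], key=lambda j: min(np.linalg.norm(X[j] - X[i]) for i in C_a))
            if len(clusters[b]) > 1:
                gain = (scatter(X, clusters[b]) + scatter(X, C_a)) \
                     - (scatter(X, clusters[b] - {x}) + scatter(X, C_a | {x}))
            else:
                gain = -1.0
            if gain > 0 or rng.random() < P_d:           # occasional defect
                clusters[b].discard(x)
                C_a.add(x)
    return [sorted(C) for C in clusters.values() if C]
```

With $\sigma^2_{\max}$ set near the within-cluster variance of the generating distributions, this toy version reproduces the qualitative behavior described above on small 2-D data sets.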
3 Experiments
In this section we demonstrate the effectiveness of the proposed maximum variance cluster (MVC)
algorithm with some artificial and real data sets. First, however, we show that the maximum variance
constraint parameter can be used for cluster tendency assessment.
Clearly, since the clustering result depends on the setting of $\sigma^2_{\max}$, the square-error criterion $J_e$ also changes as a function of $\sigma^2_{\max}$. Accordingly, cluster tendencies can be read from trends in $J_e$. Consider for instance the data set shown in Fig. 4(a). The corresponding square-error curve resulting from varying $\sigma^2_{\max}$ can be seen in Fig. 4(b).
Figure 5: Cumulative distribution of the maximum plateau strength for differently sized random 2-D data sets. The distributions have been calculated from 1000 independent draws.
The figure shows some prominent plateaus in the square-error crite-
rion. Clearly, these plateaus can both be caused by hierarchical cluster structures and random sample
patterns. If there is real structure in the data then the variance constraint can be increased up to the
moment that true clusters are lumped together. On the other hand, if the resulting clustering is random
in character, then the clusters will easily be rearranged when the variance constraint is increased.
Let $\sigma^2_A$ be the starting point of a $J_e$ plateau and $\sigma^2_B$ the end point of that plateau, as for example in Fig. 4(b). Then, we define the strength $S$ of a plateau as the ratio:
$$S = \frac{\sigma^2_B}{\sigma^2_A}. \qquad (14)$$
Accordingly, the strength of a plateau gives an indication of whether or not the corresponding
clustering represents real structure in the data. Intuitively, we expect that two real clusters will be
lumped together if the variance constraint is higher than roughly twice their individual variance. This
implies that the strength of a plateau in J e should be greater than 2 in order to be a significant plateau,
that is, to represent real structure. To test this hypothesis, we did experiments with uniform random
data and various numbers of samples. For each data set, we measured the maximum plateau strength $S_{\max}$ and subsequently computed the distribution of $S_{\max}$. In Fig. 5 we show the cumulative distribution
of S max for different sizes of the random data sets. Only when N was very low (N < 50),
significant plateaus were occasionally found, which is to be expected with low numbers of samples.
On the other hand, the experiments with structured data, among which the ones that we describe in
this section, indeed resulted in significant J e plateaus.
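The sketch below shows how plateaus and their strengths (Eq. 14) can be extracted from a recorded tendency curve: consecutive $\sigma^2_{\max}$ settings yielding the same number of clusters form a plateau, and its strength is the ratio of its end points. The input arrays are illustrative, not measured data.

```python
import numpy as np

def plateaus(sigma2_values, num_clusters, min_start=1e-6):
    """Return (start, end, strength, M) for every plateau in a (sigma^2_max, M) tendency curve."""
    out = []
    i = 0
    while i < len(sigma2_values):
        j = i
        while j + 1 < len(num_clusters) and num_clusters[j + 1] == num_clusters[i]:
            j += 1
        a, b = sigma2_values[i], sigma2_values[j]
        if a > min_start:                       # rule out tiny fractal-like plateaus
            out.append((a, b, b / a, num_clusters[i]))
        i = j + 1
    return out

# illustrative tendency curve (sigma^2_max settings and resulting number of clusters M)
s2 = np.array([2.0, 4.2, 6.0, 10.0, 16.5, 20.0, 40.0, 67.5, 90.0, 122.0, 150.0])
M  = np.array([ 22,  15,  15,   15,   15,   11,    9,    7,    7,     7,     3])
for a, b, S, m in plateaus(s2, M):
    print(f"plateau [{a:g}, {b:g}] with M={m}: strength S={S:.2f}",
          "(significant)" if S > 2 else "")
```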
The advantage of this cluster tendency assessment approach is that the same model is used for
the clustering as for the cluster tendency detection. In our view the usual approach, where different
criteria are used for the clustering and the detection of the cluster tendency, is undesirable, like for
instance in [6], [8], [11], [16]. That is, the cluster algorithm may not be able to find the clustering
corresponding to the local minimum or knee in the cluster tendency function 2 .
Some additional remarks need to be made about the significance of J e plateaus. First, it has to
be noted that, because of fractal-like effects, $\sigma^2_A$ must be higher than a certain value in order to rule
out extremely small 'significant' plateaus. Second, in case there are multiple significant plateaus,
these plateaus represent the scales at which the data can be considered. Then, the user can select the
appropriate scale and corresponding plateau. In our view, there is no best scale in these cases, so the
selection is fully subjective.
In all experiments, we compared the performance of the MVC algorithm to the K-means algorithm
[23], and the Gaussian mixtures modeling (GMM) method with likelihood maximisation [18] using
the EM algorithm [9]. For both the K-means and the GMM method the numbers of clusters is set to the
resulting number (M) found by the MVC algorithm. Further, since both the MVC and the K-means
algorithm prefer circular shaped clusters we constrained the Gaussian models of the GMM to be
circular too in order to reduce its number of parameters. For the MVC algorithm we set $P_d$, $E_{\max}$, and the numbers of inner- and outer-border candidates $|i_a|$ and $|b_a|$ to fixed values. It has to be noted that these
parameter values appeared to be not critical (experiments not included). They merely serve to tune the
convergence behavior similar as in other non-deterministic optimization algorithms, like for instance
the mutation rate and population size parameters in genetic algorithms [14]. Since all algorithms
have non-deterministic components 3 , we ran them 100 times on each data set and display the result
with the lowest square-error (MVC and K-means) or highest likelihood (GMM), i.e. the best solution
found. Further, we measured the average computation time, and the number of times the best solution
was found (hit rate). For the MVC algorithm we did not add the computation time of the rank lists,
since this time only depends on the size of the data set and not on the structure. Moreover, these lists
have to be computed only once for a data set and can then be used for tendency assessment and the
subsequent runs to find the optimal clustering.
We start with the already mentioned data set from Fig. 4(a), consisting of 15 similar 2-D Gaussian clusters that are positioned in rings (R15). Though we know an estimate of the variance of the clusters, we first varied the $\sigma^2_{\max}$ constraint in order to discover the cluster tendency. Fig. 4(b) shows the resulting curves for $J_e$ and the number of found clusters $M$. The figure shows a number of prominent plateaus in $J_e$, of which the first, $[4.20, 16.5]$, has strength $S = 16.5/4.20 \approx 3.9$. This significant plateau corresponds to the originating structure of 15 clusters. Further, there is a large plateau $[67.5, 122]$ with strength $122/67.5 \approx 1.8$, which corresponds to the clustering where all inner clusters are merged into one cluster. This plateau is, however, not significant according to our definition. This is because
2 When additional criteria are used to discover cluster tendencies, they are usually called cluster validity functions or indices.
3 The K-means and GMM algorithm are initialized with randomly chosen cluster models.
Figure 6: Results of applying the clustering algorithms to the R15 data set. In (a) the result of the MVC and the K-means algorithm with 15 clusters is shown and in (b) the result of the GMM method with 15 clusters is shown.
Table 1: Statistical results of applying the algorithms (MVC, K-means, GMM) to the R15 data set: method, parameter, hit rate, M, and time (ms).
the total variance of the clusters lumped in the center is much higher than the variance of the outer
clusters. The resulting clusterings for the MVC and the K-means algorithm were the same (Fig. 6(a)); the result of the GMM method is shown in Fig. 6(b). Table 1 shows that the MVC algorithm is clearly more robust in converging towards the (possibly
local) minimum of its criterion. That is, the hit rate for the MVC algorithm is much higher than for
the K-means and the GMM algorithm. Further, the K-means algorithm, which is known to be efficient, is indeed the fastest.
The next artificial data set consists of three clusters with some additional outliers (O3), see
Fig. 7(a). Again, we first varied the $\sigma^2_{\max}$ parameter for the MVC algorithm in order to discover cluster tendencies. Although we roughly know the variance of the clusters, in this case it is certainly useful to search for the proper $\sigma^2_{\max}$ value, since the outliers may disrupt the original cluster variances. Fig. 7(b) clearly shows only one prominent plateau, $[22.0, 82.0]$. This plateau is significant, because its strength is $82.0/22.0 \approx 3.7$. In the corresponding clustering result all three outliers are put in
separate clusters leading to a total of six clusters, as is shown in Fig. 8(a). Because the K-means
Figure 7: In (a) the O3 data set is shown, which is generated as 3 similar 2-D Gaussian distributions with some additional outliers. In (b) $J_e$ and $M$ are displayed as a function of the $\sigma^2_{\max}$ parameter.
algorithm does not impose a variance constraint, it could find a lower square-error minimum and a corresponding clustering different from that of the MVC, as can be seen in Fig. 8(b). The algorithm split one cluster instead of putting the outliers in separate clusters. This supports the statement that using one model for the detection of cluster tendencies and another for the clustering is undesirable. Also the GMM algorithm was not able to find the MVC solution (see Fig. 8(c)), though the MVC solution indeed had a higher likelihood. With a lower number of clusters, the K-means and the GMM algorithm merged two true clusters and put the outliers in one cluster. Table 2 shows the statistics of this experiment. Again,
the K-means and GMM algorithm were clearly less robust in finding their respective (local) criterion
optimum than the MVC and the K-means was the fastest.
We repeated this experiment several times with different generated clusters and outliers. The
results were generally the same as described above, i.e. if there was a difference between the cluster
results of the algorithms, the MVC handled the outliers better by putting them in separate clusters or
it converged more often to its criterion optimum.
For the last synthetic experiment, we used a larger data set (D31) consisting of 31 randomly placed Gaussian clusters of 100 samples each, see Fig. 9(a). The tendency curve resulting from varying $\sigma^2_{\max}$ for the MVC algorithm shows one significant plateau, $[0.0030, 0.0062]$ (strength $0.0062/0.0030 \approx 2.1$), which corresponds to the original 31 clusters. Remarkably, the K-means and the GMM algorithm were not able to find the originating cluster structure, not even after 10000 trials. The statistical results in Table 3 show that the MVC algorithm consistently found the real structure, while the difference in computation time between the algorithms becomes small.
Figure 8: Results of applying the clustering algorithms to the O3 data set. In (a) the result of the MVC algorithm is shown, resulting in 6 clusters. (b) and (c) show the results of the K-means and GMM algorithm, respectively. The GMM puts a remote cluster sample in a separate cluster.
Table 2: Statistical results of applying the algorithms (MVC, K-means, GMM) to the O3 data set: method, parameter, hit rate, M, and time (ms). The hit rate of the GMM method certainly refers to a local maximum.
Figure 9: In (a) the D31 data set is shown, which is generated as 31 similar 2-D Gaussian distributions. In (b) $J_e$ and $M$ are displayed as a function of the $\sigma^2_{\max}$ constraint parameter.
Table 3: Statistical results of applying the algorithms (MVC, K-means, GMM) to the D31 data set: method, parameter, hit rate, M, and time (ms). The K-means and GMM algorithm were not able to find the originating structure, so the hit rate refers to a local optimum.
Next, we applied the algorithms to some real data sets. We started with the German Towns data set, which consists of 2-D coordinates of 59 German towns (pre-'Wende' situation). In order to find a significant clustering result, we again varied the $\sigma^2_{\max}$ parameter for the MVC algorithm. The resulting curves of $J_e$ and $M$ are displayed in Fig. 10(a). The two plateaus $[1290, 1840]$ and $[1940, 2810]$ have strengths $1840/1290 \approx 1.4$ and $2810/1940 \approx 1.4$, respectively. Although both plateaus are not significant, we show the clustering result of the first plateau with 4 clusters in Fig. 10(b), which equals the result of the K-means algorithm with $K = 4$. The GMM algorithm came up with a different solution consisting of three main clusters and one cluster containing a single sample. When we visually inspect the data in Fig. 10(b), we can conclude that it is certainly arguable whether this data set contains significant structure. Table 4 shows similar hit rates as before, and the K-means algorithm was again the fastest.
Finally, we processed the well-known Iris data set with the three algorithms. The Iris data set is actually a labeled data set consisting of three classes of irises, each characterized by four features. Fig. 11 illustrates the cluster tendencies resulting from varying $\sigma^2_{\max}$ for the MVC algorithm. The figure displays several plateaus, of which $[0.76, 1.39]$ and $[1.40, 4.53]$ are the strongest.
Figure 10: (a) shows $J_e$ and $M$ as a function of the $\sigma^2_{\max}$ constraint parameter for the German Towns data set. In (b) the clustering result of the MVC and the K-means algorithm with 4 clusters is displayed.
Table 4: Statistical results of applying the algorithms (MVC, K-means, GMM) to the German Towns data set: method, parameter, hit rate, M, and time (ms).
Figure 11: $J_e$ and $M$ as a function of the $\sigma^2_{\max}$ constraint parameter for the Iris data set.
Table 5: Statistical results of applying the algorithms (MVC, K-means, GMM) to the Iris data set: method, parameter, hit rate, M, and time (ms).
These plateaus, with strengths $1.39/0.76 \approx 1.8$ and $4.53/1.40 \approx 3.2$, correspond to three and two clusters, respectively. Hence, only the latter is significant. All three algorithms found similar results for the same number of clusters. Since it is known that the three classes cannot be separated based on the given features, it is not surprising that the clustering with three clusters does not correspond to the given labels. However, in the clustering with two clusters (corresponding to the significant plateau), one cluster almost perfectly matches the samples of class I and the other cluster matches the samples of class II+III of the Iris class labels. The statistics in Table 5 show similar differences between the MVC, K-means, and GMM algorithm as in the other experiments.
4 Conclusions
We presented a maximum variance cluster algorithm (MVC) for partitional clustering. In contrast
to many other algorithms, the MVC algorithm uses a maximum variance constraint instead of the
number of clusters as parameter. In the experiments, we showed that the method is effective in finding
a proper clustering and we compared its results to those of the widely used K-means algorithm and the
Gaussian mixtures modeling (GMM) method with likelihood maximisation using the EM algorithm.
In contrast to the proposed MVC method both the K-means and the GMM method need the number
of clusters to be known a priori.
We showed that the MVC method copes better with outliers than the K-means algorithm. The
GMM method is in principle able to separate the outliers, but has problems with the optimization
process leading to convergence into local criterion optima. The MVC algorithm is more robust in
finding the optimum of its criterion than both the K-means and GMM algorithm. We must note that
other and better optimization schemes for both the K-means model (e.g. [5], [21]) and the Gaussian
mixtures modeling (e.g. [17], [27]) have been developed. However, the improved optimization of
these algorithms is achieved at the cost of (considerable) additional computation time or algorithm
complexity.
The MVC algorithm is up to 100 times slower than the very efficient K-means algorithm, especially
for small data sets and a low number of clusters. This is partially caused by the fact that we did
not adjust the maximum number of epochs parameter E max to the size of the data set. For larger data
sets with a higher number of clusters the differences in computation time between both algorithms
almost disappear. An advantage of the MVC algorithm with respect to computational efficiency is
that it can be implemented on parallel and distributed computer architectures relatively easily. Ac-
cordingly, for large data sets the MVC algorithm may be advantageous also for efficiency reasons. In
such a distributed computing environment, clusters can be maintained by separate processes. Then,
only clusters that are neighbors communicate with each other. The main point of consideration will
be how to balance the cluster processes on the available computers when clusters merge and when
samples are isolated into new clusters.
An interesting property of the proposed method is that it enables the assessment of cluster ten-
dencies. Generally, the curve resulting from varying the maximum variance constraint parameter as
a function of the square-error displays some prominent plateaus that reveal the structure of the data.
We indicated a way to find significant structure in the data by rating the strength of the plateaus. Ac-
cordingly, we were able to find proper settings of the maximum variance constraint parameter, which
is the only model parameter.
A drawback of the MVC algorithm may be that it uses a distance rank list for every sample. The total size of these rank lists grows proportionally to the square of the number of samples, so the amount of storage needed can become substantial. The main problem, however, lies in the computation of
these rank lists. Since these lists are sorted, their construction costs O(N log(N )) operations. In
order to prevent the rank list from becoming a bottleneck for the application of the MVC algorithm,
a maximum distance constraint $d_{\max}$ can be imposed in addition to the maximum cluster variance constraint. Then, only those samples need to be ranked that are within the $d_{\max}$ range of the reference sample.
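As a sketch of this storage/complexity point, the snippet below builds the per-sample rank lists (as in the earlier border sketch) and shows how a maximum-distance cutoff $d_{\max}$ truncates them. The cutoff rule, names, and data are our own illustration.

```python
import numpy as np

def rank_lists(X, d_max=None):
    """Per-sample rank lists; with d_max set, only samples within range are ranked."""
    d = np.linalg.norm(X[:, None, :] - X[None, :, :], axis=2)
    np.fill_diagonal(d, np.inf)
    lists = []
    for i in range(len(X)):
        idx = np.argsort(d[i])[:-1]                 # all other samples, nearest first
        if d_max is not None:
            idx = idx[d[i, idx] <= d_max]           # truncate to the d_max range
        lists.append(idx)
    return lists

X = np.random.default_rng(2).normal(size=(500, 2))
full = rank_lists(X)
short = rank_lists(X, d_max=0.5)
print("average rank-list length: full =", np.mean([len(r) for r in full]),
      " truncated =", round(float(np.mean([len(r) for r in short])), 1))
```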
--R
Seeded region growing.
Unsupervised image segmentation using a distributed genetic algorithm.
Unsupervised segmentation of markov random field modeled textured images using selectionist relaxation.
Competitive environments evolve better solutions for complex tasks.
A stochastic connectionist approach for global optimization with application to pattern clustering.
Some new indexes of cluster validity.
A practical application of simulated annealing to clustering.
A cluster separation measure.
Maximum likelihood from incomplete data via the EM algorithm.
Well separated clusters and optimal fuzzy partitions.
Agglomerative clustering using the concept of mutual nearest neighborhood.
Adaptation in Natural and Artificial Systems.
Picture segmentation by a tree traversal algorithm.
Comparing partitions.
A comparison between simulated annealing and the EM algorithms in normal mixtures decompositions.
Algorithms for Clustering Data.
Clustering using a similarity measure based on shared near neighbors.
Experiments in projection and clustering by simulated annealing.
Genetic K-means algorithm
Evolutionary fuzzy clustering.
Some methods for classification and analysis of multivariate observations.
Finding salient regions in images: Nonparametric clustering for image segmentation and grouping.
A simulated annealing algorithm for the clustering problem.
Pattern Recognition.
Statistical Analysis of Finite Mixture Distributions.
A cellular coevolutionary algorithm for image segmentation.
Genetic algorithm for fuzzy clustering.
--TR
--CTR
Venkatesh Rajagopalan , Asok Ray, Symbolic time series analysis via wavelet-based partitioning, Signal Processing, v.86 n.11, p.3309-3320, November 2006
Venkatesh Rajagopalan , Asok Ray , Rohan Samsi , Jeffrey Mayer, Pattern identification in dynamical systems via symbolic time series analysis, Pattern Recognition, v.40 n.11, p.2897-2907, November, 2007
Kuo-Liang Chung , Jhin-Sian Lin, Faster and more robust point symmetry-based K-means algorithm, Pattern Recognition, v.40 n.2, p.410-422, February, 2007
Jos J. Amador, Sequential clustering by statistical methodology, Pattern Recognition Letters, v.26 n.14, p.2152-2163, 15 October 2005
J. Veenman , Marcel J. T. Reinders, The Nearest Subclass Classifier: A Compromise between the Nearest Mean and Nearest Neighbor Classifier, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.9, p.1417-1429, September 2005 | partitional clustering;cluster validity;cluster analysis;cluster tendency assessment |
628848 | Direct Recovery of Planar-Parallax from Multiple Frames. | AbstractIn this paper, we present an algorithm that estimates dense planar-parallax motion from multiple uncalibrated views of a 3D scene. This generalizes the "plane+parallax" recovery methods to more than two frames. The parallax motion of pixels across multiple frames (relative to a planar surface) is related to the 3D scene structure and the camera epipoles. The parallax field, the epipoles, and the 3D scene structure are estimated directly from image brightness variations across multiple frames, without precomputing correspondences. | Introduction
The recovery of the 3D structure of a scene and the camera epipolar-geometries
(or camera motion) from multiple views has been a topic of considerable research.
The large majority of the work on structure-from-motion (SFM) has assumed
that correspondences between image features (typically a sparse set of image
points) is given, and focused on the problem of recovering SFM based on this
input. Another class of methods has focused on recovering dense 3D structure
from a set of dense correspondences or an optical flow field. While these have
the advantage of recovering dense 3D structure, they require that the correspondences
are known. However, correspondence (or flow) estimation is a notoriously
difficult problem.
A small set of techniques have attempted to combine the correspondence
estimation step together with SFM recovery. These methods obtain dense correspondences
while simultaneously estimating the 3D structure and the camera
geometries (or motion) [3, 11, 13, 16, 15]. By inter-weaving the two processes, the
local correspondence estimation process is constrained by the current estimate of
(global) epipolar geometry (or camera motion), and vice-versa. These techniques
minimize the violation of the brightness gradient constraint with respect to the
unknown structure and motion parameters. Typically this leads to a significant
improvement in the estimated correspondences (and the attendant 3D structure)
and some improvement in the recovered camera geometries (or motion). These
methods are sometimes referred to as "direct methods" [3], since they directly
use image brightness information to recover 3D structure and motion, without
explicitly computing correspondences as an intermediate step.
While [3, 16, 15] recover 3D information relative to a camera-centered coordinate
system, an alternative approach has been proposed for recovering 3D structure
in a scene-centered coordinate system. In particular, the "Plane+Parallax"
approach [14, 11, 13, 7, 9, 8] analyzes the parallax displacements of points
relative to a (real or virtual) physical planar surface in the scene (the "refer-
ence plane"). The underlying concept is that after the alignment of the reference
plane, the residual image motion is due only to the translational motion of the
camera and to the deviations of the scene structure from the planar surface. All
effects of camera rotation or changes in camera calibration are eliminated by
the plane stabilization. Hence, the residual image motion (the planar-parallax
displacements) form a radial flow field centered at the epipole.
The "Plane+Parallax" representation has several benefits over the traditional
camera-centered representation, which make it an attractive framework for correspondence
estimation and for 3D shape recovery:
1. Reduced search space: By parametrically aligning a visible image structure
(which usually corresponds to a planar surface in the scene), the search
space of unknowns is significantly reduced. Globally, all effects of unknown
rotation and calibration parameters are folded into the homographies used
for patch alignment. The only remaining unknown global camera parameters
which need to be estimated are the epipoles (i.e., 3 global unknowns
per frame; gauge ambiguity is reduced to a single global scale factor for all
epipoles across all frames). Locally, because after plane alignment the unknown
displacements are constrained to lie along radial lines emerging from
the epipoles, local correspondence estimation reduces from a 2-D search problem
into a simpler 1-D search problem at each pixel. The 1-D search problem
has the additional benefit that it can uniquely resolve correspondences, even
for pixels which suffer from the aperture problem (i.e., pixels which lie on
line structures).
2. Provides shape relative to a plane in the scene: In many applications,
distances from the camera are not as useful information as fluctuations with
respect to a plane in the scene. For example, in robot navigation, heights
of scene points from the ground plane can be immediately translated into
obstacles or holes, and can be used for obstacle avoidance, as opposed to
distances from the camera.
3. A compact representation: By removing the mutual global component (the
plane homography), the residual parallax displacements are usually very
small, and hence require significantly fewer bits to encode the shape fluctuations
relative to the number of bits required to encode distances from
the camera. This is therefore a compact representation, which also supports
progressive encoding and a high resolution display of the data.
4. A stratified 2D-3D representation: Work on motion analysis can be roughly
classified into two classes of techniques: 2D algorithms which handle cases
with no 3D parallax (e.g., estimating homographies, 2D affine transforma-
tions, etc), and 3D algorithms which handle cases with dense 3D parallax
(e.g., estimating fundamental matrices, trifocal tensors, 3D shape, etc). Prior
model selection [17] is usually required to decide which set of algorithms to
apply, depending on the underlying scenario. The Plane+Parallax representation
provides a unified approach to 2D and 3D scene analysis, with
a strategy to gracefully bridge the gap between those two extremes [10].
Within the Plane+Parallax framework, the analysis always starts with 2D
estimation (i.e., the homography estimation). When that is all the information
available in the image sequence, that is where the analysis stops. The
3D analysis then gradually builds on top of the 2D analysis, with the gradual
increase in 3D information (in the form of planar-parallax displacements and
shape-fluctuations w.r.t. the planar surface).
[11, 13] used the Plane+Parallax framework to recover dense structure relative
to the reference plane from two uncalibrated views. While their algorithm
linearly solves for the structure directly from brightness measurements in two
frames, it does not naturally extend to multiple frames. In this paper we show
how dense planar-parallax displacements and relative structure can be recovered
directly from brightness measurements in multiple frames. Furthermore, we
show that many of the ambiguities existing in the two-frame case of [11, 13] are
resolved by extending the analysis to multiple frames. Our algorithm assumes as
input a sequence of images in which a planar surface has been previously aligned
with respect to a reference image (e.g., via one of the 2D parametric estimation
techniques, such as [1, 6]). We do not assume that the camera calibration information
is known. The output of the algorithm is: (i) the epipoles for all the
images with respect to the reference image, (ii) dense 3D structure of the scene
relative to a planar surface, and (iii) the correspondences of all the pixels across
all the frames, which must be consistent with (i) and (ii). The estimation process
uses the exact equations (as opposed to instantaneous equations, such as in [4,
15]) relating the residual parallax motion of pixels across multiple frames to the
relative 3D structure and the camera epipoles. The 3D scene structure and the
camera epipoles are computed directly from image measurements by minimizing
the variation of image brightness across the views without pre-computing a
correspondence map.
The current implementation of our technique relies on the prior alignment of
the video frames with respect to a planar surface (similar to other plane+parallax
methods). This requires that a real physical plane exists in the scene and is visible
in all the video frames. However, this approach can be extended to arbitrary
scenes by folding in the plane homography computation also into the simultaneous
estimation of camera motion, scene structure, and image displacements (as
was done by [11] for the case of two frames).
The remainder of the paper describes the algorithm and shows its performance
on real and synthetic data. Section 2 shows how the 3D structure relates
to the 2D image displacement under the plane+parallax decomposition. Section
3 outlines the major steps of our algorithm. The benefits of applying the
algorithm to multiple frames (as opposed to two frames) are discussed in Section
4. Section 5 shows some results of applying the algorithm to real data.
Section 6 concludes the paper.
2 The Plane+Parallax Decomposition
The induced 2D image motion of a 3D scene point between two images can be decomposed into two components [9, 7, 10, 11, 13, 14, 8, 2]: (i) the image motion of a reference planar surface $\Pi$ (i.e., a homography), and (ii) the residual image motion, known as "planar parallax". This decomposition is described below.
To set the stage for the algorithm described in this paper, we begin with the derivation of the plane+parallax motion equations shown in [10]. Let $p = (x, y, 1)^T$ denote the image location (in homogeneous coordinates) of a point in one view (called the "reference view"), and let $p' = (x', y', 1)^T$ denote its coordinates in another view. Let $B$ denote the homography of the plane $\Pi$ between the two views. Let $B^{-1}$ denote its inverse homography, and $B^{-1}_3$ be the third row of $B^{-1}$. Let $p_w \cong B^{-1} p'$, namely, when the second image is warped towards the first image using the inverse homography $B^{-1}$, the point $p'$ will move to the point $p_w$ in the warped image. For 3D points on the plane $\Pi$, $p_w = p$. For 3D points which are not on the plane, $p_w \ne p$. It was shown in [10] that
$$p_w - p = \gamma\,(t - t_3\, p_w), \qquad (1)$$
where $\gamma = H/Z$ represents the 3D structure of the point $p$: $H$ is the perpendicular distance (or "height") of the point from the reference plane $\Pi$, and $Z$ is its depth with respect to the reference camera. All unknown calibration parameters are folded into the terms in the parentheses, where $t$ denotes the epipole in projective coordinates and $t_3$ denotes its third component.¹
In its current form, the above expression cannot be directly used for estimating the unknown correspondence $p_w$ for a given pixel $p$ in the reference image, since $p_w$ appears on both sides of the expression. However, $p_w$ can be eliminated from the right hand side of the expression, to obtain the following expression:
$$p_w - p = \frac{\gamma}{1 + \gamma\, t_3}\,(t - t_3\, p). \qquad (2)$$
This last expression will be used in our direct estimation algorithm.
¹ The notation we use here is slightly different than the one used in [10]. The change to projective notation is used to unify the two separate expressions provided in [10], one for the case of a finite epipole, and the other for the case of an infinite epipole.
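A small numeric sketch of the decomposition: given a scene point's structure parameter $\gamma$ and an epipole $t$ in the reference view, Eq. (2) predicts the residual displacement of $p$ after plane alignment. The pixel, epipole, and $\gamma$ values below are arbitrary examples chosen only to exercise the formulas, using the sign convention of Eqs. (1)-(2) as written above.

```python
import numpy as np

def residual_parallax(p, gamma, t):
    """Planar-parallax displacement of Eq. (2): p_w - p = gamma/(1+gamma*t3) * (t - t3*p)."""
    t3 = t[2]
    return (gamma / (1.0 + gamma * t3)) * (t - t3 * p)

# example values (illustrative only)
p = np.array([120.0, 80.0, 1.0])          # pixel in the reference view (homogeneous)
t = np.array([300.0, 40.0, 1.0])          # epipole in projective coordinates
gamma = 0.02                              # H/Z structure parameter

u = residual_parallax(p, gamma, t)
p_w = p + u
print("residual parallax (du, dv):", u[:2])   # third component is 0, as expected
# consistency with the p_w form of Eq. (1): p_w - p == gamma * (t - t3 * p_w)
print("Eq.(1) check:", np.allclose(p_w - p, gamma * (t - t[2] * p_w)))
```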
3 Multi-Frame Parallax Estimation
Let $\{\Phi_j\}_{j=0}^{l}$ be $l+1$ images of a rigid scene, taken using cameras with unknown calibration parameters. Without loss of generality, we choose $\Phi_0$ as a reference frame. (In practice, this is usually the middle frame of the sequence.) Let $\Pi$ be a plane in the scene that is visible in all the images (the "reference plane"). Using a technique similar to [1, 6], we estimate the image motion (homography) of $\Pi$ between the reference frame $\Phi_0$ and each of the other frames $\Phi_j$ ($j = 1, \ldots, l$). Warping the images by those homographies $\{B_j\}_{j=1}^{l}$ yields a new sequence of $l$ images, $\{I_j\}_{j=1}^{l}$, where the image of $\Pi$ is aligned across all frames. Also, for the sake of notational simplicity, let us rename the reference image to be $I$, i.e., $I \equiv \Phi_0$. The only residual image motion between the reference frame $I$ and the warped images $\{I_j\}_{j=1}^{l}$ is the residual planar-parallax displacement $p_w^j - p$, due to 3D scene points that are not located on the reference plane $\Pi$.
This residual planar parallax motion is what remains to be estimated.
Let $u^j$ denote the first two coordinates of $p_w^j - p$ (the third coordinate is 0). From Eq. (2) we know that the residual parallax is:
$$u^j = \frac{\gamma}{1 + \gamma\, t_3^j}\,\big(t^j - t_3^j\, p\big)_{(1,2)}, \qquad (3)$$
where the superscripts $j$ denote the parameters associated with the $j$th frame, and $(\cdot)_{(1,2)}$ denotes the first two components.
In the two-frame case, one can define $\alpha = \gamma/(1 + \gamma t_3)$, and then the problem posed in Eq. (3) becomes a bilinear problem in $\alpha$ and in $t$, and can be solved using a standard iterative method. Once $\alpha$ and $t$ are known, $\gamma$ can be recovered. A similar approach was used in [11] for shape recovery from two frames. However, this approach does not extend to multiple ($> 2$) frames, because $\alpha$ is not a shape invariant (as it depends on $t_3$), and hence varies from frame to frame. In contrast, $\gamma$ is a shape invariant, which is shared by all image frames. Our multi-frame process directly recovers $\gamma$ from multi-frame brightness quantities.
The basic idea behind our direct estimation algorithm is that rather than
estimating l separate u^j vectors (corresponding to each frame) for each pixel,
we can simply estimate a single γ (the shape parameter), which for a particular
pixel is common over all the frames, and a single epipole t^j, which for each
frame I^j is common to all image pixels. There are two advantages in doing this:
1. For n pixels over l frames we reduce the number of unknowns from 2nl to n + 3l.
2. More importantly, the recovered flow vector is constrained to satisfy the
epipolar structure implicitly captured in Eq. (2). This can be expected to
significantly improve the quality of the recovered parallax flow vectors.
Our direct estimation algorithm follows the same computational framework
outlined in [1] for the quasi-parametric class of models. The basic components of
this framework are: (i) pyramid construction, (ii) iterative estimation of global
(motion) and local (structure) parameters, and (iii) coarse-to-fine refinement.
The overall control loop of our algorithm is therefore as follows (a code sketch is given after the list):
1. Construct pyramids from each of the images I^j and the reference frame I.
2. Initialize the structure parameter γ for each pixel, and the motion parameter t^j
for each frame (usually we start with γ = 0 for all pixels and t^j = 0
for all frames).
3. Starting with the coarsest pyramid level, at each level, refine the structure
and motion using the method outlined in Section 3.1.
4. Repeat the refinement step several times (usually about 4 or 5 times per level).
5. Project the final value of the structure parameter to the next finer pyramid
level. Propagate the motion parameters also to the next level. Use these as
initial estimates for processing the next level.
6. The final output is the structure and the motion parameters at the finest
pyramid level (which corresponds to the resolution of the input images) and
the residual parallax flow field synthesized from these.
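The control loop above can be summarized in the following runnable Python sketch (our illustration, not code from the paper). The pyramid here is a crude subsampling stand-in for the Gaussian/Laplacian pyramids of [1], and the two refinement routines, which implement the local and global phases of Section 3.1, are supplied by the caller; all function and variable names are ours.

    import numpy as np

    def build_pyramid(img, n_levels):
        """Simple image pyramid by 2x subsampling (stand-in for a Gaussian pyramid)."""
        pyr = [img]
        for _ in range(n_levels - 1):
            pyr.append(pyr[-1][::2, ::2])
        return pyr

    def estimate_parallax(I_ref, warped, refine_local, refine_global,
                          n_levels=4, iters_per_level=5):
        """Coarse-to-fine control loop: per-pixel structure gamma, per-frame epipoles t."""
        pyr_ref = build_pyramid(I_ref, n_levels)
        pyrs = [build_pyramid(I, n_levels) for I in warped]

        gamma = np.zeros_like(pyr_ref[-1])          # structure, initialized at coarsest level
        t = [np.zeros(3) for _ in warped]           # one epipole (3 parameters) per frame

        for level in range(n_levels - 1, -1, -1):   # coarsest -> finest
            ref, frames = pyr_ref[level], [p[level] for p in pyrs]
            for _ in range(iters_per_level):
                gamma = refine_local(ref, frames, gamma, t)    # local phase (Section 3.1)
                t = refine_global(ref, frames, gamma, t)       # global phase (Section 3.1)
            if level > 0:                           # project structure to the next finer level
                fine_shape = pyr_ref[level - 1].shape
                gamma = np.kron(gamma, np.ones((2, 2)))[:fine_shape[0], :fine_shape[1]]
        return gamma, t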
Of the various steps outlined above, the pyramid construction and the projection
of parameters are common to many techniques for motion estimation (e.g.,
see [1]), hence we omit the description of these steps. On the other hand, the
refinement step is specific to our current problem. This is described next.
3.1 The Estimation Process
The inner loop of the estimation process involves refining the current values of
the structure parameters γ (one per pixel) and the motion parameters t^j (3
parameters per frame). Let us denote the "true" (but unknown) values of these
parameters by γ(x, y) (at location (x, y) in the reference frame) and t^j, and let
u^j(x, y) denote the corresponding unknown true parallax flow vector.
Let γ_c, t^j_c and u^j_c denote the current estimates of these quantities, and let
δγ = γ − γ_c, δt^j = t^j − t^j_c, and δu^j = u^j − u^j_c. These δ quantities
are the refinements that are estimated during each iteration.

Assuming brightness constancy (namely, that corresponding image points
across all frames have a similar brightness value)², we have

    I^j(p + u^j) = I(p).    (4)

For small δu^j we make a further approximation, replacing u^j by u^j_c + δu^j and
expanding to a first order Taylor series around p + u^j_c. This gives the brightness
constraint equation

    I^j(p + u^j_c) − I(p) + I_x δu^j_x + I_y δu^j_y ≈ 0,

where I_x, I_y denote the image intensity derivatives for the reference image (at
pixel location (x, y)). Substituting δu^j = u^j − u^j_c and collecting the measurable
terms that involve the current estimate u^j_c into a single quantity Ī^j_t, this can
be written more compactly as

    Ī^j_t + I_x u^j_x + I_y u^j_y ≈ 0,

where Ī^j_t ≜ I^j(p + u^j_c) − I(p) − I_x u^j_{x,c} − I_y u^j_{y,c}.
If we now substitute the expression for the local parallax flow vector u^j given
in Eq. (3), we obtain the following equation that relates the structure and motion
parameters directly to image brightness information:

    Ī^j_t − (γ(x, y) / (1 + γ(x, y) t^j_3)) ( I_x (t^j_3 x − t^j_1) + I_y (t^j_3 y − t^j_2) ) ≈ 0.    (5)
We refer to the above equation as the "epipolar brightness constraint".
2 Note that over multiple frames the brightness will change somewhat, at least due to
global illumination variation. We can handle this by using the Laplacian pyramid
(as opposed to the Gaussian pyramid), or otherwise pre-filtering the images (e.g.,
normalize to remove global mean and contrast changes), and applying the brightness
constraint to the filtered images.
Each pixel and each frame contributes one such equation, where the unknowns
are the relative scene structure γ(x, y) for each pixel (x, y), and
the epipoles t^j for each frame (j = 1, ..., l). Those unknowns are computed in
two phases. In the first phase, the "Local Phase", the relative scene structure, γ,
is estimated separately for each pixel via least squares minimization over multiple
frames simultaneously. This is followed by the "Global Phase", where all
the epipoles t^j are estimated between the reference frame and each of the other
frames, using least squares minimization over all pixels. These two phases are
described in more detail below.
Local Phase In the local phase we assume all the epipoles are given (e.g.,
from the previous iteration), and we estimate the unknown scene structure γ
from all the images. γ is a local quantity, but is common to all the images
at a point. When the epipoles are known, each frame I^j provides one constraint
of Eq. (5) on γ. Therefore, theoretically, there is sufficient geometric information
for solving for γ. However, for increased numerical stability, we locally assume
each γ is constant over a small window around each pixel in the reference frame.
In our experiments we used a 5 × 5 window. For each pixel (x, y), we use the
error function

    Err(γ) = Σ_j Σ_{(x̃, ỹ) ∈ Win(x, y)} ( (1 + γ t^j_3) Ī^j_t − γ ( Ĩ_x (t^j_3 x̃ − t^j_1) + Ĩ_y (t^j_3 ỹ − t^j_2) ) )²,    (6)

where the tilde denotes quantities evaluated at the window pixel (x̃, ỹ), and Win(x, y) is a
5×5 window around (x, y). Differentiating Err(γ) with respect to γ and equating
it to zero yields a single linear equation that can be solved to estimate γ(x, y).
The error term was obtained by multiplying Eq. (5) by the denominator
(1 + γ t^j_3) to yield a linear expression in γ. Note that without multiplying by the
denominator, the local estimation process (after differentiation) would require
solving a polynomial equation in γ whose order increases with l (the number of
frames). Minimizing Err(γ) is in practice equivalent to applying weighted least
squares minimization on the collection of original Eqs. (5), with weights equal
to the denominators. We could apply normalization weights 1/(1 + γ_c t^j_3) (where γ_c
is the estimate of the shape at pixel (x, y) from the previous iteration) to the
linearized expression, in order to assure minimization of meaningful quantities
(as is done in [18]), but in practice, for the examples we used, we found it was
not necessary to do so during the local phase. However, such a normalization
weight was important during the global phase (see below).
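A minimal sketch (ours) of the resulting closed-form local update follows; it assumes the linearized error of Eq. (6) as reconstructed above, with the temporal differences, derivatives and window coordinates gathered by the caller.

    import numpy as np

    def local_gamma_update(It, Ix, Iy, xs, ys, t):
        """Closed-form update of the structure parameter gamma for one pixel.

        It, Ix, Iy : arrays of shape (l, w) -- temporal differences and spatial derivatives
                     collected over l frames and the w pixels of the local window.
        xs, ys     : pixel coordinates of the window samples, shape (w,)
        t          : sequence of l epipoles, each (t1, t2, t3)
        Minimizes sum_j sum_win [ It*(1 + gamma*t3) - gamma*(Ix*(t3*x - t1) + Iy*(t3*y - t2)) ]^2,
        which is quadratic in gamma (Eq. (5) multiplied by its denominator).
        """
        num, den = 0.0, 0.0
        for j, (t1, t2, t3) in enumerate(t):
            g = Ix[j] * (t3 * xs - t1) + Iy[j] * (t3 * ys - t2)   # epipolar term
            a = It[j] * t3 - g                                    # coefficient of gamma
            num += np.sum(It[j] * a)
            den += np.sum(a * a)
        return -num / den if den > 0 else 0.0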
Global Phase In the global phase we assume the structure γ is given (e.g.,
from the previous iteration), and we estimate for each image I^j the position of its
epipole t^j with respect to the reference frame. We estimate the set of epipoles
{t^j}_{j=1}^{l} by minimizing the following error with respect to each of the epipoles:

    Err(t^j) = Σ_{(x, y)} W^j(x, y) ( (1 + γ t^j_3) Ī^j_t − γ ( I_x (t^j_3 x − t^j_1) + I_y (t^j_3 y − t^j_2) ) )²,    (7)

where Ī^j_t, I_x and I_y are as defined above. Since the structure values
γ(x, y) are fixed, this minimization problem decouples into a set of separate individual
minimization problems, each a function of one epipole t^j for the jth
frame. The inside portion of this error term is similar to the one we used above
for the local phase, with the addition of a scalar weight W^j(x, y). The scalar
weight is used to serve two purposes. First, if Eq. (7) did not contain the weights
W^j(x, y), it would be equivalent to a weighted least squares minimization of
Eq. (5), with weights equal to the denominators (1 + γ t^j_3). While this provides
a convenient linear expression in the unknown t^j, these weights are not
physically meaningful, and tend to skew the estimate of the recovered epipole.
Therefore, in a fashion similar to [18], we choose the weights W^j(x, y) to be
1/(1 + γ t^j_{3,c})², where the γ is the updated estimate from the local phase,
whereas the t^j_{3,c} is based on the current estimate of t^j (from the previous iteration).

The scalar weight also provides us an easy way to introduce additional robustness
to the estimation process in order to reduce the contribution of pixels
that are potentially outliers. For example, we can use weights based on residual
misalignment of the kind used in [6].
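A corresponding sketch (ours) of the global update follows; after multiplying by the denominator the residual is linear in (t^j_1, t^j_2, t^j_3), so each epipole is obtained from a 3×3 weighted normal system.

    import numpy as np

    def global_epipole_update(It, Ix, Iy, xs, ys, gamma, weights):
        """Weighted least-squares update of one epipole t^j = (t1, t2, t3) over all pixels.

        It, Ix, Iy, xs, ys, gamma, weights : flattened arrays, one entry per pixel.
        The residual per pixel is the linearized constraint
            It + t1*(gamma*Ix) + t2*(gamma*Iy) + t3*gamma*(It - Ix*x - Iy*y),
        i.e. Eq. (5) multiplied by its denominator; it is linear in (t1, t2, t3).
        """
        c1 = gamma * Ix
        c2 = gamma * Iy
        c3 = gamma * (It - Ix * xs - Iy * ys)
        C = np.stack([c1, c2, c3], axis=1)          # (num_pixels, 3) design matrix
        A = C.T @ (weights[:, None] * C)            # 3x3 normal equations
        b = -C.T @ (weights * It)
        return np.linalg.solve(A, b)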
4 Multi-Frame vs. Two-Frame Estimation
The algorithm described in Section 3 extends the plane+parallax estimation
to multiple frames. The most obvious benefit of multi-frame processing is the
improved signal-to-noise performance that is obtained due to having a larger
set of independent samples. However, there are two additional benefits to multi-frame
estimation: (i) overcoming the aperture problem, from which the two-frame
estimation often suffers, and (ii) resolving the singularity of shape recovery in
the vicinity of the epipole (we refer to this as the epipole singularity).
4.1 Eliminating the Aperture Problem
When only two images are used as in [11, 13], there exists only one epipole. The
residual parallax lies along epipolar lines (centered at the epipole, see Eq. (3)).
The epipolar field provides one line constraint on each parallax displacement,
and the Brightness Constancy constraint forms another line constraint (Eq. (4)).
When those lines are not parallel, their intersection uniquely defines the parallax
displacement. However, if the image gradient at an image point is parallel to the
epipolar line passing through that point, then its parallax displacement (and
hence its structure) cannot be uniquely determined. However, when multiple
images with multiple epipoles are used, then this ambiguity is resolved, because
the image gradient at a point can be parallel to at most one of the epipolar lines
associated with it. This observation was also made by [4, 15].
To demonstrate this, we used a sequence composed of 9 images (105 × 105
pixels) of 4 squares (30 × 30 pixels) moving over a stationary textured background
(which plays the role of the aligned reference plane). The 4 squares have the same
motion: first they were all shifted to the right (one pixel per frame) to generate
the first 5 images, and then they were all shifted down (one pixel per frame) to
generate the next 4 images. The width of the stripes on the squares is 5 pixels.
A sample frame is shown in Fig. 1.a (the fifth frame).
The epipoles that correspond to this motion are at infinity: the horizontal
motion has an epipole at (∞, 52.5), and the vertical motion has an epipole at
(52.5, ∞). The texture on the squares was selected so that the spatial gradients of
one square are parallel to the direction of the horizontal motion, another square
has spatial gradients parallel to the direction of the vertical motion, and the two
other squares have spatial gradients in multiple directions. We have tested the
algorithm on three cases: (i) pure vertical motion, (ii) pure horizontal motion,
and (iii) mixed motions.
Fig. 1.b is a typical depth map that results from applying the algorithm to
sequences with purely vertical motion. (Dark grey corresponds to the reference
plane, and light grey corresponds to elevated scene parts, i.e., the squares). The
structure for the square with vertical bars is not estimated well as expected,
because the epipolar constraints are parallel to those bars. This is true even
when the algorithm is applied to multiple frames with the same epipole.
Fig. 1.c is a typical depth map that results from applying the algorithm to
sequences with purely horizontal motion. Note that the structure for the square
with horizontal bars is not estimated well.
Fig. 1.d is a typical depth map that results from applying the algorithm to
multiple images with mixed motions (i.e., more than one distinct epipole). Note
that now the shape recovery does not suffer from the aperture problem.
4.2 Epipole Singularity
From the planar parallax Eq. (3), it is clear that the structure γ cannot be
determined at the epipole, because at the epipole t^j_3 p − t^j = 0, and hence
the parallax displacement vanishes regardless of γ. For the same reason, the recovered
structure at the vicinity of the epipole is highly sensitive to noise and unreliable.
However, when there are multiple
Fig. 1. Resolving aperture problem: (a) A sample image, (b) Shape recovery for pure
vertical motion. Ambiguity along vertical bars, (c) Shape recovery for pure horizontal
motion. Ambiguity along horizontal bars, (d) Shape recovery for a sequence with mixed
motions. No ambiguity.
epipoles, this ambiguity disappears. The singularity at one epipole is resolved
by information from another epipole.
To test this behavior, we compared the results for the case with only one
epipole (i.e., two-frames) to cases with multiple epipoles at different locations.
Results are shown in Fig. 2. The sequence that we used was composed of images
of a square that is elevated from a reference plane and the simulated motion
(after plane alignment) was a looming motion (i.e., forward motion). Fig. 2.a,b,c
show three sample images from the sequence. Fig. 2.d shows singularity around
the epipole in the two-frame case. Figs. 2.e,h,i,j show that the singularity at
the epipoles is eliminated when there is more than one epipole. Using more
images also increases the signal to noise ratio and further improves the shape
reconstruction.
5 Real World Examples
This section provides experimental results of applying our algorithm to real world
sequences. Fig. 3 shows an example of shape recovery from an indoor sequence
(the "block" sequence from [11]). The reference plane is the carpet. Fig. 3.a
shows one frame from the sequence. Fig. 3.b shows the recovered structure.
Brighter grey levels correspond to taller points relative to the carpet. Note the
fine structure of the toys on the carpet.
Fig. 4 shows an example of shape recovery for a sequence of five frames (part
of the flower garden sequence). The reference plane is the house. Fig. 4.a shows
the reference frame from the sequence. Fig. 4.b shows the recovered structure.
Note the gradual change of depth in the field.
Fig. 5 shows an example of shape recovery for a sequence of 5 frames. The
reference plane is the flat region in front of the building. Fig. 5.a shows one
Fig. 2. Resolving epipole singularity in case of multiple epipoles. (a-c) sample images
from a 9-frame sequence with multiple epipoles, (d,f) shape recovery using 2 images
(epipole singularity exists in this case), (e,g) using 3 images with 2 different epipoles,
(h,k) using 5 images with multiple epipoles, (i,l) using 7 images with multiple epipoles,
(j,m) using 9 images with multiple epipoles. Note that the epipole singularity disappears
once multiple epipoles exist. (f,g,k,l,m) show an enlarged view of the depth image at the
vicinity of the epipoles. The box shows the region where the epipoles are. For visibility
purposes, different images are shown at different scales. For reference, coordinate rulers
are attached to each image.
Fig. 3. Blocks sequence. (a) one frame from the sequence. (b) The recovered shape
(relative to the carpet). Brighter values correspond to taller points.
Fig. 4. Flower-garden sequence. (a) one frame from the sequence. (b) The recovered
shape (relative to the facade of the house). Brighter values correspond to points farther
from the house.
frame from the sequence. Fig. 5.b shows the recovered structure. The brightness
reflects the magnitude of the structure parameter fl (brighter values correspond
to scene points above the reference plane and darker values correspond to scene
points below the reference plane). Note the fine structure of the stairs and the
lamp-pole. The shape of the building wall is not fully recovered because of lack
of texture in that region.
Fig. 5. Stairs sequence. (a) one frame from the sequence. (b) The recovered shape
(relative to the ground surface just in front of the building). Brighter values correspond
to points above the ground surface, while darker values correspond to points below the
ground surface.
6 Conclusion
We presented an algorithm for estimating dense planar-parallax displacements
from multiple uncalibrated views. The image displacements, the 3D structure,
and the camera epipoles, are estimated directly from image brightness variations
across multiple frames. This algorithm extends the two-frames plane+parallax
estimation algorithm of [11, 13] to multiple frames. The current algorithm relies
on prior plane alignment. A natural extension of this algorithm would be
to fold the homography estimation into the simultaneous estimation of image
displacements, scene structure, and camera motion (as was done by [11] for two
frames).
--R
In European Conference on Computer Vision
Direct Multi-Resolution Estimation of Ego-Motion and Structure From Motion
Okamoto N.
In Defense of the Eight-Point Algorithm
Computing Occluding and Transparent Motions
Parallax Geometry of Pairs of Points for 3D Scene Analysis
Recovery of Ego-Motion Using Region Alignment
From Reference Frames to Reference Planes: Multi-View Parallax Geometry and Applications
A Unified Approach to Moving Object Detection in 2D and 3D Scenes
Direct Recovery of shape From Multiple Views: a Parallax Based Approach
Prazdny K.
3D Geometry From
Theory and Application to 3D Reconstruction From Perspective Views
Direct Methods for Visual Scene Reconstruction
Geometric motion segmentation and model selection
Determining the Epipolar Geometry and its Uncertainty: A Review
--TR
--CTR
Han , Takeo Kanade, Reconstruction of a Scene with Multiple Linearly Moving Objects, International Journal of Computer Vision, v.59 n.3, p.285-300, September-October 2004 | structure from motion;multiframe analysis;direct gradient-based methods;correspondence estimation;plane+parallax |
628884 | Lambertian Reflectance and Linear Subspaces. | AbstractWe prove that the set of all Lambertian reflectance functions (the mapping from surface normals to intensities) obtained with arbitrary distant light sources lies close to a 9D linear subspace. This implies that, in general, the set of images of a convex Lambertian object obtained under a wide variety of lighting conditions can be approximated accurately by a low-dimensional linear subspace, explaining prior empirical results. We also provide a simple analytic characterization of this linear space. We obtain these results by representing lighting using spherical harmonics and describing the effects of Lambertian materials as the analog of a convolution. These results allow us to construct algorithms for object recognition based on linear methods as well as algorithms that use convex optimization to enforce nonnegative lighting functions. We also show a simple way to enforce nonnegative lighting when the images of an object lie near a 4D linear space. We apply these algorithms to perform face recognition by finding the 3D model that best matches a 2D query image. | Introduction
One of the most basic problems in vision is to understand how variability in lighting affects the images
that an object can produce. Even when lights are isotropic and distant, smooth Lambertian objects can
produce infinite-dimensional sets of images (Belhumeur and Kriegman [1]). But recent experimental
work ([7, 12, 30]) has indicated that the set of images produced by an object under a wide range of
lighting conditions lies near a low dimensional linear subspace in the space of all possible images. This
can be used to construct efficient recognition algorithms that handle lighting variations. In this paper
we explain these empirical results analytically and use this understanding to produce new recognition
algorithms.
When light is isotropic and distant from an object, we can describe its intensity as a function of
direction. Light, then, is a non-negative function on the surface of a sphere. Our approach begins by
representing these functions using spherical harmonics. This is analogous to Fourier analysis, but on
the surface of the sphere. To model the way surfaces turn light into an image we look at reflectance
as a function of the surface normal (assuming unit albedo). We show that reflectance functions are
produced through the analog of a convolution of the lighting function using a kernel that represents
Lambert's reflection. This kernel acts as a low-pass filter with 99.2% of its energy in the first nine
components. We use this and the non-negativity of light to prove that under any lighting conditions,
a nine-dimensional linear subspace, for example, accounts for 98% of the variability in the reflectance
function. This suggests that in general the set of images of a convex, Lambertian object can be
approximated accurately by a low dimensional linear space. We further show how to analytically
derive this subspace from an object model.
This allows us to better understand several existing methods. For example, we show that the linear
subspace methods of Shashua [25] and Moses [20] use a linear space spanned by the three first order
harmonics, but that they omit the significant DC component. Also, it leads us to new methods of
recognizing objects with unknown pose and lighting conditions. In particular, we discuss how the
harmonic basis can be used in a linear-based object recognition algorithm, replacing bases derived by
performing SVD on large collections of rendered images. Furthermore, we show how we can enforce
non-negative light by projecting this constraint to the space spanned by the harmonic basis. With
this constraint recognition is expressed as a non-negative least-squares problem that can be solved
using convex optimization. This leads to an algorithm for recognizing objects under varying pose and
illumination that resembles Georghides et al. [9], but works in an analytically derived low-dimensional
space. The use of the harmonic basis, in this case, allows us to rapidly produce a representation to
the images of an object in poses determined at runtime. Finally, we discuss the case in which a first
order approximation provides an adequate approximation to the images of an object. The set of images
then lies near a 4D linear subspace. In this case we can express the non-negative lighting constraint
analytically. We use this expression to perform recognition in a particularly efficient way, without
iterative optimization techniques.
It has been very popular in object recognition to represent the set of images that an object can
produce using low dimensional linear subspaces of the space of all images. Ullman and Basri [28]
analytically derive such a representation for sets of 3D points undergoing scaled orthographic projection.
Shashua [25] and Moses [20] (and later also [22, 31]) derive a 3D linear representation of the set of images
produced by a Lambertian object as lighting changes, but ignoring attached shadows. Hayakawa [13]
uses factorization to build 3D models using this linear representation. Koenderink and van Doorn [18]
extend this to a 4D space by allowing the light to include a diffuse component. Researchers have
collected large sets of images and performed PCA to build representations that capture within class
variations [16, 27, 4] and variations due to pose and lighting [21, 12, 30]. Hallinan [12], Epstein et
al. [7] and Yuille et al. [30] perform experiments that show that large numbers of images of Lambertian
objects, taken with varied lighting conditions, do lie near a low-dimensional linear space, justifying
this representation. More recently, analytically derived, convex representations have been used by
Belhumeur and Kriegman [1] to model attached shadows. Georghides et al. [8, 9] use this representation
for object recognition.
Spherical harmonics have been used in graphics to efficiently represent the bidirectional reflection
distribution function (BRDF) of different materials by, e.g., Cabral [3] and Westin et al. [29]
(Koenderink and van Doorn [17] proposed replacing the spherical harmonics basis with the Zernike
polynomials, since BRDFs are defined over a half sphere.) Nimeroff et al. [23]. Dobashi et al. [5] and
Teo et al. [26] explore specific lighting configurations that can be represented efficiently as a linear
combination of basis lightings (e.g., daylight). Dobashi et al. [5] in particular use spherical harmonics
to form such a basis. D'Zmura [6] was first to point out that the process of turning incoming light
into reflection can be described in terms of spherical harmonics. With this representation, after truncating
high order components, the reflection process can be written as a linear transformation, and
so the low order components of the lighting can be recovered by inverting the transformation. He
used this analysis to explore ambiguities in lighting. We extend this work by deriving subspace results
for the reflectance function, providing analytic descriptions of the basis images, and constructing new
recognition algorithms that use this analysis while enforcing non-negative lighting. Independent of
and contemporaneous with our work, Ramamoorthi and Hanrahan [24] have described the effect of
Lambertian reflectance as a convolution. Like D'Zmura they use this analysis to explore the problem
of recovering lighting from reflectances. Also, preliminary comments on this topic can be found in
Jacobs, Belhumeur and Basri[15].
In summary, the main contribution of our paper is to show how to analytically find low dimensional
linear subspaces that accurately approximate the set of images that an object can produce. We can
then carve out portions of these subspaces corresponding to non-negative lighting conditions, and use
these descriptions for recognition.
Modeling Image Formation
Consider a convex object illuminated by distant isotropic light sources. Assume further that the
surface of the object reflects light according to Lambert's law [19]. This relatively simple model
has been analyzed and used effectively in a number of vision applications. The set of images of a
Lambertian object obtained with arbitrary light has been termed the "illumination cone" by Belhumeur
and Kriegman [1]. Our objective is to analyze properties of the illumination cone. For the analysis it will
be useful to consider the set of reflectance functions obtained under different illumination conditions.
A reflectance function (also called reflectance map, see Horn [14], Chapters 10-11) associated with a
specific lighting configuration is defined as the light reflected by a sphere of unit albedo as a function
of the surface normal. A reflectance function is related to an image of a convex object illuminated by
the same lighting configuration by the following mapping. Every visible point on the object's surface
inherits its intensity from the point on the sphere with the same normal, and this intensity is further
scaled by the albedo at the point. We will discuss the effect of this mapping later on in this section.
2.1 Image Formation as the Analog of a Convolution
Let S denote a unit sphere centered at the origin. Let p = (x, y, z) denote a point on the surface of S,
and let N denote the surface normal at p. p can also be expressed as a unit vector using
the following notation:

    p = (x, y, z) = (cos φ sin θ, sin φ sin θ, cos θ),    (1)

with 0 ≤ θ ≤ π and 0 ≤ φ < 2π. In this coordinate frame the poles are set at (0, 0, ±1), θ denotes the
solid angle between p and (0, 0, 1), and it varies with latitude, and φ varies with longitude. Since we
assume that the sphere is illuminated by a distant and isotropic set of lights, all points on the sphere see
these lights coming from the same directions, and they are illuminated by identical lighting conditions.
Consequently, the configuration of lights that illuminate the sphere can be expressed as a non-negative
function ℓ(θ, φ), expressing the intensity of the light reaching the sphere from each direction (θ, φ).
Furthermore, according to Lambert's law the difference in the light reflected by the points is entirely
due to the difference in their surface normals. Thus, we can express the light reflected by the sphere
as a function r(θ, φ) whose domain is the set of surface normals of the sphere.
According to Lambert's law, if a light ray of intensity l reaches a surface point with albedo λ,
forming an angle θ with the surface normal at the point, then the intensity reflected by the point due
to this light is given by

    l λ max(cos θ, 0).    (2)

In a reflectance function we use λ = 1. If light reaches a point from a multitude of directions then
the light reflected by the point would be the sum of (or in the continuous case the integral over) the
contribution for each direction. Denote k(θ) = max(cos θ, 0); then, for example, the intensity of the
point (0, 0, 1) is given by:

    ∫_0^{2π} ∫_0^{π} k(θ) ℓ(θ, φ) sin θ dθ dφ.    (3)

Similarly, the intensity r(θ, φ) reflected by a point p = (θ, φ) is obtained by centering k about p and
integrating its inner product with ℓ over the sphere. Thus, the operation that produces r(θ, φ) is the
analog of a convolution on the sphere. We will refer to this as a convolution, and write:

    r = k * ℓ.    (4)
The kernel of this convolution, k, is the circularly symmetric, "half-cosine" function. The convolution
is obtained by rotating k so that its center is aligned with the surface normal at p. This still leaves one
degree of freedom in the rotation of the kernel undefined, but since k is rotationally symmetric this
ambiguity disappears.
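As an illustration (ours, not part of the paper), the convolution of Eq. (4) can be evaluated numerically by sampling the lighting function over directions on the sphere:

    import numpy as np

    def reflectance(normal, light_dirs, light_vals):
        """Numerically evaluate the reflectance function r at one surface normal:
        the 'convolution' of Eq. (4), with the lighting given as samples
        (unit direction vectors light_dirs; intensities light_vals, with any
        quadrature weights already folded in)."""
        cosang = light_dirs @ normal                 # cosine between each light direction and the normal
        return np.sum(light_vals * np.maximum(cosang, 0.0))

    # Example: a single point source from direction (0, 0, 1) of unit intensity
    r_pole = reflectance(np.array([0.0, 0.0, 1.0]),
                         np.array([[0.0, 0.0, 1.0]]), np.array([1.0]))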
2.2 Properties of the Convolution Kernel
Just as the Fourier basis is convenient for examining the results of convolutions in the plane, similar
tools exist for understanding the results of the analog of convolutions on the sphere. The surface
spherical harmonics are a set of functions that form an orthonormal basis for the set of all functions
on the surface of the sphere. We denote these functions by h_nm, with n = 0, 1, 2, ... and −n ≤ m ≤ n:

    h_nm(θ, φ) = √( ((2n + 1)/(4π)) ((n − m)!/(n + m)!) ) P_nm(cos θ) e^{imφ},    (5)

where P_nm are the associated Legendre functions, defined as

    P_nm(z) = ((1 − z²)^{m/2} / (2^n n!)) d^{n+m}/dz^{n+m} (z² − 1)^n.    (6)

In the course of this paper it will sometimes be convenient to parameterize h_nm as a function of space
coordinates (x, y, z) rather than angles. The spherical harmonics, written h_nm(x, y, z), then become
polynomials of degree n in (x, y, z).
We may express the kernel, k, and the lighting function, ℓ, as harmonic series, that is, as linear combinations
of the surface harmonics. We do this primarily so that we can take advantage of the analog to
the convolution theorem for surface harmonics. An immediate consequence of the Funk-Hecke theorem
(see, e.g., [10], Theorem 3.4.1, page 98) is that "convolution" in the function domain is equivalent to
multiplication in the harmonic domain. In the rest of this section we derive a representation of k as a
harmonic series. We use this derivation to show that k is nearly a low-pass filter. Specifically, almost
all of the energy of k resides in the first few harmonics. This will allow us to show that the possible
reflectances of a sphere all lie near a low dimensional linear subspace of the space of all functions defined
on the sphere.
In Appendix A we derive a representation of k as a harmonic series. In short, since k is rotationally
symmetric about the pole, under an appropriate choice of a coordinate frame its energy concentrates exclusively
in the zonal harmonics (the harmonics with m = 0), while the coefficients of all the harmonics
with m ≠ 0 vanish. Thus, we can express k as:

    k = Σ_{n=0}^{∞} k_n h_{n0},    (7)

with

    k_n = ∫_0^{2π} ∫_0^{π} k(θ) h_{n0}(θ, φ) sin θ dθ dφ.    (8)

After some tedious manipulation (detailed in Appendix A) we obtain that

    k_0 = √π / 2,   k_1 = √(π/3),   k_n = 0 (n ≥ 2, odd),
    k_n = (−1)^{n/2 + 1} √((2n + 1)π) (n − 2)! / ( 2^n ((n/2) − 1)! ((n/2) + 1)! )   (n ≥ 2, even).    (9)
    n                  0      1      2      4      6      8
    Energy             37.5   50     11.72  0.59   0.12   0.04
    Cumulative energy  37.5   87.5   99.22  99.81  99.93  99.97
    Lower bound        37.5   75     97.96  99.48  99.80  99.90

Table 1: The top row shows the energy captured by the n'th zonal harmonic for the Lambertian kernel (0 ≤ n ≤ 8).
The middle row shows the energy accumulated up to order n. This energy represents the quality of the n'th order
approximation of r(θ, φ) (measured in relative squared error). The bottom row shows a lower bound on the quality of
this approximation due to the non-negativity of the light. The odd orders (n = 3, 5, 7) are omitted because they contribute no
energy. Relative energies are given in percents.
The first few coefficients, for example, are

    k_0 = √π/2 ≈ 0.8862,   k_1 = √(π/3) ≈ 1.0233,   k_2 = √(5π)/8 ≈ 0.4954,
    k_4 = −√π/16 ≈ −0.1108,   k_6 = √(13π)/128 ≈ 0.0499   (and k_n = 0 for odd n ≥ 3).

A graph representation of the coefficients is shown in Figure 1.
The energy captured by every harmonic term is measured commonly by the square of its respective
coefficient divided by the total squared energy of the transformed function. The total squared energy
in the half cosine function is given by

    ∫_0^{2π} ∫_0^{π} k²(θ) sin θ dθ dφ = 2π/3.

Table 1 shows the relative energy captured by each of the first several coefficients. It can be seen that
the kernel is dominated by the first three coefficients. Thus, a second order approximation already
accounts for 99.22% of the energy. With this approximation the half cosine function can be written as:

    k(θ) ≈ k_0 h_00 + k_1 h_10(θ) + k_2 h_20(θ) = 1/4 + (1/2) cos θ + (5/32)(3 cos²θ − 1).

The quality of the approximation improves somewhat with the addition of the fourth order term
(to 99.81%) and deteriorates to 87.5% when a first order approximation is used. Figure 2 shows a 1D
slice of the Lambertian kernel and its various approximations.
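The coefficients and the energy fractions of Table 1 can be reproduced numerically; the short script below (ours) integrates Eq. (8) with SciPy.

    import numpy as np
    from scipy.special import eval_legendre
    from scipy.integrate import quad

    def lambert_coeff(n):
        """k_n = integral of k(theta) * h_n0(theta) over the sphere (Eq. (8)),
        with k(theta) = max(cos theta, 0) and h_n0 = sqrt((2n+1)/(4pi)) P_n(cos theta)."""
        c = np.sqrt((2 * n + 1) / (4 * np.pi))
        integrand = lambda th: max(np.cos(th), 0.0) * c * eval_legendre(n, np.cos(th)) * np.sin(th)
        val, _ = quad(integrand, 0.0, np.pi)
        return 2 * np.pi * val          # the phi integral contributes a factor of 2*pi

    total = 2 * np.pi / 3               # squared energy of the half-cosine kernel
    for n in range(9):
        kn = lambert_coeff(n)
        print(n, kn, 100 * kn ** 2 / total)   # reproduces the energy row of Table 1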
2.3 Approximating the Reflectance Function
The fact that the Lambertian kernel has most of its energy concentrated in the low order terms implies
that the set of Lambertian reflectance functions can be well approximated by a low dimensional linear
space. This space is spanned by a small set of what we call harmonic reflectances. The harmonic
reflectance r nm ('; OE) denotes the reflectance of the sphere when it is illuminated by the harmonic "light"
hnm . Note that harmonic lights generally are not positive everywhere, so they do not correspond to real,
physical lighting conditions; they are abstractions. As is explained below every reflectance function
Figure 1: From left to right: a graph representation of the first 11 coefficients of the Lambertian kernel, the relative
energy captured by each of the coefficients, and the accumulated energy.
Figure
2: A slice of the Lambertian kernel (solid) and its approximations of first (left, dotted), second (middle), and
fourth (right) order.
r('; OE) will be approximated to an excellent accuracy by a linear combination of a small number of
harmonic reflectances.
To evaluate the quality of the approximation consider first, as an example, lighting generated by
a point source at the z direction ((θ, φ) = (0, 0)); a point source is a delta function. The reflectance
of a sphere illuminated by a point source is obtained by a convolution of the delta function with the
kernel, which results in the kernel itself. Due to the linearity of the convolution, if we approximate the
reflectance due to this point source by a linear combination of the first three zonal harmonics, r_00, r_10
and r_20, we account for 99.22% of the energy. In other words,

    min_{a_00, a_10, a_20} ‖ k − a_00 r_00 − a_10 r_10 − a_20 r_20 ‖² / ‖k‖² = 1 − 0.9922,

where k, the Lambertian kernel, is also the reflectance of the sphere when it is illuminated by a point
source at the z direction. Similarly, first and fourth order approximations yield respectively 87.5% and
99.81% accuracy.
If the sphere is illuminated by a single point source in a direction other than the z direction the
reflectance obtained would be identical to the kernel, but shifted in phase. Shifting the phase of a
function distributes its energy between the harmonics of the same order n (varying m), but the overall
energy in each n is maintained. The quality of the approximation, therefore, remains the same, but
now for an N'th order approximation we need to use all the harmonics with n - N for all m. Recall
that there are 2n + 1 harmonics in every order n. Consequently, a first order approximation requires
four harmonics. A second order approximation adds five more harmonics yielding a 9D space. The
third order harmonics are eliminated by the kernel, and so they do not need to be included. Finally, a
fourth order approximation adds nine more harmonics yielding an 18D space.
We have seen that the energy captured by the first few coefficients k i (1 - i - N) directly indicates
the accuracy of the approximation of the reflectance function when the light includes a single point
source. Other light configurations may lead to different accuracy. Better approximations are obtained
when the light includes enhanced diffuse components of low-frequency. Worse approximations are
anticipated if the light includes mainly high frequency patterns.
However, even if the light includes mostly high frequency patterns the accuracy of the approximation
is still very high. This is a consequence of the non-negativity of light. A lower bound on the accuracy
of the approximation for any light function can be derived as follows. It is simple to show that for any
non-negative function the amplitude of the DC component must be at least as high as the amplitude of
any of the other components.¹ One way to see this is by representing such a function as a non-negative
sum of delta functions. In such a sum the amplitude of the DC component is the weighted sum of
the amplitudes of all the DC components of the different delta functions. The amplitude of any other
frequency may at most reach the same level, but often will be lower due to interference. Consequently,
in an N'th order approximation the worst scenario is obtained when the amplitudes in all frequencies
higher than N saturate to the same amplitude as the DC component, while the amplitudes of orders
1 ≤ n ≤ N are set to zero. In this case the relative squared energy becomes

    k_0² / ( k_0² + Σ_{n > N} k_n² ).    (14)

¹ Note that to obtain the amplitude of the n'th component we must normalize its coefficient, multiplying it by
√((2n + 1)/(4π)). Consequently the coefficient of the DC component may be smaller than that of other components,
while the amplitude may not. The Funk-Hecke theorem applies to the amplitudes.

Table 1 shows the bound obtained for several different approximations. It can be seen that using a
second order approximation (involving nine harmonics) the accuracy of the approximation for any light
function exceeds 97.96%. With a fourth order approximation (involving 18 harmonics) the accuracy
exceeds 99.48%. Note that the bound computed in (14) is not tight, since the case that all the higher
order terms are saturated yields a function with negative values. Consequently, the worst case accuracy
may even be higher than the bound.
2.4 Generating Harmonic Reflectances
Constructing a basis to the space that approximates the reflectance functions is straightforward and
can be done analytically. To construct the basis we can simply invoke the Funk-Hecke theorem. Recall
that this space is spanned by the harmonic reflectances, i.e., the reflectances obtained when a unit
albedo sphere is illuminated by harmonic lights. These reflectances are the result of convolving the
half cosine kernel with single harmonics. Due to the orthonormality of the spherical harmonics such a
convolution cannot produce energy in any of the other harmonics. Consequently, if we denote the harmonic
light by h_nm, then the reflectance due to this harmonic is the same harmonic, but scaled. Formally,

    r_nm = k * h_nm = c_n h_nm.    (15)

(It can be readily verified that the harmonics of the same order n but different phase m share the same
scale factor c_n.) It is therefore left to determine c_n.
To determine c_n (which is important when we enforce non-negative lighting in Sections 3.2 and 3.3)
we can use the fact that the half-cosine kernel k is an image obtained when the light is a delta function
centered in the z direction. The transform of the delta function is given by

    δ(θ, φ) = Σ_{n=0}^{∞} √((2n + 1)/(4π)) h_{n0}(θ, φ),    (16)

and the image it produces is

    k(θ) = Σ_{n=0}^{∞} k_n h_{n0}(θ, φ),    (17)

where the coefficients k_n are given in (9). c_n determines by how much the harmonic is scaled following
the convolution; therefore, it is the ratio between k_n and the respective coefficient of the delta function,
that is,

    c_n = √(4π / (2n + 1)) k_n.    (18)

The first few harmonic reflectances are given by

    r_00 = π h_00,   r_1m = (2π/3) h_1m,   r_2m = (π/4) h_2m,    (19)

for −n ≤ m ≤ n (and r_3m = 0, since c_3 = 0).
For the construction of the harmonic reflectances it is useful to express the harmonics using space
coordinates (x, y, z) rather than angles (θ, φ). This can be done by substituting the following equations
for the angles:

    z = cos θ,   x = cos φ sin θ,   y = sin φ sin θ.    (20)

The first nine harmonics then become

    h_00 = 1/√(4π),
    h_10 = √(3/(4π)) z,   h^e_11 = √(3/(4π)) x,   h^o_11 = √(3/(4π)) y,
    h_20 = (1/2) √(5/(4π)) (3z² − 1),
    h^e_21 = 3 √(5/(12π)) xz,   h^o_21 = 3 √(5/(12π)) yz,
    h^e_22 = (3/2) √(5/(12π)) (x² − y²),   h^o_22 = 3 √(5/(12π)) xy,    (21)

where the superscripts e and o denote the even and the odd components of the harmonics respectively
(h_nm = h^e_{n|m|} ± i h^o_{n|m|}, according to the sign of m; in fact the even and odd versions of the harmonics
are more convenient to use in practice since the reflectance function is real). Notice that the harmonics
are simply polynomials in these space coordinates. Below we invariably use h_nm(θ, φ) and h_nm(x, y, z)
to denote the harmonics expressed in angular and space coordinates respectively.
2.5 From Reflectances to Images
Up to this point we have analyzed the reflectance functions obtained by illuminating a unit albedo
sphere by arbitrary light. Our objective is to use this analysis to efficiently represent the set of images
of objects seen under varying illumination. An image of an object under certain illumination conditions
can be constructed from the respective reflectance function in a simple way: each point of the object
inherits its intensity from the point on the sphere whose normal is the same. This intensity is further
scaled by its albedo. In other words, given a reflectance function r(x, y, z), the image of a point p with
surface normal n = (n_x, n_y, n_z) and albedo λ(p) is given by

    I(p) = λ(p) r(n_x, n_y, n_z).    (22)
We now wish to discuss how the accuracy of our low dimensional linear approximation to a model's
images can be affected by the mapping from the reflectance function to images. We will make two
points. First, in the worst case, this can make our approximation arbitrarily bad. Second, in typical
cases it will not make our approximation less accurate.
There are two components to turning a reflectance function into an image. One is that there is
a rearrangement in the x; y position of points. That is, a particular surface normal appears in one
location on the unit sphere, and may appear in a completely different location in the image. This
rearrangement has no effect on our approximation. We represent images in a linear subspace in which
each coordinate represents the intensity of a pixel. The decision as to which pixel to represent with
which coordinate is arbitrary, and changing this decision by rearranging the mapping from (x; y) to a
surface normal merely reorders the coordinates of the space.
The second and more significant difference between images and reflectance functions is that occlu-
sion, shape variation and albedo variations affect the extent to which each surface normal on the sphere
helps determine the image. For example, occlusion ensures that half the surface normals on the sphere
will be facing away from the camera, and will not produce any visible intensities. A discontinuous
surface may not contain some surface normals, and a surface with planar patches will contain a single
normal over an extended region. In between these extremes, the curvature at a point will determine
the extent to which its surface normal contributes to the image. Albedo has a similar effect. If a point
is black (zero albedo) its surface normal has no effect on the image. In terms of energy, darker pixels
contribute less to the image than brighter pixels. Overall, these effects are captured by noticing that
the extent to which the reflectance of each point on the unit sphere influences the image can range
from zero to the entire image.
We will give an example to show that in the worst case this can make our approximation arbitrarily
bad. First, one should notice that at any single point, a low-order harmonic approximation to a function
can be arbitrarily bad (this can be related to the Gibbs phenomenon in the Fourier domain). Consider
the case of an object that is a sphere of constant albedo (this example is adapted from Belhumeur
and Kriegman [1]). If the light is coming from a direction opposite the viewing direction, it will not
illuminate any visible pixels. We can then shift the light slightly, so that it illuminates just one pixel
on the boundary of the object; by varying the intensity of the light we can give this pixel any desired
intensity. A series of lights can do this for every pixel on the rim of the sphere. If there are n such
pixels, the set of images we get fully occupies the positive orthant of an n-dimensional space. Obviously,
points in this space can be arbitrarily far from any 9D space. What is happening is that all the energy
in the image is concentrated in those surface normals for which our approximation happens to be poor.
However, generally, things will not be so bad. In general, occlusion will render an arbitrary half of
the normals on the unit sphere invisible. Albedo variations and curvature will emphasize some normals,
and deemphasize others. But in general, the normals whose reflectances are poorly approximated
will not be emphasized more than any other reflectances, and we can expect our approximation of
reflectances on the entire unit sphere to be about as good over those pixels that produce the intensities
visible in the image.
Therefore, we assume that the subspace results for the reflectance functions carry on to the images
of objects. Thus we approximate the set of images of an object by a linear space spanned by what we
call harmonic images, denoted b nm . These are images of the object seen under harmonic light. These
images are constructed as in (22) as follows:

    b_nm(p) = λ(p) r_nm(n_x, n_y, n_z).    (23)
Note that b 00 is an image obtained under constant, ambient light, and so it contains for every point
simply the surface albedo at the point (scaled by a constant factor). The first order harmonic images
Figure 3: We show the first nine harmonic images for a model of a face. The top row contains the zero'th harmonic (left)
and the three first order harmonic images (right). The second row shows the images derived from the second harmonics.
Negative values are shown in black, positive values in white.
b 1m are images obtained under cosine lighting centered at the three main axes. These images are,
for every point, the three components of the surface normals scaled by the albedos, and an additional
constant. (See a discussion of past use of these images in Section 3.) The higher order harmonic images
contain polynomials of the surface normals scaled by the albedo. Figure 3 shows the first nine harmonic
images derived from a 3D model of a face.
We can write this more explicitly, combining Equations 21 and 23. Let p_i denote the i'th object
point. Let λ denote a vector of the object's albedos, that is, λ_i is the albedo of p_i. Similarly, let
n_x, n_y and n_z denote three vectors of the same length that contain the x, y and z components of the surface
normals, so that, for example, n_{x,i} (the i'th component of n_x) is the x component of the surface normal
of p_i. Further, let n_x² denote a vector such that n_{x²,i} = (n_{x,i})², and define n_y², n_z², n_xy, n_xz and n_yz
similarly, where, for example, n_{xy,i} = n_{x,i} n_{y,i}. Finally, we will write λ .* v to denote the component-wise
product of λ with any vector v (this is MATLAB's notation). That is, this product scales the
components of a vector by the albedo associated with the point that produced that component. So
λ .* n_x is just the x components of the surface normals scaled by their albedos. Using this notation,
the first nine harmonic images become (up to the scale factors c_n of (18), which only rescale each image):

    b_00 = (1/√(4π)) λ,
    b_10 = √(3/(4π)) λ .* n_z,   b^e_11 = √(3/(4π)) λ .* n_x,   b^o_11 = √(3/(4π)) λ .* n_y,
    b_20 = (1/2) √(5/(4π)) λ .* (2 n_z² − n_x² − n_y²),
    b^e_21 = 3 √(5/(12π)) λ .* n_xz,   b^o_21 = 3 √(5/(12π)) λ .* n_yz,
    b^e_22 = (3/2) √(5/(12π)) λ .* (n_x² − n_y²),   b^o_22 = 3 √(5/(12π)) λ .* n_xy.    (24)
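For illustration, the following sketch (ours) assembles the p × 9 matrix of harmonic basis images directly from an object's albedos and surface normals; the harmonic-reflectance scale factors c_n of (18) are omitted since rescaling the columns does not change the subspace they span.

    import numpy as np

    def harmonic_images(albedo, nx, ny, nz):
        """Build the nine harmonic basis images of Eq. (24) from per-pixel albedo and
        surface normals (all 1D arrays of the same length). Returns a p x 9 matrix B."""
        lam = albedo
        a0 = 1.0 / np.sqrt(4 * np.pi)
        a1 = np.sqrt(3.0 / (4 * np.pi))
        a2 = 0.5 * np.sqrt(5.0 / (4 * np.pi))
        a3 = 3.0 * np.sqrt(5.0 / (12 * np.pi))
        cols = [a0 * lam,
                a1 * lam * nz, a1 * lam * nx, a1 * lam * ny,
                a2 * lam * (3 * nz ** 2 - 1),
                a3 * lam * nx * nz, a3 * lam * ny * nz,
                0.5 * a3 * lam * (nx ** 2 - ny ** 2), a3 * lam * nx * ny]
        return np.stack(cols, axis=1)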
Recognition
We have developed an analytic description of the linear subspace that lies near the set of images that
an object can produce. We now show how to use this description to recognize objects. Although our
method is suitable for general objects, we will give examples related to the problem of face recognition.
We assume that an image must be compared to a data base of models of 3D objects. We will assume
that the pose of the object is already known, but that its identity and lighting conditions are not. For
example, we may wish to identify a face that is known to be facing the camera. Or we may assume
that either a human or an automatic system have identified features, such as the eyes and the tip of
the nose, that allow us to determine pose for each face in the data base, but that the data base is too
big to allow a human to select the best match.
Recognition proceeds by comparing a new image to each model in turn. To compare to a model
we compute the distance between the image and the nearest image that the model can produce. We
present two classes of algorithms that vary in their representation of a model's images. The linear
subspace can be used directly for recognition, or we can restrict ourselves to a subset of the linear
subspace that corresponds to physically realizable lighting conditions.
We will stress the advantages we gain by having an analytic description of the subspace available,
in contrast to previous methods in which PCA could be used to derive a subspace from a sample of an
object's images. One advantage of an analytic description is that we know this provides an accurate
representation of an object's images, not subject to the vagaries of a particular sample of images. A
second advantage is efficiency; we can produce a description of this subspace much more rapidly than
PCA would allow. The importance of this advantage will depend on the type of recognition problem
that we tackle. In particular, we are interested in recognition problems in which the position of an
object is not known in advance, but can be computed at run-time using feature correspondences. In this
case, the linear subspace must also be computed at run-time, and the cost of doing this is important.
Finally, we will show that when we use a 4D linear subspace, an analytic description of this subspace
allows us to incorporate the constraint that the lighting be physically realizable in an especially simple
and efficient way.
3.1 Linear Methods
The most straightforward way to use our prior results for recognition is to compare a novel image to
the linear subspace of images that correspond to a model (D'Zmura [6] also makes this suggestion). To
do this, we produce the harmonic basis images of each model, as described in Section 2.5. Given an
image I we seek a vector a that minimizes kBa \Gamma Ik, where B denotes the basis images, B is p \Theta r, p
is the number of points in the image, and r is the number of basis images used. As discussed above,
nine is a natural value to use for r, but r = 4 provides greater efficiency while offers even better
potential accuracy. Every column of B contains one harmonic image b nm . These images form a basis
for the linear subspace, though not an orthonormal one. So we apply a QR decomposition to B to
obtain such a basis. We compute Q, a p \Theta r matrix with orthonormal columns, and R, an r \Theta r matrix
so that an r \Theta r identity matrix. We can then compute the distance from the
image, I, and the space spanned by B as kQQ T I \Gamma Ik. The cost of the QR decomposition is O(pr 2 ),
assuming p ?? r.
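A minimal sketch (ours) of this linear recognition test:

    import numpy as np

    def subspace_distance(B, I):
        """Distance between image I (length-p vector) and the space spanned by the
        columns of the p x r harmonic-basis matrix B, via a thin QR decomposition."""
        Q, _ = np.linalg.qr(B)                 # Q is p x r with orthonormal columns
        return np.linalg.norm(Q @ (Q.T @ I) - I)

    # Recognition: the model whose basis minimizes subspace_distance(B_model, I) is the match.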
In contrast to this, prior methods have sometimes performed PCA on a sample of images to find a
linear subspace representing an object. Hallinan [12] performed experiments indicating that PCA can
produce a five or six dimensional subspace that accurately models a face. Epstein et al. [7] and Yuille
et al. [30] describe experiments on a wider range of objects that indicate that images of Lambertian
objects can be approximated by a linear subspace of between three and seven dimensions. Specifically,
the set of images of a basketball were approximated to 94.4% by a 3D space and to 99.1% by a 7D
space, while the images of a face were approximated to 90.2% by a 3D space and to 95.3% by a 7D
space. Georghides et al. [9] render the images of an object and find an 11D subspace that approximates
these images.
These numbers are roughly comparable to the 9D space that, according to our analysis, approximates
the images of a Lambertian object. Additionally, we note that the basis images of an object
will not generally be orthogonal, and can in some cases be quite similar. For example, if the z components
of the surface normals of an object do not vary much, then some of the harmonic images will
be quite similar, such as λ vs. λ .* n_z. This may cause some components to be less significant, so that a
lower-dimensional approximation can be fairly accurate.
When s sampled images are used (typically s >> r), with s << p, PCA requires O(ps²). Also, in
MATLAB, PCA of a thin, rectangular matrix seems to take exactly twice as long as its QR decomposition.
Therefore, in practice, PCA on the matrix constructed by Georghides et al. would take about 150
times as long as using our method to build a 9D linear approximation to a model's images (this is for
their number of sampled images s and r = 9; one might expect p to be about 10,000, but this does not affect the relative costs
of the methods). This may not be too significant if pose is known ahead of time and this computation
takes place off line. But when pose is computed at run time, the advantages of our method can become
very great.
It is also interesting to compare our method to another linear method, due to Shashua [25] and
Moses [20]. Shashua points out that in the absence of attached shadows, every possible image of an
object is a linear combination of the x, y and z components of the surface normals, scaled by the albedo.
He therefore proposes using these three components to produce a 3D linear subspace to represent a
model's images. Notice that these three vectors are identical, up to a scale factor, to the basis images
produced by the first order harmonics in our method.
While this equivalence is clear algebraicly, we can also explain it as follows. The first order harmonic
images are images of any object subjected to a lighting condition described by a single harmonic. The
Funk-Hecke theorem ensures that all components of the kernel describing the reflectance function will
be irrelevant to this image except for the first order components. In Shashua's work, the basis images
are generated by using a point source as the lighting function, which contains all harmonics. But the
kernel used is the full cosine function of the angle between the light and the surface normal. This kernel
has components only in the first harmonic. So all other components of the lighting are irrelevant to
the image. In either case, the basis images are due only to the first set of harmonics.
We can therefore interpret Shashua's method as also making an analytic approximation to a model's
images, using low order harmonics. However, our previous analysis tells us that the images of the first
harmonic account for only 50% of the energy passed by the half-cosine kernel. Furthermore,
in the worst case it is possible for the lighting to contain no component in the first harmonic. Most
notably, Shashua's method does not make use of the DC component of the images, i.e., of the zero'th
harmonic. These are the images produced by a perfectly diffuse light source. Non-negative lighting must
always have a significant DC component. Koenderink and van Doorn [18] have suggested augmenting
Shashua's method with this diffuse component. This results in a linear method that uses the four most
significant harmonic basis images, although Koenderink and van Doorn propose this as apparently an
heuristic suggestion, without analysis or reference to a harmonic representation of lighting.
3.2 Enforcing Positive Light
When we take arbitrary linear combinations of the harmonic basis images, we may obtain images that
are not physically realizable. This is because the corresponding linear combination of the harmonics
representing lighting may contain negative values. That is, rendering these images may require negative
"light", which of course is physically impossible. In this section we show how to use the basis images
while enforcing the constraint of non-negative light. Belhumeur and Kriegman [1] have shown that
the set of images of an object produced by non-negative lighting is a convex cone in the space of all
possible images. They call this the illumination cone. We show how to compute approximations to
this cone in the space spanned by the harmonic basis images.
Specifically, given an image I we attempt to minimize ‖Ba − I‖ subject to the constraint that the
light is non-negative everywhere along the sphere. A straightforward method to enforce positive light
is to infer the light from the images by inverting the convolution. This would yield linear constraints
in the components of a, Ha ≥ 0, where the columns of H contain the spherical harmonics h_nm.
Unfortunately, this naive method is problematic since the light may contain higher order terms that
cannot be recovered from a low order approximation of the images of the object. In addition, the
harmonic approximation of non-negative light may at times include negative values. Forcing these
values to be non-negative will lead to an incorrect recovery of the light. Below we describe a different
method in which we project the illumination cone [1] onto the low dimensional space and use this
projection to enforce non-negative lighting.
We first present a method that can use any number of harmonic basis images. A non-negative
lighting function can be written as a non-negative combination of delta functions, each representing
a point source. Denote by δ_{θ0,φ0}(θ, φ) the function returning a non-zero value at (θ0, φ0) and zero
elsewhere, integrating to 1. This lighting function represents a point source at direction (θ0, φ0). To project the
function onto the first few harmonics we need to look at the harmonic transform of the delta function.
Since the inner product of δ_{θ0,φ0} with a function f returns simply f(θ0, φ0), it follows
that the harmonic transform of the delta function is given by

  δ_{θ0,φ0}(θ, φ) = Σ_{n=0..∞} Σ_{m=−n..n} h_nm(θ0, φ0) h_nm(θ, φ).

The projection of the delta function onto the first few harmonics, therefore, is obtained by taking the
sum only over the first few terms.
Suppose now that a non-negative lighting function ℓ(θ, φ) is expressed as a non-negative combination
of delta functions,

  ℓ(θ, φ) = Σ_{j=1..J} a_j δ_{θj,φj}(θ, φ),   a_j ≥ 0,

for some J. Obviously, due to the linearity of the harmonic transform, the transform of ℓ is a non-negative
combination of the transforms of the delta functions with the same coefficients. That is,

  ℓ(θ, φ) = Σ_{j=1..J} a_j Σ_{n=0..∞} Σ_{m=−n..n} h_nm(θj, φj) h_nm(θ, φ).

Likewise, the image of an object illuminated by ℓ can be expressed as a non-negative combination,

  I = Σ_{j=1..J} a_j Σ_{n=0..∞} Σ_{m=−n..n} h_nm(θj, φj) b_nm,

where b_nm are the harmonic images defined in the previous section.
Given an image, our objective is to recover the non-negative coefficients a_j. Assume we consider an
approximation of order N, and denote the number of harmonics required for spanning the space by r = (N+1)^2.
In matrix notation, denote the harmonic functions by H; H is s × r, where s is the number of sample points
on the sphere. The columns of H contain a sampling of the harmonic functions, while its rows contain the
transform of the delta functions. Further, denote by B the basis images; B is p × r, where p is the number of
points in the image. Every column of B contains one harmonic image b_nm. Finally, denote by a the s-vector of
non-negative point-source coefficients and by I the image; our objective is to solve the
non-negative least squares problem:

  min_a ‖B Hᵀ a − I‖  s.t.  a ≥ 0.   (29)
We can further project the image to the r-dimensional space spanned by the harmonic images and
solve the optimization problem in this smaller space. To do so we apply a QR decomposition to B, as
described previously. We obtain:
  min_a ‖R Hᵀ a − Qᵀ I‖  s.t.  a ≥ 0.

Now R is r × r and Qᵀ I is an r-vector.
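The reduced problem above is a standard non-negative least squares (NNLS) problem. The following Python sketch is an illustration rather than code from the paper; the array names B, H and I are assumed to hold the quantities defined above.

```python
import numpy as np
from scipy.optimize import nnls
from scipy.linalg import qr

def recover_nonneg_lighting(B, H, I):
    """Solve min_a ||B H^T a - I|| subject to a >= 0.

    B : (p, r) harmonic basis images (one image per column).
    H : (s, r) harmonic functions sampled at s points on the sphere.
    I : (p,)  query image, flattened.
    Returns the non-negative point-source coefficients a (length s)
    and the rendered approximation B H^T a.
    """
    # Project into the r-dimensional space spanned by the basis images:
    # with B = Q R (Q has orthonormal columns), ||B H^T a - I|| differs
    # from ||R H^T a - Q^T I|| only by the constant residual of I outside
    # the span of B, so the minimizer is unchanged.
    Q, R = qr(B, mode='economic')
    a, _ = nnls(R @ H.T, Q.T @ I)
    return a, B @ (H.T @ a)
```

Because R Hᵀ is only r × s, the solver works entirely in the small space spanned by the harmonic images, as intended.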
Note that this method is similar to that presented in Georghiades et al. [8]. The primary difference
is that we work in a low-dimensional space constructed for each model using its harmonic basis images.
Georghiades et al. perform a similar computation after projecting all images into a 100-dimensional
space constructed using PCA on images rendered from models in a ten-model database. Also we do
not need to explicitly render images using a point source, and project them into a low-dimensional
space. In our representation the projection of these images is simply H T .
3.3 Recognition with Four Harmonics
A further simplification can be obtained if the set of images of an object is approximated only up
to first order. Four harmonics are required in this case. One is the DC component, representing the
appearance of the object under uniform ambient light, and three are the basis images also used by
Shashua. Again, we attempt to minimize ‖Ba − I‖ (now B is p × 4) subject to the constraint that the
light is non-negative everywhere along the sphere.
As before, we determine the constraints by projecting the delta functions onto the space spanned
by the first four harmonics. However, now this projection takes a particularly simple form. Consider
a delta function δ_{θ0,φ0}. Its first order approximation is given by

  δ'_{θ0,φ0}(θ, φ) = Σ_{n=0..1} Σ_{m=−n..n} h_nm(θ0, φ0) h_nm(θ, φ).

Using space coordinates this approximation becomes

  δ'_{θ0,φ0} = 1/(4π) + (3/(4π)) (x0 x + y0 y + z0 z).   (32)

Let

  ℓ' = l_00 h_00 + l_1−1 h_1−1 + l_10 h_10 + l_11 h_11

be the first order approximation of a non-negative lighting function ℓ. ℓ is a non-negative combination
of delta functions. It can be readily verified that such a combination cannot decrease the zero order coefficient
relative to the first order ones. Consequently, any non-negative combination of delta functions
must satisfy

  l_1−1^2 + l_10^2 + l_11^2 ≤ 3 l_00^2.   (34)

(Equality is obtained when the light is a delta function, see (32).) Therefore, we can express the
problem of recognizing an object with a 4D harmonic space as minimizing ‖Ba − I‖ subject to (34).
In the four harmonic case the harmonic images are just the albedos and the components of the
surface normals scaled by the albedos, each scaled by some factor. It is therefore natural to use those
directly and hide the scaling coefficients within the constraints. Let I be an image of the object
illuminated by ℓ; then, using (19) and (23), I is a linear combination of the DC harmonic image and the
three first-order harmonic images, with coefficients determined by the lighting coefficients l_00, l_1−1,
l_10 and l_11, where λ and (n_x, n_y, n_z) are respectively the albedo and the surface normal of an object point.
Using the unscaled basis images λ, λn_x, λn_y, and λn_z, this equation can be written as

  I = a_1 λ + a_2 λn_x + a_3 λn_y + a_4 λn_z,   (36)

with each a_i absorbing both a lighting coefficient and the corresponding scaling factor of the harmonic
image. Substituting for the a_i's in (34) we obtain a quadratic constraint on (a_1, a_2, a_3, a_4),
which simplifies to the constraint (38).
Figure 4: Test images used in the experiments.
Consequently, to find the nearest image in the space spanned by the first four harmonic images with
non-negative light we may minimize the difference between the two sides of (36) subject to (38). This
problem has the general form min_x ‖Ax − b‖ subject to xᵀ B x ≥ 0.
We show in Appendix B that by diagonalizing A and B simultaneously and introducing a Lagrange multiplier
the problem can be solved by finding the roots of a six degree polynomial with a single variable,
the Lagrange multiplier. With this manipulation solving the minimization problem is straightforward.
3.4 Experiments
We have experimented with these recognition methods using a database of faces collected at NEC,
Japan. The database contains 3D models of 42 faces, including models of their albedos in the red,
green and blue color channels. As query images we use 42 images each of ten individuals, taken across
seven different poses and six different lighting conditions (shown in Figure 4). In our experiment, each
of the query images is compared to each model.
Figure 5: ROC curves for the 9D linear, 9D non-negative lighting, and 4D non-negative lighting recognition methods.
In all methods, we first obtain a 3D alignment between the model and the image, using the algorithm
of Blicher and Roy [2]. In brief, features on the faces were identified by hand, and then a 3D rigid
transformation was found to align the 3D features with the corresponding 2D image features.
In all methods, we only pay attention to image pixels that have been matched to some point in
the 3D model of the face. We also ignore image pixels that are of maximum intensity, since these may
be saturated, and provide misleading values. Finally, we subsample both the model and the image,
replacing each m × m square with its average values. Preliminary experiments indicate that we can
subsample quite a bit without significantly reducing accuracy. In the experiments below, we ran all
algorithms subsampling with 16 × 16 squares, while the original images were 640 × 480.
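As a small illustration of the preprocessing described above (the 16 × 16 block size comes from the text; the saturation threshold of 255 is an assumption, not a value given in the paper), a block-averaging routine that also skips saturated pixels might look like the following:

```python
import numpy as np

def block_average(img, m=16, sat=255):
    """Subsample an image by replacing each m x m block with its mean,
    ignoring saturated pixels (which may carry misleading values)."""
    h, w = (img.shape[0] // m) * m, (img.shape[1] // m) * m
    blocks = img[:h, :w].reshape(h // m, m, w // m, m).astype(float)
    valid = blocks < sat                      # mask out saturated pixels
    sums = np.where(valid, blocks, 0.0).sum(axis=(1, 3))
    counts = valid.sum(axis=(1, 3))
    return np.where(counts > 0, sums / np.maximum(counts, 1), 0.0)
```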
Our methods produce coefficients that tell us how to linearly combine the harmonic images to
produce the rendered image. These coefficients were computed on the sampled image, but then applied
to harmonic images of the full, unsampled image. This process was repeated separately for each color
channel. Then, a model was compared to the image by taking the root mean squared error, derived
from the distance between the rendered face model and all corresponding pixels in the image.
Figure 5 shows ROC curves for three recognition methods: the 9D linear method, and the methods
that enforce positive lighting in 9D and 4D. The curves show the percentage of query images for which
the correct model is classified among the top k, as k varies from 1 to 40. The 4D positive lighting
method performs significantly less well than the others, getting the correct answer about 60% of the
time. However, it is much faster, and seems to be quite effective under the simpler pose and lighting
conditions. The 9D linear method and 9D positive lighting method each pick the correct model first 86%
of the time. With this data set, the difference between these two algorithms is quite small compared
to other sources of error. These may include limitations in our model for handling cast shadows and
specularities, but also includes errors in the model building and pose determination processes. In fact,
on examining our results we found that one pose (for one person) was grossly wrong because a human
operator selected feature points in the wrong order. We eliminated the six images (under six lighting
conditions) that used this pose from our results.
In general, it is a subject of future work to consider how this sort of analysis may be applied to more
complex imaging situations that include specularities and cast shadows. However, in this section we
will make one basic remark about these situations.
We note that a low-dimensional set of images can also result when the lighting itself is low-
dimensional. This can occur when the lights are all diffuse, as when the sun is behind clouds or
lighting is due to inter-reflections. In this case, the lighting itself may be well approximated by only
low order harmonics. If the lighting is a linear combination of a small number of harmonics, then
images will be a linear combination of those produced when the scene is rendered separately by each
of these harmonics. This low-dimensionality is due simply to the linearity of lighting, the fact that the
sum of two images produced by any two lighting conditions will be the image produced by the sum
of these lighting conditions. Therefore, this will be true under the most general imaging assumptions,
including cast shadows and specularities.
We also note that with specular objects, the bidirectional reflection distribution function (BRDF)
is generally much more sharply peaked than it is with the cosine function. This provides the intuition
that specular objects will be more affected by high-order harmonic components of the lighting. In the
extreme case of a mirror, the entire lighting function passes into the reflectance function, preserving all
components of the lighting. Therefore, we expect that for specular objects, a low order approximation
to the image set will be less accurate. A representation in terms of harmonic images may still provide
a useful approximation, however. This is consistent with the experiments of Epstein et al. [7].
Conclusions
Lighting can be arbitrarily complex. But in many cases its effect is not. When objects are Lambertian,
we show that a simple, nine-dimensional linear subspace can capture the set of images they produce.
This explains prior empirical results. It also gives us a new and effective way of understanding the
effects of Lambertian reflectance as that of a low-pass filter on lighting.
Moreover, we show that this 9D space can be directly computed from a model, as low-degree
polynomial functions of its scaled surface normals. This description allows us to produce efficient
recognition algorithms in which we know we are using an accurate approximation to the model's
images. We can compare models to images in a 9D space that captures at least 98% of the energy of
all the model's images. We can enforce the constraint that lighting be positive by performing a non-negative
least squares optimization in this 9D space. Or, if we are willing to settle for a less accurate
approximation, we can compute the positive lighting that best matches a model to an image by just
solving a six-degree polynomial in one variable. We evaluate the effectiveness of all these algorithms
using a data base of models and images of real faces.
Appendix
A The Harmonic Transform of the Lambertian Kernel
The Lambertian kernel is given by k(θ) = max(cos θ, 0), where θ denotes the solid angle between the
light direction and the surface normal. The harmonic transform of k is defined as

  k(θ) = Σ_{n=0..∞} Σ_{m=−n..n} k_nm h_nm(θ, φ),

where the coefficients k_nm are given by

  k_nm = ∫_0^{2π} ∫_0^{π} k(θ) h_nm(θ, φ) sin θ dθ dφ.

Without loss of generality, we set the coordinate system on the sphere as follows. We position one of
the poles at the center of k; θ then represents the angle along a longitude and varies from 0 to π, and φ
represents an angle along a latitude and varies from 0 to 2π. In this coordinate system k is independent
of φ and is rotationally symmetric about the pole. Consequently, all its energy is split between the
zonal harmonics (the harmonics with m = 0), and the coefficients for every m ≠ 0 vanish. Below we
denote k_n = k_{n0}.
We next determine an explicit form for the coefficients k_n. First, we can limit the integration to
the positive portion of the cosine function by integrating over θ only to π/2, that is,

  k_n = ∫_0^{2π} ∫_0^{π/2} cos θ h_{n0}(θ) sin θ dθ dφ = 2π ∫_0^{π/2} cos θ h_{n0}(θ) sin θ dθ,

where h_{n0}(θ) = sqrt((2n+1)/(4π)) P_n(cos θ), and P_n(z) is the Legendre polynomial of order n defined by
Rodrigues' formula,

  P_n(z) = (1/(2^n n!)) d^n/dz^n (z^2 − 1)^n.

Substituting z = cos θ we obtain

  k_n = sqrt((2n+1)π) ∫_0^1 z P_n(z) dz.

We now turn to computing the integral ∫_0^1 z P_n(z) dz. This integral is equal to

  (1/(2^n n!)) ∫_0^1 z d^n/dz^n (z^2 − 1)^n dz.

Integrating by parts yields

  (1/(2^n n!)) [ z d^{n−1}/dz^{n−1} (z^2 − 1)^n ]_0^1 − (1/(2^n n!)) ∫_0^1 d^{n−1}/dz^{n−1} (z^2 − 1)^n dz.

The first term vanishes and we are left with

  −(1/(2^n n!)) ∫_0^1 d^{n−1}/dz^{n−1} (z^2 − 1)^n dz = −(1/(2^n n!)) [ d^{n−2}/dz^{n−2} (z^2 − 1)^n ]_0^1.

This expression vanishes at z = 1 (for n ≥ 2), so we obtain

  (1/(2^n n!)) d^{n−2}/dz^{n−2} (z^2 − 1)^n |_{z=0}.

Expand (z^2 − 1)^n by the binomial formula. When we take the (n−2)'th derivative, all terms whose exponent is less than
n − 2 vanish; since we are evaluating the derivative at z = 0, all the terms whose exponent is larger than n − 2 vanish as well.
Thus, only the term whose exponent is n − 2 survives. Denote its coefficient by b_{n−2}; then,
when n is odd b_{n−2} = 0, and when n is even

  b_{n−2} = (−1)^{n/2+1} C(n, n/2 − 1).

In this case

  d^{n−2}/dz^{n−2} (z^2 − 1)^n |_{z=0} = (n − 2)! b_{n−2},

and we obtain

  ∫_0^1 z P_n(z) dz = (−1)^{n/2+1} (n − 2)! / (2^n (n/2 − 1)! (n/2 + 1)!).

The above derivation holds for n ≥ 2. The special cases n = 0 and n = 1 should be handled
separately. In the first case P_0(z) = 1 and in the second case P_1(z) = z; the integral becomes 1/2
for n = 0 and 1/3 for n = 1. Consequently,

  k_0 = sqrt(π)/2,
  k_1 = sqrt(π/3),
  k_n = 0                                                            (n ≥ 2, n odd),
  k_n = (−1)^{n/2+1} sqrt((2n+1)π) (n − 2)! / (2^n (n/2 − 1)! (n/2 + 1)!)   (n ≥ 2, n even).
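As a sanity check on the closed form above, the coefficients can be compared against direct numerical integration of the defining integral. The following Python sketch is an illustration (not part of the original derivation) and uses SciPy's quadrature and Legendre routines:

```python
import numpy as np
from math import pi, factorial
from scipy.integrate import quad
from scipy.special import eval_legendre

def k_numeric(n):
    # k_n = 2*pi * Integral_0^{pi/2} cos(t) * h_n0(t) * sin(t) dt,
    # with h_n0(t) = sqrt((2n+1)/(4*pi)) * P_n(cos(t)).
    f = lambda t: (np.cos(t) * np.sqrt((2 * n + 1) / (4 * pi))
                   * eval_legendre(n, np.cos(t)) * np.sin(t))
    val, _ = quad(f, 0.0, pi / 2)
    return 2 * pi * val

def k_closed(n):
    if n == 0:
        return np.sqrt(pi) / 2
    if n == 1:
        return np.sqrt(pi / 3)
    if n % 2 == 1:
        return 0.0
    return ((-1) ** (n // 2 + 1) * np.sqrt((2 * n + 1) * pi) * factorial(n - 2)
            / (2 ** n * factorial(n // 2 - 1) * factorial(n // 2 + 1)))

for n in range(7):
    print(n, k_numeric(n), k_closed(n))
```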
B Recognition with Four Harmonics
Finding the nearest image in the 4D harmonic space subject to the constraint that the light is non-negative
has the general form

  min_x ‖Ax − b‖  s.t.  xᵀ B x ≥ 0,

with A (n × 4), b (n × 1), and B (4 × 4). In this representation the columns of A contain the unscaled
basis images, b is the image to be recognized, and B encodes the quadratic constraint (38). (The method we
present below, however, can be used with an arbitrary nonsingular matrix B.)
First, we can solve the unconstrained least squares problem and check if this solution satisfies the
constraint. If it does, we are done. If not, we must seek a minimum that occurs when the constraint is
satisfied at equality. We will divide the solution into two parts. In the first part we will convert the
problem to the form

  min_z ‖z − c‖  s.t.  zᵀ D z ≥ 0,

with D diagonal. Later, we will show how to turn the new problem into a sixth degree polynomial.
Step 1:
First, we can assume WLOG that b resides in the column space of A, since the component of b
orthogonal to this space does not affect the solution to the problem. Furthermore, since b lies in the
column space of A we can assume that A is 4 × 4 full rank and b is 4 × 1. This can be achieved, for
example, using a QR decomposition. Now, define b′ such that Ab′ = b (this is possible because A is
full rank). Then ‖Ax − b‖ = ‖A(x − b′)‖, implying that our problem is equivalent to

  min_x ‖A(x − b′)‖  s.t.  xᵀ B x ≥ 0.

Using the method presented in Golub and van Loan [11] (see the second edition, pages 466-471,
especially algorithm 8.7.1) we simultaneously diagonalize AᵀA and B. This will produce a non-singular
matrix X such that Xᵀ AᵀA X = I and Xᵀ B X = D, where I denotes the identity matrix and D is a 4 × 4
diagonal matrix. Thus, we obtain

  ‖A(x − b′)‖^2 = ‖X^{−1}(x − b′)‖^2  and  xᵀ B x = (X^{−1}x)ᵀ D (X^{−1}x),

where X^{−1} denotes the inverse of X and X^{−T} the transpose of that inverse. Denote z = X^{−1} x and
c = X^{−1} b′; then we obtain

  min_z ‖z − c‖  s.t.  zᵀ D z ≥ 0.

This has the desired form.
Step 2:
At this point we attempt to solve a problem of the form

  min_z ‖z − c‖  s.t.  zᵀ D z = 0.

We solve this minimization problem using Lagrange multipliers, that is, we minimize

  ‖z − c‖^2 + λ zᵀ D z.

Taking the derivatives with respect to z and λ we get

  (z − c) + λ D z = 0

and

  zᵀ D z = 0.

From the first equation we get z = (I + λD)^{−1} c. Since D is diagonal, the components of z are given by

  z_i = c_i / (1 + λ d_i),

where d_i is the i'th diagonal element of D. Substituting into the constraint zᵀ D z = 0 gives

  Σ_i d_i c_i^2 / (1 + λ d_i)^2 = 0,

which, after multiplying out the denominators, becomes a sixth degree polynomial in λ. This polynomial
can be efficiently and accurately solved using standard techniques (we use the MATLAB function roots).
We plug in all solutions to determine z, as indicated above, and choose the real solution that minimizes
our optimization criterion.
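A minimal numerical sketch of Step 2 is given below. It is an illustration under the assumptions of this appendix (d holds the diagonal entries of D and c is the transformed target vector), and it uses numpy.roots in place of the MATLAB roots function mentioned in the text:

```python
import numpy as np

def solve_step2(d, c):
    """Minimize ||z - c|| subject to z^T diag(d) z = 0.

    From z = (I + lam*D)^{-1} c, the constraint sum_i d_i c_i^2 / (1 + lam*d_i)^2 = 0
    becomes, after clearing denominators, a polynomial in lam of degree 2*(len(d)-1)
    (degree six for the 4D case).
    """
    d = np.asarray(d, dtype=float)
    c = np.asarray(c, dtype=float)
    # numerator polynomial: sum_i d_i c_i^2 * prod_{j != i} (1 + lam d_j)^2
    poly = np.zeros(1)
    for i in range(len(d)):
        term = np.array([d[i] * c[i] ** 2])
        for j in range(len(d)):
            if j != i:
                lin = np.array([d[j], 1.0])          # d_j * lam + 1
                term = np.polymul(term, np.polymul(lin, lin))
        poly = np.polyadd(poly, term)
    best, best_err = None, np.inf
    for lam in np.roots(poly):
        if abs(lam.imag) > 1e-9:
            continue
        lam = lam.real
        denom = 1.0 + lam * d
        if np.any(np.abs(denom) < 1e-12):
            continue
        z = c / denom
        err = np.linalg.norm(z - c)
        if err < best_err:
            best, best_err = z, err
    return best
```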
Acknowledgements
We are grateful to Bill Bialek, Peter Blicher, Mike Langer and Warren Smith. Their insightful comments
and suggestions have been of great assistance to us. We are also grateful to Rui Ishiyama,
Shizuo Sakamoto, and Johji Tajima for their helpful comments and for providing us with data for our
experiments.
--R
"What is the Set of Images of an Object Under All Possible Lighting Conditions?"
"Fast Lighting/Rendering Solution for Matching a 2D Image to a Database of 3D Models."
"Bidirectional Reflection Functions from Surface Bump Maps"
"Training Models of Shape from Sets of Examples,"
"A quick rendering method using basis functions for interactive lighting design,"
"5 \Sigma 2 Eigenimages Suffice: An Empirical Investigation of Low-Dimensional Lighting Models,"
"Illumination Cones for Recognition Under Variable Lighting: Faces"
"From Few to Many: Generative Models for Recognition Under Variable Pose and Illumination"
Geometric applications of Fourier series and spherical harmonics
Matrix Computations
"A Low-Dimensional Representation of Human Faces for Arbitrary Lighting Condi- tions"
"Photometric stereo under a light source with arbitrary motion,"
"Comparing Images Under Variable Illumination,"
"The application of the Karhunen-Loeve procedure for the characterization of human faces"
"Bidirectional reflection distribution function expressed in terms of surface scattering modes,"
"The Generic Bilinear Calibration-Estimation Problem,"
"Photometria Sive de Mensura et Gradibus Luminus, Colorum et Umbrae"
Face recognition: generalization to novel images
Visual learning and recognition of 3D objects from appearance.
"Dimensionality of illumination manifolds in appearance matching,"
"Efficient re-rendering of naturally illuminated environ- ments,"
Personal communication.
"On Photometric Issues in 3D Visual Recognition from a Single 2D Image"
"Efficient linear re-rendering for interactive lighting de- sign,"
"Eigenfaces for Recognition,"
"Recognition by Linear Combinations of Models,"
"Predicting reflectance functions from complex surfaces,"
"Determining Generative Models of Objects Under Varying Illumination: Shape and Albedo from Multiple Images Using SVD and Integrability"
"heoretical analysis of illumination in PCA-based vision systems,"
--TR
--CTR
G. Narasimhan , Shree K. Nayar, A practical analytic single scattering model for real time rendering, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Pei Chen , David Suter, An Analysis of Linear Subspace Approaches for Computer Vision and Pattern Recognition, International Journal of Computer Vision, v.68 n.1, p.83-106, June 2006
Feng Xie , Linmi Tao, Estimating illumination parameters in real space with application to image relighting, Proceedings of the 13th annual ACM international conference on Multimedia, November 06-11, 2005, Hilton, Singapore
Ronen Basri , David Jacobs , Ira Kemelmacher, Photometric Stereo with General, Unknown Lighting, International Journal of Computer Vision, v.72 n.3, p.239-257, May 2007
Frdo Durand , Nicolas Holzschuch , Cyril Soler , Eric Chan , Franois X. Sillion, A frequency analysis of light transport, ACM Transactions on Graphics (TOG), v.24 n.3, July 2005
Ameesh Makadia , Christopher Geyer , Kostas Daniilidis, Correspondence-free Structure from Motion, International Journal of Computer Vision, v.75 n.3, p.311-327, December 2007
A. Smith , Edwin R. Hancock, Facial Shape-from-shading and Recognition Using Principal Geodesic Analysis and Robust Statistics, International Journal of Computer Vision, v.76 n.1, p.71-91, January 2008
Zhanfeng Yue , Wenyi Zhao , Rama Chellappa, Pose-encoded spherical harmonics for face recognition and synthesis using a single image, EURASIP Journal on Advances in Signal Processing, v.8 n.1, p.1-18, January 2008
Aaron Hertzmann , Steven M. Seitz, Example-Based Photometric Stereo: Shape Reconstruction with General, Varying BRDFs, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.27 n.8, p.1254-1264, August 2005
Ravi Ramamoorthi , Pat Hanrahan, A signal-processing framework for reflection, ACM Transactions on Graphics (TOG), v.23 n.4, p.1004-1042, October 2004
Wang , Xiao Wang , Jufu Feng, Subspace distance analysis with application to adaptive Bayesian algorithm for face recognition, Pattern Recognition, v.39 n.3, p.456-464, March, 2006
Ravi Ramamoorthi , Dhruv Mahajan , Peter Belhumeur, A first-order analysis of lighting, shading, and shadows, ACM Transactions on Graphics (TOG), v.26 n.1, p.2-es, January 2007
Sang-Il Choi , Chunghoon Kim , Chong-Ho Choi, Shadow compensation in 2D images for face recognition, Pattern Recognition, v.40 n.7, p.2118-2125, July, 2007
Christos-Savvas Bouganis , Mike Brookes, Multiple Light Source Detection, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.26 n.4, p.509-514, April 2004 | object recognition;linear subspaces;face recognition;specular;illumination;spherical harmonics;lambertian |
628975 | An Efficient Data Dependence Analysis for Parallelizing Compilers. | A novel algorithm, called the lambda test, is presented for an efficient and accurate data dependence analysis of multidimensional array references. It extends the numerical methods to allow all dimensions of array references to be tested simultaneously. Hence, it combines the efficiency and the accuracy of both approaches. This algorithm has been implemented in Parafrase, a Fortran program parallelization restructurer developed at the University of Illinois at Urbana-Champaign. Some experimental results are presented to show its effectiveness. | Introduction
A parallelizing compiler relies on data dependence analysis to detect
independent operations in a user's program. For array operands, the analysis needs to
check the existence of integer solutions to a linear system obtained from array
subscript expressions. In simple cases, this can be done rather easily. But in many
cases, this is not so. An analyzer can only resort to checking certain sufficient
conditions to determine the absence of a data dependence. If the conditions do not
hold, the existence of a data dependence becomes unclear. To err on the safe side, the
analyzer must assume that a data dependence exists. Most existing analysis algorithms
give only such "incomplete" tests. There are several well-known data dependence
analysis algorithms in practice. They are based on the theories of Diophantine
equations and the bounds on real functions [3], [4], [22], [1], [2]. [13] calls this class
of algorithms numerical methods. Current numerical methods handle only one single
dimension. For multi-dimensional array references, each dimension has to be tested
separatedly. These methods are generally efficient and can detect data independence in
many practical cases. Nevertheless, for complicated array subscripts, especially for
multi-dimensional arrays, their test results are often too conservative. More precise
results can be achieved by checking the consistency of a linear system of inequalities
and equalities. In theory, it could be solved by integer programming (IP) or, with
some loss of precision, by linear programming (LP). However, currently known LP
and IP algorithms, such as the simplex method [7] and the Karmarkar's method [8],
are aimed at large systems with at least a few hundred constraints and variables. They
are not suitable for data dependence analysis where a large number of small systems
need to be analyzed. Program verification faces similar problems - methods other
than IP or LP are needed to improve its efficiency. Recently, several authors proposed
to use methods such as the Fourier-Motzkin elimination and the loop residue method
(see e. g. [6], [17]) for data dependence analysis [11], [20], [13] 1 . In [21], a modified
simplex method for IP without considering the cycling problem is considered. It can
yield a conservative solution when no conclusion could be reached after a certain
number of iterations. Most of these methods can determine whether there exists a
real-valued solution to a system of equations and inequalities. Even though these
methods [11], [20], [13] still do not answer whether integer solutions exist, they are a
very good approximation in practical cases. Unfortunately, compared to earlier
numerical methods, these methods are very time consuming. The worst-case
computing time is exponential in the number of loop indices even for single-dimensional
array references. Even though empirical data on the efficiency of these
methods is very limited, some experimental results have shown that average testing
time in the Fourier-Motzkin elimination is about 22 to 28 times longer than existing
numerical methods [19]. Unless far more efficient algorithms are found, difficulties in
testing multi-dimensional array references will remain. In this paper, we extend the
existing numerical methods to overcome these difficulties. A geometrical analysis
reveals that we can take advantage of the regular shape of the convex sets derived
from multi-dimensional arrays in a data dependence test. The general methods
proposed before assume very general convex sets; this assumption causes their
inefficiency. We have implemented a new algorithm called the l-test and performed
some measurements. Results were quite encouraging (see Section 4). As in earlier
numerical methods, the proposed scheme uses Diophantine equations and bounds of
real functions. The major difference lies in the way multiple dimensions are treated.
In earlier numerical methods, data areas accessed by two array references are
examined dimension by dimension. If the examination of any dimension shows that
the two areas representing the subscript expressions are disjoint, there is no data
dependence between the two references. However, if each pair of areas appears to
overlap in each individual dimension, it is unclear whether there is an overlapped area
when all dimensions are considered simultaneously. In this case, a data dependence
has to be assumed. Our algorithm treats all dimensions simultaneously. Based on the
subscripts, it selects a few suitable "viewing angles" so that it gets an exact view of
the data areas. Selection of the viewing angles is rather straightforward and only a
few angles are needed in most cases. We present the rest of our paper as follows. In
Section 2, we give some examples to illustrate the difficulties in data dependence
analysis on multi-dimensional array references. Some measurement results on a large
set of real programs are presented to show the actual frequency of such difficult cases.
In Section 3, we describe our new algorithm and provide its theoretical background.
In Section 4, we present our experimental results and give a brief conclusion.
2. Coupled Subscripts in Multi-Dimensional Array References In this section, we
identify coupled subscripts as the main difficulty in data dependence analysis on
multi-dimensional array references. First, we show a couple of examples.
Example 2.1
In the above, r1 and r2 are two references to array A. If there is no data dependence between r1 and r2,
then both loops in the example can be parallelized. In order to determine whether there is a data
dependence between r1 and r2, a system of equations and inequalities should be examined. Let x_1 stand for i
and x_2 for j in reference r1, and x_3 for i and x_4 for j in reference r2. We equate the subscript expressions in r1
and r2 and obtain equations (2.1) and (2.2); the variables x_1, x_2, x_3, x_4 are constrained
because of the loop bounds. Data dependence exists between r1 and r2 if
and only if equations (2.1) and (2.2) have common integer solutions within the loop bounds. We can show that such
common solutions do not exist in this example. Consider the linear combination of (2.1) and (2.2) given by (2.1) × 3 − (2.2) × 2, which yields a new equation, (2.3).
Given the loop bounds, the minimum value on the left hand side of (2.3) is 17 which is larger than the right hand side
5. This means (2.1) and (2.2) have no solutions. Thus r1 and r2 are independent. However, in most existing numerical
methods, each dimension of the array A is treated separately. Therefore, instead of examining both (2.1) and (2.2)
simultaneously, most algorithms examine each equation separately. If any equation can be shown to have no solution
within the loop bounds, then there is no data dependence. But if each equation has solutions independently, the algorithm
has to assume the existence of data dependence. In (2.1) and (2.2), some loop indices appear in both dimensions
of array A (either in r1 or in r2). As a result, equations (2.1) and (2.2) share some common unknowns. We say
the references have coupled subscripts. Due to such coupled subscripts, while the simultaneous equations do not have
common solutions (as shown earlier), each individual equation may have independent solutions. As a matter of fact,
(2.1) has an integer solution, and so does (2.2). Both solutions
are within the loop bounds. No algorithms based on dimension-by-dimension approach could detect the independence
between r1 and r2. In the next example, we consider the effect of coupled subscripts on the determination of dependence
directions [22].
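The style of reasoning used in Example 2.1 can be checked mechanically: each equation is tested for solutions on its own and then jointly. The Python sketch below uses hypothetical coefficients (the example's actual subscripts are not reproduced here), chosen so that each equation is solvable within the bounds while the pair is not:

```python
from itertools import product

def has_common_solution(eq1, eq2, bounds):
    """Exhaustively check whether two linear equations share an integer
    solution within the given bounds.

    eq = (coeffs, const) represents coeffs . x = const.
    bounds = [(lo, hi), ...], one pair per variable.
    """
    def holds(eq, x):
        coeffs, const = eq
        return sum(c * xi for c, xi in zip(coeffs, x)) == const

    ranges = [range(lo, hi + 1) for lo, hi in bounds]
    alone1 = any(holds(eq1, x) for x in product(*ranges))
    alone2 = any(holds(eq2, x) for x in product(*ranges))
    both = any(holds(eq1, x) and holds(eq2, x) for x in product(*ranges))
    return alone1, alone2, both

# Hypothetical stand-in for a coupled pair (not the paper's actual Example 2.1):
eq1 = ((3, 0, 1, 0), 20)      # 3*x1 + x3 = 20
eq2 = ((2, 0, 0, -1), -5)     # 2*x1 - x4 = -5   (x1 appears in both: coupled)
print(has_common_solution(eq1, eq2, [(1, 10)] * 4))   # (True, True, False)
```

For this hypothetical pair, the combination 2 × eq1 − 3 × eq2 gives 2*x3 + 3*x4 = 55, whose left-hand side cannot exceed 50 within the bounds — the same style of argument used with (2.3) above.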
Example 2.2
Obviously there is data dependence between references r1 and r2 in the loop above. However, if the dependence does
not cross loop iterations, then the loop can still be parallelized. This is exactly the case with the inner loop: when
index i of the outer loop is fixed, r1 and r2 could never access the same data element from different iterations of the
inner loop. So the inner loop can be parallelized. Testing cross-iteration dependences is normally done by examining
dependence directions [22]. The procedure of checking these directions starts with setting up the following equations
and inequalities.
The dependence directions to be examined are specified in (2.7). We require an '=' direction on the outer
loop index because the iteration of the outer loop should be fixed. A '<' direction on the inner loop index is the condition for a data dependence (from r1 to r2) to cross the iterations of the inner
loop. Note that we also need to examine cross-iteration dependences from r2 to r1, for which we should set the inner direction to '>'.
The discussion is similar so we omit it. It is not hard to see that equations (2.4) and (2.5) do not have common solutions
satisfying both (2.6) and (2.7). However, earlier numerical methods cannot detect this, again because they treat
each dimension of an array separately. They check on two subsystems. One contains (2.4), (2.6) and (2.7). The
other contains (2.5), (2.6) and (2.7). Since each subsystem, taken by itself, has solutions satisfying (2.6)
and (2.7), the algorithms must assume that the inner loop has cross-iteration
dependences. Dimension-by-dimension approach fails to detect impossible dependence directions in this case because
some loop indices (say, index i ) appear in both dimensions of the array A, i.e., there are coupled subscripts. According
to an empirical study reported in [16], coupled subscripts appear quite frequently in real programs. In that study,
twelve Fortran program packages which contain 1074 subroutines and more than one hundred thousand lines of statements
were examined. The packages include LINPACK, EISPACK, ITPACK, FISHPAK, SPICE and others. Of all
the array references examined, two-dimensional array references account for 36.23%, and three-dimensional array
references account for 7%. The percentage of array references with more than three dimensions is negligible. In
more than four thousand pairs of two-dimensional array references with linear subscripts in DO loops, about 46%
have coupled subscripts 2 . As for array references with more than two dimensions, only 2% have coupled subscripts.
The data show several interesting things. First, multi-dimensional array references are very common. Second, coupled
subscripts appear quite frequently. Third, two-dimensional arrays are dominant in references with coupled sub-
scripts. Although a single dimension test could sometimes succeed in detecting data independence despite the coupled
subscripts, often it may fail. Therefore, it is important to have an efficient test algorithm to handle coupled subscripts.
As the data indicates, two-dimensional array references are especially important.
3. The l-test: a new algorithm We present our new algorithm, the l-test, in this section. We consider a data dependence
problem where subscript expressions are linear in terms of loop indices. Loop bounds are assumed to be con-
stant, otherwise they are replaced with their closest constant bounds. Dependence directions may also be given if
required. We consider only constant loop bounds in this paper because an efficient test in the presence of variable
loop bounds is a research topic by itself and is beyond the scope of this paper. [5] gave a very good discussion of
dependence tests for single dimensional array references in the presence of variable loop bounds. Handling coupled
subscripts in loops with variable bounds is discussed in [12]. Given the data dependence problem as specified, the l-
test examines a system of equalities and inequalities and determines whether the system has real-valued solutions.
(Section 4 will discuss integer solutions.) Some notations need be defined before we describe the test.
3.1 Notation (r_1, r_2) is a pair of references to an array A of m dimensions. r_1 is nested in l_1 loops:
r_1 = A(f_1(i_1, ..., i_{l_1}), ..., f_m(i_1, ..., i_{l_1})), where i_1, ..., i_{l_1} are the loop indices (from the outermost to the innermost).
The loops have constant lower bounds L_{i_1}, ..., L_{i_{l_1}} and constant upper bounds U_{i_1}, ..., U_{i_{l_1}}. r_2 is nested
in l_2 loops: r_2 = A(g_1(j_1, ..., j_{l_2}), ..., g_m(j_1, ..., j_{l_2})), where j_1, ..., j_{l_2} are the loop indices
(from the outermost to the innermost). The loops have constant lower bounds L_{j_1}, ..., L_{j_{l_2}} and constant upper
bounds U_{j_1}, ..., U_{j_{l_2}}. Equating the subscripts of r_1 and r_2, we have the following system:

  f_k(i_1, ..., i_{l_1}) = g_k(j_1, ..., j_{l_2}),   k = 1, 2, ..., m.   (3.1)

A data dependence between r_1 and r_2 exists only if (3.1) has an integer solution (i_1^0, ..., i_{l_1}^0, j_1^0, ..., j_{l_2}^0)
within the loop bounds; (r_1, r_2) is then said to intersect at (i_1^0, ..., j_{l_2}^0).
Assume that f_i and g_i are all linear. A dependence direction is q in {<, >, =}. A dependence direction vector is q = (q_1, ..., q_{l_c}),
where each q_i is a dependence direction [22]. Suppose r_1 and r_2 have l_c common loops, of which the
innermost is indexed by i_{l_c} in r_1 (or j_{l_c} in r_2), l_c ≤ min(l_1, l_2). (r_1, r_2) is said to intersect with dependence direction
vector q if they intersect at a point (i_1^0, ..., j_{l_2}^0) such that i_k^0 q_k j_k^0 holds for k = 1, ..., l_c. (r_1, r_2)
may intersect with more than one dependence direction vector.
We denote the set of loop indices in (r_1, r_2) by IND = {i_1, ..., i_{l_1}, j_1, ..., j_{l_2}}, and denote the index set of (r_1, r_2)
that appears in array dimension j, 1 ≤ j ≤ m, by IND_j = {x in IND : x appears in either f_j or g_j}.
(a) If IND_{j_1} ∩ IND_{j_2} ≠ ∅, then dimension j_1 and dimension j_2 are said to be coupled, and (r_1, r_2) is said to have coupled
subscripts;
(b) If dimensions j_1 and j_2 are coupled, and j_2 and j_3 are coupled, then j_1 and j_3 are also coupled.
3.2 A geometrical analysis Our algorithm is best explained with the aid of a geometrical illustration. Suppose f i 's
and g_i's in (3.1) are linear and the system has n unknowns v^(1), ..., v^(n); we can rewrite (3.1) as:

  a_1^(1) v^(1) + a_1^(2) v^(2) + ... + a_1^(n) v^(n) = c_1
  a_2^(1) v^(1) + a_2^(2) v^(2) + ... + a_2^(n) v^(n) = c_2
  ...
  a_m^(1) v^(1) + a_m^(2) v^(2) + ... + a_m^(n) v^(n) = c_m     (3.2)
We assume that there are no redundant equations in (3.2). Otherwise, they can simply be eliminated. Further, all
array dimensions are assumed to be coupled. Otherwise, (3.2) can be broken into several disjoint subsystems. Partial
solutions can be obtained for each subsystem and later merged together to form a complete solution. Of course, the
number of coupled dimensions, m, is very small in practice. An especially important case is
effort should be expended to derive a fast test. Geometrically, each linear equation is a hyperplane p in R n space.
The intersection, S, of the m hyperplanes corresponds to the common solutions to all the equations in (3.2). Obvi-
ously, if S is empty then there is no data dependence. Checking whether S is empty is trivial in linear algebra.
Therefore we consider only nonempty S henceforth. The loop bounds and the given dependence directions correspond
to a bounded convex set V in R n . An equation has a real-valued solution satisfying the loop bounds and the dependence
directions if and only if its corresponding hyperplane p intersects V. A dimension-by-dimension test would be
able to determine whether each p intersects V. What we want is to determine whether S itself intersects V. If any of
the hyperplanes does not intersect V, then obviously S cannot intersect V. However, even if every hyperplane in (3.2)
intersects V, it is still possible that S and V are disjoint. In Figure 1, p 1 and p 2 are two such hyperplanes representing
two equations from the system, each of which intersects V. But the intersection of p 1 and p 2 is outside of V. If one
can find a new hyperplane which contains S but is disjoint from V, then it immediately follows that S and V do not
intersect. In Figure 1, p 3 is such a new hyperplane. The following theorem guarantees that if S and V are disjoint,
then there must be a hyperplane in R n which contains S and is disjoint from V. Further, this hyperplane is a linear
combination of the hyperplanes in (3.2). On the other hand, if S and V intersect, then no such linear combination
exists.
Theorem 1 S does not intersect V if and only if there exists a hyperplane, p, which corresponds to a linear combination of the equations in (3.2),

  l_1 (a_1 · v) + l_2 (a_2 · v) + ... + l_m (a_m · v) = l_1 c_1 + l_2 c_2 + ... + l_m c_m,

such that p does not intersect V. Here a_i · v denotes the inner product of a_i = (a_i^(1), a_i^(2), ..., a_i^(n)) and v = (v^(1), ..., v^(n)).
[proof] See Appendix. An array (l 1 , l 2 , ., l m ) in Theorem 1 determines a hyperplane that contains S. There are
an infinite number of such hyperplanes. The tricky part in the l-test is to examine as few hyperplanes as necessary to
determine whether S and V intersect. We start from the case of m = 2, both for the convenience of presentation and
for the practical importance of this case, as described above.
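The basic primitive suggested by Theorem 1 is a test of whether a single hyperplane meets the bounded convex set V. When V is just the box of loop bounds (no dependence directions), this reduces to comparing the extreme values of the linear form against its constant term, as in the Banerjee-Wolfe test. The Python sketch below is an illustration, not code from the paper:

```python
def plane_intersects_box(coeffs, const, bounds):
    """Check whether the hyperplane sum_i coeffs[i]*v_i = const can be
    satisfied by a real point with bounds[i][0] <= v_i <= bounds[i][1].

    The hyperplane meets the box iff the minimum and maximum of the linear
    form over the box bracket the constant term.
    """
    lo = sum(c * (b[0] if c > 0 else b[1]) for c, b in zip(coeffs, bounds))
    hi = sum(c * (b[1] if c > 0 else b[0]) for c, b in zip(coeffs, bounds))
    return lo <= const <= hi
```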
3.3 The case of 2-dimensional array references In the case of 2-dimensional array references, the equations in
(3.2) are

  f_1: a_1^(1) v^(1) + a_1^(2) v^(2) + ... + a_1^(n) v^(n) = c_1  and
  f_2: a_2^(1) v^(1) + a_2^(2) v^(2) + ... + a_2^(n) v^(n) = c_2.

For convenience, we directly refer to a linear equation as a hyperplane in R^n. An arbitrary linear combination of the two equations can be written as l_1 f_1 + l_2 f_2 = 0.
The domain of (l_1, l_2) is the whole R^2 space. Let f_{l_1,l_2} = l_1 f_1 + l_2 f_2, that is

  f_{l_1,l_2} = (l_1 a_1^(1) + l_2 a_2^(1)) v^(1) + ... + (l_1 a_1^(n) + l_2 a_2^(n)) v^(n) − (l_1 c_1 + l_2 c_2).

f_{l_1,l_2} can be viewed in two ways. With (l_1, l_2) fixed, it is a linear function of (v^(1), v^(2), ..., v^(n)) in R^n. With
(v^(1), v^(2), ..., v^(n)) fixed, it is a linear function of (l_1, l_2) in R^2. Further, the coefficient of each v^(i) in f_{l_1,l_2} is a
linear function of (l_1, l_2) in R^2, i.e. y^(i)(l_1, l_2) = l_1 a_1^(i) + l_2 a_2^(i).
Definition The equation y^(i)(l_1, l_2) = 0 is called a y-equation. A y-equation corresponds to a line in R^2 which
is called a y-line.
Definition Each y-line is the boundary of two half-spaces: Y_i^+ = {(l_1, l_2) : y^(i)(l_1, l_2) ≥ 0} and Y_i^- = {(l_1, l_2) : y^(i)(l_1, l_2) ≤ 0}.
There are at most n y-lines which together divide R^2 into at most 2n regions. Each region is a cone (Figure 2) called
a l-cone. In each l-cone, none of the functions y^(i) can change the sign of its value (except change to zero in a y-line).
This leads to the following lemma.
Lemma 1 Suppose V is defined by loop bounds but not by dependence directions. (Note: we will consider dependence
directions later.) If f_{l_1,l_2} intersects V for every (l_1, l_2) in every y-line, then f_{l_1,l_2} intersects V for
every (l_1, l_2) in R^2.
[Proof] From the well-known intermediate value theorem, f_{l_1,l_2} intersects V if and only if min(f_{l_1,l_2}) ≤ 0 ≤ max(f_{l_1,l_2})
on V. Since V is bounded and f_{l_1,l_2} is continuous on V, when (l_1, l_2) is fixed there exist v_min in V such
that min(f_{l_1,l_2}) = f_{l_1,l_2}(v_min), and v_max in V such that max(f_{l_1,l_2}) = f_{l_1,l_2}(v_max). With a fixed (l_1, l_2), it is easy to verify
that v_max and v_min are determined by the sign of the coefficient of each v^(i):

  v_max^(i) = U^(i) and v_min^(i) = L^(i),   if y^(i)(l_1, l_2) > 0,
  v_max^(i) = L^(i) and v_min^(i) = U^(i),   if y^(i)(l_1, l_2) < 0,
  v_max^(i) = x and v_min^(i) = y,           if y^(i)(l_1, l_2) = 0,

where L^(i) and U^(i) are the lower bound and the upper bound of v^(i) respectively, and x, y are arbitrary values in [L^(i), U^(i)].
The coefficient of each v^(i) in f_{l_1,l_2} does not change its sign in each l-cone. It follows that v_max and v_min
remain the same in each l-cone. Therefore,

  f_{l_1,l_2}(v_max) = y^(1)(l_1, l_2) v_max^(1) + ... + y^(n)(l_1, l_2) v_max^(n) − (l_1 c_1 + l_2 c_2)

is a linear function of (l_1, l_2) in each l-cone. From the assumption of the lemma,
we have f_{l_1,l_2}(v_max) ≥ 0 on the boundaries of each cone. It is a well-known fact in convex
theory that any point (l_1, l_2) in a cone can be expressed as a non-negative linear combination of some points on the cone's
boundaries. It immediately follows that f_{l_1,l_2}(v_max) ≥ 0 is true in each l-cone. Of course it is also true in the whole R^2
space. By the same argument, we have f_{l_1,l_2}(v_min) ≤ 0 for every (l_1, l_2).
Therefore, for any (l_1, l_2), f_{l_1,l_2} intersects V in R^n space.
If V is defined by loop bounds plus dependence directions, we have a similar lemma. First we should discuss the
rules for the selection of v_min and v_max when dependence directions are given. Note that each dependence direction q
relates a unique pair of loop indices v^(i) and v^(j) which are associated with one of the common loops, with bounds L^(i), U^(i) and L^(j), U^(j).
Obviously, v_min^(i), v_min^(j), v_max^(i) and v_max^(j) should be chosen such that the partial sum a^(i)v^(i) + a^(j)v^(j) has the
minimum value at (v_min^(i), v_min^(j)) and has the maximum value at (v_max^(i), v_max^(j)). For an index not constrained by a dependence
direction, v_max^(i) and v_min^(i) should be chosen as in the proof of Lemma 1. The following rules determine the
minimum and maximum points of a function f = a^(i)v^(i) + a^(j)v^(j), where v^(i) and v^(j) are related by a dependence direction, in the convex set V defined by
both loop bounds and dependence directions.
Rules If v^(i) and v^(j) are not related by any dependence direction, their minimum and maximum values are chosen exactly as in the proof of Lemma 1. Suppose now that the direction v^(i) < v^(j) is given, so that v^(j) ≥ v^(i) + 1 inside V.
(i) Minimum point. Unless a^(i) < 0 and a^(j) > 0, the sign of each coefficient dictates the choice, as in Lemma 1, with the pair kept consistent with v^(i) < v^(j); for example, v_min^(j) = U^(j) if a^(j) < 0. If a^(i) < 0 and a^(j) > 0, these rules would conflict with the dependence direction. Instead, since v^(j) ≥ v^(i) + 1 and a^(j) > 0 imply a^(i)v^(i) + a^(j)v^(j) ≥ (a^(i) + a^(j))v^(i) + a^(j), the minimum point is determined by the sign of the combined coefficient a^(i) + a^(j).
(ii) Maximum point. Unless a^(i) > 0 and a^(j) < 0, the sign of each coefficient again dictates the choice; for example, v_max^(j) = U^(j) if a^(j) > 0, and v_max^(j) = v_max^(i) + 1 if a^(j) < 0. If a^(i) > 0 and a^(j) < 0, the substitution v^(j) = v^(i) + 1 is used, and the sign of a^(i) + a^(j) determines the choice.
Rules for the case v^(i) > v^(j) are symmetrical to those above. Details are omitted.
Definition Let f^(i,j)(l_1, l_2) be the sum of the coefficients of v^(i) and v^(j) in f_{l_1,l_2}, where v^(i) and v^(j) are related by a dependence
direction, i.e., f^(i,j)(l_1, l_2) = y^(i)(l_1, l_2) + y^(j)(l_1, l_2). From the rules stated above, it is clear that the minimum
point and the maximum point of f_{l_1,l_2} in V, in the presence of dependence directions, depend not only on the sign of
the coefficient of each v^(i) but also on the sign of f^(i,j).
Definition The equation f^(i,j)(l_1, l_2) = 0 is called a f-equation. Each f-equation corresponds to a f-line in R^2 space.
There are at most n/2 f-lines. All f-lines and y-lines divide R^2 space into at most 3n regions. Each region is a cone,
still called a l-cone. We have the following lemma similar to Lemma 1.
Lemma 2 Suppose V is defined by loop bounds as well as dependence directions. If f_{l_1,l_2} intersects V for every
(l_1, l_2) in every y-line and every f-line, then f_{l_1,l_2} intersects V for every (l_1, l_2) in R^2.
[Proof] Follow the same argument as for Lemma 1. As a matter of fact, in order to determine whether f_{l_1,l_2} intersects
V for every (l_1, l_2) in a f-line or a y-line, it suffices to test a single point in the line. This is from the following
lemma.
Lemma 3 Given a line in R^2 corresponding to an equation a·l_1 + b·l_2 = 0, if f_{l_1^0,l_2^0} intersects V for a fixed
point (l_1^0, l_2^0) ≠ (0, 0) in the line, then for every (l_1, l_2) in the line, f_{l_1,l_2} intersects V.
[Proof] Note that f_{l_1,l_2} intersects V if and only if min(f_{l_1,l_2}) ≤ 0 ≤ max(f_{l_1,l_2}) on V. Every point in the line can
be expressed as (e·l_1^0, e·l_2^0) for some real e, and f_{e·l_1^0, e·l_2^0} = e·f_{l_1^0,l_2^0}. If e ≥ 0 then v_min for f_{e·l_1^0, e·l_2^0}
is the same as v_min for f_{l_1^0,l_2^0}, and we have min(f_{e·l_1^0, e·l_2^0}) = e·min(f_{l_1^0,l_2^0}) ≤ 0. If e < 0 then v_min for f_{e·l_1^0, e·l_2^0}
in V is the same as v_max for f_{l_1^0,l_2^0}, and we have min(f_{e·l_1^0, e·l_2^0}) = e·max(f_{l_1^0,l_2^0}) ≤ 0. In either case, we have min(f_{e·l_1^0, e·l_2^0}) ≤ 0
because of min(f_{l_1^0,l_2^0}) ≤ 0 ≤ max(f_{l_1^0,l_2^0}).
For the same reason, we have max(f_{e·l_1^0, e·l_2^0}) ≥ 0.
Definition Given an equation of the form a·l_1 + b·l_2 = 0, where a and b are not zero simultaneously, we define a canonical
solution of the equation as follows:
  (l_1, l_2) = (1, 0) if a = 0;
  (l_1, l_2) = (0, 1) if b = 0;
  (l_1, l_2) = (b, −a) if neither of a, b is zero and b > 0;
  (l_1, l_2) = (−b, a) if neither of a, b is zero and b < 0.
Definition The L set is defined to be the set of all canonical solutions to the y-equations and f-equations. The hyperplane
in R^n corresponding to f_{l_1,l_2} = 0, where (l_1, l_2) is a canonical solution in L, is called a l-plane. Obviously,
the size of the L set is at most n if V is defined by loop bounds only, and is at most 3n/2 if V is defined by
loop bounds as well as dependence directions.
Theorem 2 S intersects V if and only if every l-plane intersects V. If V is defined by loop bounds only, then there
are no more than n l-planes. If V is defined by loop bounds as well as dependence directions, then there are no more
than 3n/2 l-planes.
[Proof] From Lemmas 1-3 and the definition of l-planes. Theorem 2 provides a foundation for the l-test in the
case of m = 2. The test examines the subscripts from the two coupled dimensions, then determines the L set from the
f-equations and the y-equations. Each element of L determines a l-plane. Each l-plane is tested to see if it intersects
V, by checking its minimum and maximum values as done in the Banerjee-Wolfe test on each single dimension. If
any l-plane does not intersect V, then there is no data dependence. If every l-plane intersects V, then data dependence
should be assumed, unless further tests on integer solutions are to be performed. For the sake of efficiency,
computation of the L set and the test on l-planes are performed alternately, i.e., a new element in L is computed only
after it has been tested that the previous l-plane intersects with V. Obviously, repeated canonical solutions can be
ignored. We illustrate the l-test by applying it to Examples 2.1 and 2.2 in Section 2. In Example 2.1, no dependence
directions are given. We only have y-equations, from which the L set is easily determined; one of its canonical solutions,
(3, -2), determines a l-plane which is exactly the linear combination we used to
show the absence of data dependence in Example 2.1. In Example 2.2, the y-equations have canonical solutions that
each correspond to an original hyperplane in one of the dimensions. The f-equations are of the form l_1 - l_2 = 0, whose
canonical solutions are both (1, 1). This gives a l-plane which does
not intersect V. In practice, the original hyperplanes f_1 and f_2 are usually among the l-planes to be tested. Due to the
regularity of coefficients in subscripts, it is extremely rare that more than one l-plane besides f_1 and f_2 needs to
be tested. In our experiments, the l-test takes less than twice as much time as needed for a dimension-by-dimension
test in most programs, and the total increase in compile time is very insignificant. Hence, the l-test is quite efficient.
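Putting the pieces together for the two-dimensional case with loop bounds only, a compact sketch of the l-test might look as follows. This Python illustration makes simplifying assumptions (no dependence directions) and reuses the hypothetical equations from the earlier brute-force example rather than the paper's Example 2.1:

```python
def lambda_test_2d(a1, c1, a2, c2, bounds):
    """Sketch of the 2-D l-test with loop bounds only: f1: a1 . v = c1 and
    f2: a2 . v = c2 are the two coupled dimensions; bounds[i] = (L_i, U_i).

    Returns False as soon as some l-plane misses the bounding box (no real
    solution, hence no dependence); returns True otherwise (dependence assumed).
    """
    def intersects(coeffs, const):
        lo = sum(c * (b[0] if c > 0 else b[1]) for c, b in zip(coeffs, bounds))
        hi = sum(c * (b[1] if c > 0 else b[0]) for c, b in zip(coeffs, bounds))
        return lo <= const <= hi

    def canonical(a, b):
        # canonical solution of a*l1 + b*l2 = 0, as defined above
        if a == 0:
            return (1, 0)
        if b == 0:
            return (0, 1)
        return (b, -a) if b > 0 else (-b, a)

    # The L set: one canonical solution per y-equation  l1*a1[i] + l2*a2[i] = 0.
    L = {canonical(a1[i], a2[i]) for i in range(len(a1))
         if (a1[i], a2[i]) != (0, 0)}
    for l1, l2 in L:
        coeffs = [l1 * x + l2 * y for x, y in zip(a1, a2)]
        if not intersects(coeffs, l1 * c1 + l2 * c2):
            return False          # some l-plane misses V: independent
    return True                   # every l-plane meets V: dependence assumed

# Hypothetical pair used earlier: 3*x1 + x3 = 20 and 2*x1 - x4 = -5, 1 <= x_i <= 10.
print(lambda_test_2d((3, 0, 1, 0), 20, (2, 0, 0, -1), -5, [(1, 10)] * 4))  # False
```

For the hypothetical pair, the canonical solution (2, -3) produced for the shared index corresponds to the combination 2*f1 - 3*f2, i.e. 2*x3 + 3*x4 = 55, which cannot be met within the bounds — exactly the kind of "viewing angle" the l-test is designed to find.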
3.4 The case of m > 2 To generalize the l-test, we consider m equations in (3.2) with m > 2. All m equations are
assumed to be connected; otherwise they can be partitioned into smaller systems. This case is more of theoretical
interest than of practical concern, since it is rare in real programs to have more than two coupled dimensions. As
stated before, we can assume that there are no redundant equations. An arbitrary linear combination of the m equations
can be written as

  f_{l_1, ..., l_m} = l_1 f_1 + l_2 f_2 + ... + l_m f_m
                    = (l_1 a_1^(1) + ... + l_m a_m^(1)) v^(1) + ... + (l_1 a_1^(n) + ... + l_m a_m^(n)) v^(n) − (l_1 c_1 + ... + l_m c_m).

It is to be determined whether f_{l_1, ..., l_m} intersects V in R^n space for arbitrary (l_1, l_2, ..., l_m). The coefficient
of each v^(i) in f_{l_1, ..., l_m} is a linear function of (l_1, l_2, ..., l_m) in R^m, which is y^(i)(l_1, ..., l_m) = l_1 a_1^(i) + l_2 a_2^(i) + ... + l_m a_m^(i).
Definition The equation y (i called a y-equation. A y-equation corresponds to a hyperplane in R m ,
called a y-plane.
Each y-plane is the boundary of two half-spaces defined as follows. Y i
be the sum of the coefficients of v (i ) and v (j ) in f l 1
,l 2,
, where v (i ) and v (j ) are related by a dependence
direction, i. e., f (i j
(j
(j
(j
Definition The equation f (i called a f-equation. A f-equation corresponds to a hyperplane in R m
which is called a f-plane.
Each f-plane is the boundary of two half-spaces defined as follows. F i
Definition If V is defined by loop bounds only, then a nonempty set
- }, is called a l-
-region.
Definition If V is defined by loop bounds as well as dependence directions, then a nonempty set (
- }, is called a l-region. Note that the intersection of F i , j is taken for
all pairs of index variables which are related by a dependence direction.
Lemma 4 Every l-region is a cone in R m space. The l-regions in R m space have several lines as the frame of their
boundaries. Each line (called a l-line) is the intersection of some y-planes and f-planes.
Lemma 5 If f l 1
,l 2,
for every (l 1 , l 2 , ., l m ) in every l-line, then f l 1
,l 2,
for every (l 1 , l 2 , ., l m ) in R m .
[Proof] Follow the same argument as for Lemmas 1 and 2. Note that every point in a cone can be expressed as a
linear combination of some points in the cone's 1-dimensional boundaries.
Lemma 6 Given a line in R m which crosses the origin of the coordinates, if f l 1
,l 2,
fixed (l 1
in the line, then for every (l 1 , l 2 , ., l m ) in the line, f l 1
,l 2,
V.
[Proof] Follow the same argument as for Lemma 3.
Theorem 3 There is a finite set of hyperplanes in R n such that S intersects V if and only if every hyperplane in the
set intersects V. If V is defined by loop bounds only, then there are no more than
hyperplanes in the set. If
V is defined by loop bounds as well as dependence directions, then there are no more than
hyperplanes in the
set.
[Proof] See Appendix. Lemmas 4 through 6 and Theorem 3 provide a foundation for the l-test in the case of m -
3. Obviously, Theorem 3 is also true for the case of 2. We do not go into the detail of the l-test in the general
case, since the discussion is very similar to the case of 2. Compared with the theoretical time complexity of
methods based on inequality consistency checking, the l-test is clearly much faster, especially for small m.
4. Experimental Results and Conclusion
4.1 Experimental results We have implemented the l-test in a program parallelization
restructurer, Parafrase [10], [9], [15]. Almost all well-known dimension-by-dimension data dependence
test algorithms can be found in Parafrase. We added the l-test and performed experiments
on a numerical package, EISPACK [18]. The package has 56 subroutines, 31 of which were
found to have coupled subscripts. Among these 31 subroutines, 25 had their data dependence
analysis improved by the l-test. In our experiments, parallelization of EISPACK subroutines
required examination of 72,697 pairs of array references. Array dimensions ranged from one to
three. Some involved the same pair of array references with different dependence directions. The
dimension-by-dimension algorithms in Parafrase found 30,973 cases that had no data dependence.
In many other cases, the algorithms made a conservative assumption that dependences existed.
We then checked whether coupled subscripts were present. If so, the l-test was applied. The l-
test found an additional 3,214 cases that had no data dependence. So we had an improvement
rate of 10.4%. The improvement rate can be affected by two factors. First, the frequency of coupled
subscripts. Second, the "success rate" of the l-test, by which we mean how often a l-test
detects a case where there is no data dependence. In our experiment, coupled subscripts were
found in 8943 cases where the dimension-by-dimension algorithms assumed data dependences.
3214 of them were found to have no data dependence, so the "success rate" was 36%. Table 1
shows a rough breakdown of an improvement rate for various subroutines. For instance, the first
row shows that there are 7 subroutines with the improvement rate between 20% to 40% in each
subroutine. In our experiments, l-planes always included the hyperplane from each dimension of
an array reference. Note that the dimension-by-dimension algorithms had already tested these
hyperplanes. The l-tests examined a total number of 8971 additional l-planes in our experiment.
That is, almost every l-test had examined only one additional l-plane. In light of this fact, the
additional time needed by a l-test is very small. Timing results are shown in Table 2. Each row
shows how much additional time was needed in the l-tests. For instance, the first row shows that
there were 12 subroutines in which the l-tests consumed no more than 20% additional time. For
most of the subroutines (53 out of 56), the l-tests never need more than 100% additional time.
Additional time was spent mostly on (1) finding coupled dimensions, (2) calculating l-values, and
(3) examining each l-plane.
4.2 Conclusion The l-test is a new algorithm that can improve data dependence analysis
significantly when there are coupled subscripts in multi-dimensional array references. It achieves
the same testing precision as methods based on inequality consistency checking. However, its
testing time is much less, especially for small coupled dimensions which occur most frequently.
Therefore, it seems to be a promising practical scheme to overcome many difficulties in existing
data dependence analysis methods. As in the methods based on inequality consistency checking,
the l-test can only determine whether real-valued solutions exist. However, in most practical
cases, it is a very good approximation. [14] found that in many common cases the existence of
real-valued solutions and the unconstrained integer solutions, i. e., integer solutions without considering
loop bounds as well as dependence directions [5], [14], can guarantee the existence of
integer solutions that satisfy the loop bounds as well as dependence directions. There is still
much work to be done in data dependence analysis. A general and efficient way to obtain integer
solutions is very desirable. We hope the proposed l-test has moved one step closer toward that
goal.
--R
"Dependence Analysis for Subscripted Variables And Its Application to Program Transformations,"
"Automatic Translation of Fortran Programs to Vector Form,"
"Data Dependence in Ordinary Programs,"
"Speedup of Ordinary Programs,"
Dependence Analysis for Supercomputing
"Automatic discovery of linear restraints among variables of a pro- gram,"
Linear Programming and Extensions
"A new polynomial-time algorithm for linear programming,"
"The structure of an advanced vectorizer for pipelined processors,"
"Automatic program restructuring for high-speed computations,"
"Optimization And Interconnection Complexity for: Parallel Processors, Single-Stage Networks, And Decision Trees,"
"Intraprocedural and Interprocedural Data Dependence Analysis for Parallel Computing,"
"Introducing symbolic problem solving techniques in the dependence testing phase of a vectorizer,"
Practical methods for exact data dependence analysis
"High-speed multiprocessors and compilation techniques,"
"An empirical study on array subscripts and data dependences,"
"Deciding linear inequalities by computing loop residues,"
Matrix Eigensystem Routines - Eispack Guide
"Interprocedural Analysis for Program Restructuring with Parafrase,"
"Direct parallelization of CALL statements,"
"Dependence of multi-dimensional array references,"
"Optimizing Supercompilers for Supercomputers,"
--TR
Direct parallelization of call statements
Introducing symbolic problem solving techniques in the dependence testing phases of a vectorizer
Dependence of multi-dimensional array references
Some results on exact data dependence analysis
Linear programming 1
Deciding Linear Inequalities by Computing Loop Residues
Automatic discovery of linear restraints among variables of a program
Dependence Analysis for Supercomputing
Automatic program restructuring for high-speed computation
A new polynomial-time algorithm for linear programming
Speedup of ordinary programs
Optimization and interconnection complexity for
Dependence analysis for subscripted variables and its application to program transformations
Optimizing supercompilers for supercomputers
Intraprocedural and interprocedural data dependence analysis for parallel computing
--CTR
Weng-Long Chang , Chih-Ping Chu, The infinity Lambda test, Proceedings of the 12th international conference on Supercomputing, p.196-203, July 1998, Melbourne, Australia
Jay P. Hoeflinger , Yunheung Paek , Kwang Yi, Unified Interprocedural Parallelism Detection, International Journal of Parallel Programming, v.29 n.2, p.185-215, April 2001
Jingke Li , Michael Wolfe, Defining, Analyzing, and Transforming Program Constructs, IEEE Parallel & Distributed Technology: Systems & Technology, v.2 n.1, p.32-39, March 1994
S. Sharma , C.-H. Huang , P. Sadayappan, On data dependence analysis for compiling programs on distributed-memory machines (extended abstract), ACM SIGPLAN Notices, v.28 n.1, p.13-16, Jan. 1993
Jan-Jan Wu, An Interleaving Transformation for Parallelizing Reductions for Distributed-Memory Parallel Machines, The Journal of Supercomputing, v.15 n.3, p.321-339, Mar.1.2000
Dror E. Maydan , John L. Hennessy , Monica S. Lam, Efficient and exact data dependence analysis, ACM SIGPLAN Notices, v.26 n.6, p.1-14, June 1991
T. H. Tzen , L. M. Ni, Dependence Uniformization: A Loop Parallelization Technique, IEEE Transactions on Parallel and Distributed Systems, v.4 n.5, p.547-558, May 1993
Lee-Chung Lu , Marina C. Chen, Subdomain dependence test for massive parallelism, Proceedings of the 1990 conference on Supercomputing, p.962-972, October 1990, New York, New York, United States
Lee-Chung Lu , Marina C. Chen, Subdomain dependence test for massive parallelism, Proceedings of the 1990 ACM/IEEE conference on Supercomputing, p.962-972, November 12-16, 1990, New York, New York
Michael Wolfe, Experiences with data dependence abstractions, Proceedings of the 5th international conference on Supercomputing, p.321-329, June 17-21, 1991, Cologne, West Germany
Z. Shen , Z. Li , P. C. Yew, An Empirical Study of Fortran Programs for Parallelizing Compilers, IEEE Transactions on Parallel and Distributed Systems, v.1 n.3, p.356-364, July 1990
Yu-Kwong Kwok , Ishfaq Ahmad, FASTEST: A Practical Low-Complexity Algorithm for Compile-Time Assignment of Parallel Programs to Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.10 n.2, p.147-159, February 1999
Weng-Long Chang , Chih-Ping Chu , Jia-Hwa Wu, A Polynomial-Time Dependence Test for Determining Integer-Valued Solutions in Multi-Dimensional Arrays Under Variable Bounds, The Journal of Supercomputing, v.31 n.2, p.111-135, December 2004
Yunheung Paek , Jay Hoeflinger , David Padua, Simplification of array access patterns for compiler optimizations, ACM SIGPLAN Notices, v.33 n.5, p.60-71, May 1998
M. Wolfe , C. W. Tseng, The Power Test for Data Dependence, IEEE Transactions on Parallel and Distributed Systems, v.3 n.5, p.591-601, September 1992
Junjie Gu , Zhiyuan Li , Gyungho Lee, Symbolic array dataflow analysis for array privatization and program parallelization, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.47-es, December 04-08, 1995, San Diego, California, United States
Minjoong Rim , Rajiv Jain, Valid Transformations: A New Class of Loop Transformations for High-Level Synthesis and Pipelined Scheduling Applications, IEEE Transactions on Parallel and Distributed Systems, v.7 n.4, p.399-410, April 1996
David J. Lilja, Cache coherence in large-scale shared-memory multiprocessors: issues and comparisons, ACM Computing Surveys (CSUR), v.25 n.3, p.303-338, Sept. 1993
David F. Bacon , Susan L. Graham , Oliver J. Sharp, Compiler transformations for high-performance computing, ACM Computing Surveys (CSUR), v.26 n.4, p.345-420, Dec. 1994
Baowen Xu , Ju Qian , Xiaofang Zhang , Zhongqiang Wu , Lin Chen, A brief survey of program slicing, ACM SIGSOFT Software Engineering Notes, v.30 n.2, March 2005 | numerical methods;hyperplanes;parafrase;array subscripts;fortran program parallelization restructurer;linear inequalities;parallelizing compilers;lambda test;multidimensional array references;FORTRAN;index termsprogram restructuring;data dependence analysis;loop bounds;parallel programming;program compilers;convex set |
629004 | An Empirical Study of Fortran Programs for Parallelizing Compilers. | Some results are reported from an empirical study of program characteristics, that are important in parallelizing compiler writers, especially in the area of data dependence analysis and program transformations. The state of the art in data dependence analysis and some parallel execution techniques are examined. The major findings are included. Many subscripts contain symbolic terms with unknown values. A few methods of determining their values at compile time are evaluated. Array references with coupled subscripts appear quite frequently; these subscripts must be handled simultaneously in a dependence test, rather than being handled separately as in current test algorithms. Nonzero coefficients of loop indexes in most subscripts are found to be simple: they are either 1 or -1. This allows an exact real-valued test to be as accurate as an exact integer-valued test for one-dimensional or two-dimensional arrays. Dependencies with uncertain distance are found to be rather common, and one of the main reasons is the frequent appearance of symbolic terms with unknown values. | Introduction
The key to the success of a parallelizing compiler is to have accurate data dependence information on all of the statements in a program. We would like to identify all of the independent variable references and statements in a program, so they can be executed independently (i.e. in parallel). Several algorithms have been proposed and used quite successfully in many parallelizing compilers [1], [2], [3], [4], [28], [11]. Nonetheless, their ability is still limited to relatively simple subscripts. This paper identifies three factors that could potentially weaken the results of current algorithms: (1) symbolic terms with unknown values; (2) coupled subscripts; (3) nonzero and non-unity coefficients of loop indices. We discuss the effects of these factors and present some measured results on real programs. We also report some characteristics of data dependences found in real programs. The state of the art in data dependence analysis and various parallel execution techniques can be examined in light of such information. The information can also help to indicate the direction of further improvement in those areas. We begin with a brief review of some basic concepts in data dependence analysis and their effects on parallel execution of programs.
2. Data Dependences
There are three types of data dependences [16]. If a statement, S1, uses the result of another statement, S2, then S1 is flow dependent on S2. If S1 can store its result only after S2 fetches the old data stored in that location, then S1 is antidependent on S2. If S1 overwrites the result of S2, then S1 is output dependent on S2. Data dependences dictate execution precedence among statements. The following DO loop is an example.
Example 2.1
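The loop below is a sketch consistent with the dependences discussed next; the array names and right-hand sides are illustrative assumptions rather than the paper's original statements. S1, S2 and S3 denote the three assignments, in order.

      DO 10 I = 3, N
         A(I)   = B(I-1) + C(I)
         B(I)   = D(I) * 2.0
         B(I-2) = E(I)
   10 CONTINUE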
In this loop, S1 is flow dependent on S2 because it reads the result of S2 (from the previous iteration). Due to the dependence, the execution of S1 in iteration I must follow the execution of S2 in iteration I-1. S3 is antidependent on S1, and the execution of S3 in iteration I must follow the execution of S1 in iteration I-1. S3 is output dependent on S2, and the execution of S3 in iteration I must follow the execution of S2 in iteration I-2. Execution precedence may also be affected by control dependence. For example, an IF statement decides which branch to take. Hence, the statements in the branches cannot be executed before the decision is made. Control dependence is not studied in this paper.
In order to speed up program execution on a parallel machine, a parallelizing compiler can be used to discover independent statements which can be executed in parallel. DO loops are usually the most important source of such parallelism because they usually contain most of the computation in a program. If there are no dependences among the statements in a DO loop, or the dependences are restricted within the iteration boundaries, different iterations of the loop can be executed concurrently.
Example 2.2
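A sketch of such a loop; the statements are illustrative assumptions rather than the paper's original listing. S1 is the assignment to A and S2 the assignment to D.

      DO 10 I = 1, N
         A(I) = B(I) + C(I)
         D(I) = A(I) * 2.0
   10 CONTINUE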
In the above example, although S2 is flow dependent on S1, the dependence is restricted within each iteration (i.e. there are no cross-iteration dependences). Therefore all of the iterations in the loop can be executed in parallel. This example shows that a parallelizing compiler not only needs to determine whether data dependence exists, but also needs to analyze whether such dependence prohibits loop parallelization. Many transformation techniques (e.g. loop interchange [28] and the detection of Doacross loops [9]) require even more information about dependences, such as dependence distances and dependence direction vectors.
If a data dependence occurs across several iterations of a loop, the distance is called its dependence distance (with respect to that loop). All of the data dependences in Example 2.1 have constant distances. For instance, the output dependence between S2 and S3 has a distance of 2. If a dependence occurs within the same iteration, the dependence distance is 0. Note that statements may be nested in a number of loops. Their dependences may have different distances with respect to different loops.
Example 2.3
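The nested loop below is a sketch consistent with the distances discussed next; the statements are illustrative assumptions. S1 is the assignment to A and S2 the assignment to C.

      DO 10 I = 2, N
         DO 20 J = 1, M-1
            A(I, J) = B(I, J)
            C(I, J) = A(I-1, J+1)
   20    CONTINUE
   10 CONTINUE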
In the example above, the flow dependence between S1 and S2 has a distance of 1 with respect to the I loop and a distance of -1 with respect to the J loop. A data dependence distance may not always be constant. Consider the following example.
Example 2.4
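A sketch of a loop whose dependence distance varies; the statements are illustrative assumptions. S1 is the assignment to A and S2 the assignment to C.

      DO 10 I = 1, N
         A(I) = B(I) * 2.0
         DO 20 K = 1, I
            C(I, K) = A(K) + D(K)
   20    CONTINUE
   10 CONTINUE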
The dependence between S1 and S2 has a variable distance. If we use S1<i> to denote the instance of S1 in iteration i of the I loop and use S2<i,k> to denote the instance of S2 in iteration i of the I loop and iteration k of the K loop, then S1<1> should be executed before S2<1,1>, S2<2,1>, ..., S2<N,1>; S1<2> should be executed before S2<2,2>, S2<3,2>, ..., S2<N,2>, and so on. We shall give more examples of variable dependence distances in Section 4.2.
A dependence direction vector [28] contains several elements, each corresponding to one of the enclosing loops. Each element of a dependence direction vector is called a dependence direction. To simplify the discussion, we take as an example a nest of two loops, where the outer loop has index variable I and the inner loop has index variable J. Suppose that, as the result of a data dependence between statements S1 and S2, the execution of S1<i1,j1> must precede that of S2<i2,j2>. The dependence direction for the J loop is "<", "=", or ">" depending on whether j1 < j2, j1 = j2, or j1 > j2, respectively. The dependence direction for the I loop is determined similarly. Note, however, that since the I loop is the outermost loop, its dependence direction cannot be ">". In Example 2.3, the flow dependence between S1 and S2 has a dependence direction vector (<, >). In Example 2.4, the flow dependence between S1 and S2 has a dependence direction vector (<) and a dependence direction vector (=). The two vectors can sometimes be combined, written as (<=).
Obviously, dependence direction vectors can be used to describe general data dependences, although they are not as precise as dependence distances. For many important loop parallelization and transformation techniques, dependence direction vectors usually provide sufficient information. Nonetheless, dependence distances are important to techniques such as data synchronization [25], [29], loop partitioning [21], [22], [24], processor allocation [9], [23], and processor self-scheduling [12], [26]. As a matter of fact, most data synchronization and loop partitioning schemes assume subscripts to have a simple form of i+c, where i is a loop index and c is a constant. Further, data dependences are assumed to have constant distances. If dependences do not have constant distances, existing schemes either fail or suffer from loss of run-time efficiency.
2.2 The Experiment
This empirical study evaluates the complexity of array subscripts and data dependences in real programs. Our measurements are done on a dozen Fortran numerical packages (Table 1) which have a total of more than a thousand routines and over a hundred thousand lines of code. This sampling is a mix of library packages (Linpack, Eispack, Itpack, MSL, Fishpak) and working programs (SPICE, SMPL, etc.). Library packages are important because their routines are called very frequently in users' programs for scientific and engineering computing. On the other hand, the working programs may better reflect the array reference behavior in user-written programs. Further study may be needed to distinguish array referencing behavior in library routines and in working programs. Our code for measurement is embedded in Parafrase [13], which is a restructuring compiler developed at the Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign.
Our study differs from previous related works in that we directly measure the variable references and data dependences, whereas previous works (e.g. [22], [25], [15]) focused on counting the number of statements that can be executed in parallel. Our work is a preliminary attempt to examine closely the effects of array subscript patterns on some important techniques used at compile time and run time for efficient parallel execution. We have yet to relate our results to previous results, which were mostly at higher levels, e.g. the statement level. The relationship between those different levels is certainly an important subject for further study.
Our analysis is presented as follows. In Section 3, we examine the form of array subscripts. We cover three factors that can affect data dependence analysis: linearity, coupled subscripts, and coefficients of loop indices. In Section 4, we show the effectiveness of several well-known data dependence test algorithms. To get some idea of how often a pair of array references are detected to be independent by these algorithms, we recorded the number of independent array reference pairs detected by each algorithm. We also report statistics on data dependence distances. Finally, we make some concluding remarks in Section 5.
3. Subscripts in Array References
3.1 Linearity of array subscripts
Consider an m-dimensional array reference in a loop nesting with n loops indexed by I1, I2, ..., In. Normally the reference has the following form:

      A(Exp1, Exp2, ..., Expm)

where Expi is a subscript expression, 1 <= i <= m. A subscript expression has the following form:

      Expi = a1*I1 + a2*I2 + ... + an*In + b

where Ij is an index variable, 1 <= j <= n, aj is the coefficient of Ij, 1 <= j <= n, and b is the remaining part of the subscript expression that does not contain any index variables. Note that, following the convention in mathematics, b may be called the constant term. However, in a nest of loops, it is possible that b may contain some unknown variables that are updated within the loop. For convenience, we call b the zeroth term.
The above subscript expression is linear if all of its coefficients and its zeroth term are integer constants. Otherwise, it is nonlinear, because the subscript expression may behave like a nonlinear function with respect to the loop indices. If all of the subscript expressions in a reference are linear, we say that the reference is linear. If some of the subscript expressions are nonlinear, the reference is partially linear. Finally, if none of the expressions is linear, the reference is nonlinear.
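For instance, in the sketch below (the variable names are illustrative assumptions, with M an unknown scalar and IPVT an integer array), the reference to X is linear, the reference to Y is partially linear, and the reference to Z is nonlinear.

      DO 10 I = 1, N
C        linear: coefficient 2, zeroth term 1
         X(2*I + 1) = 0.0
C        partially linear: the second subscript contains the
C        unknown variable M and is therefore nonlinear
         Y(I, M*I) = 1.0
C        nonlinear: the subscript is an array element
         Z(IPVT(I)) = 2.0
   10 CONTINUE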
Virtually all current algorithms for data dependence tests operate on linear subscript expressions. Recently, some symbolic manipulation schemes have been proposed for partially linear and nonlinear cases [18]. Several restructuring techniques can also be used to transform nonlinear subscripts into linear ones. The most notable ones are expression forward substitution, induction variable substitution, and constant folding (e.g., see [2], [14]).
Table 2 gives an overview of the linearity of array subscripts (in the 12 numerical packages) after transformation by those techniques. We count only the references in loops.
From Table 2, we can see that only 8% of the array references have more than two dimensions (in column 2), and also that 53% of the references are linear (in column 3), 13% are partially linear (in column 4), and 34% are nonlinear (column 5).
The major reason for a subscript expression to be nonlinear is that it contains an unknown variable (i.e., a non-index variable with an unknown value) or an array element. Between the two, Table 3 shows that unknown variables are the major cause.
We found that quite a number of unknown variables are dummy parameters of subroutines or are related to dummy parameters. Some of them can assume a fixed value at run time. The value is normally set by a user before the program is run and is transferred from the main program to the subroutines. These variables usually specify the size of a matrix, the number of diagonals, and the number of vectors to be transformed. It has been a common practice to include user assertions to make the value of such parameters known to the compiler [8], [15]. As a matter of fact, the value of such parameters usually does not affect the outcome of a data dependence test. Often, providing the value mainly helps the data dependence test algorithms to eliminate the same symbolic terms in the subscripts and the loop bounds. Unknown symbolic terms may also be eliminated through interprocedural constant propagation [8]. Further, if data dependence tests could be extended to allow symbolic terms, fixing the value would be unnecessary. For example, [1] presented suggestions about how to handle symbolic terms. We did not use interprocedural constant propagation because we analyzed procedures separately and did not write driver programs to call subroutines in the packages. We only examined the effect of user assertions on the linearity of subscripts. Nonetheless, since user assertions often provide the same results as interprocedural constant propagation, our result partly reflects the effect of interprocedural constant propagation. It is too time consuming to provide user assertions to more than one thousand routines. Instead, six packages were chosen and analyzed with user assertions. These packages are: Linpack, Eispack, Nasa, Baro, Itpack, and Old. Linpack and Eispack were chosen because user assertions were available from a previous experiment [15]. The rest of the packages were chosen randomly.
Table 4 shows some details of the study. Without the help of user assertions, 7% of the one-dimensional array references and 45% of the two-dimensional array references were nonlinear. Using user assertions, only 28% of the one-dimensional array references and 15% of the two-dimensional array references remained nonlinear. Without user assertions, 27% of the two-dimensional array references were partially linear. Using user assertions, 25% of the two-dimensional array references were partially linear. Table 4 also shows the number of unknown variables found, including those in partially linear references.
For the remaining unknown symbolic terms, we examined the causes. One is that many nonlinear subscripts showed up in loops with subroutine calls or external function statements. These nonlinear subscripts cannot be transformed into linear subscripts using simple forward substitution techniques, unless the call effects of these statements are determined by interprocedural analysis. We studied 35 real-valued subroutines in Linpack (there are another 35 complex-valued subroutines in Linpack which are almost identical [10]) using summary USE and MOD information to expose call effects [6]. The results on Linpack, which are given in Table 5, suggest that nonlinear subscripts can be reduced considerably by using interprocedural analysis. Note that the number of unknown variables includes those in partially linear references.
Besides unknown symbolic terms, the next most common reason for nonlinear subscripts is the presence of an array index which is indirectly an element of an array. The following example is from Linpack, where IPVT(*) is an integer vector of pivot indices.
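The loop below paraphrases the kind of Linpack statement being described rather than quoting it; the use of Z and IPVT follows the text, but the exact statements are illustrative assumptions. Because the subscript IPVT(K) is itself an array element, the references to Z are nonlinear.

      DO 10 K = 1, N
C        the pivot index is read from an integer array
         T          = Z(IPVT(K))
         Z(IPVT(K)) = Z(K)
         Z(K)       = T
   10 CONTINUE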
There are other various minor reasons for nonlinear subscripts which are omitted here
due to limited space.
3.2 Coupled subscripts
In this section, we study a phenomenon called coupled subscripts, which demonstrates a weakness of current data dependence tests.
To test data dependence between a pair of array references, ideally all array dimensions should be considered simultaneously. However, most current algorithms test each dimension separately, because a single-dimension test is by far easier. Fortunately, this often suffices for discovering data independence. However, in cases where data independence could not be proven by testing each dimension separately, a data dependence has to be assumed. The main reason for those cases is coupled subscripts, in which a loop index appears in more than one dimension. The following simple example is from Eispack:
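The loop below is a sketch of the kind of Eispack reference pair being described; the subscripts are illustrative assumptions, not the original Eispack statement. Tested one dimension at a time, each dimension admits a solution, but the two dimensions together require I1 = I2 - 1 and I1 = I2 + 1 simultaneously, which is impossible.

      DO 10 I = 2, N
C        the loop index I appears in both dimensions of each reference
         RM1(I, I-1) = RM1(I-1, I) + 1.0
   10 CONTINUE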
In this example, there is no data dependence between the two references to RM1. But this could be detected only when both dimensions are considered simultaneously.
Of course, to measure how often coupled subscripts actually hurt a single-dimension data dependence test requires testing all dimensions simultaneously to find genuine data dependences. A few methods are known to be very time consuming. Recently [19] proposed a new algorithm which is quite efficient. Using the new test on Eispack routines, data independence detection has improved by 10% over using single-dimension tests. [7] discussed a different approach, called linearization, to dealing with coupled subscripts.
Here we only measure how often coupled subscripts occur in programs. As we shall discuss later, coupled subscripts are also a common cause for a dependence to have a non-constant distance. We examined all pairs of multidimensional array references (in the 12 packages) that need to be tested for data dependence. Aliasing effects were ignored, so each reference pair is to one array. We did not find four- or five-dimensional array reference pairs to have coupled subscripts. Table 6 shows the number of two- and three-dimensional array reference pairs which have linear or partially linear subscripts. Table 7 shows that in 9257 pairs of two-dimensional array references that are linear or partially linear, 4105 (44%) of them have coupled subscripts.
3.3 Coefficients of loop indices
A data dependence exists only when there are integer solutions which satisfy loop bounds and other constraints. However, it is very time consuming to obtain integer solutions in general. Existing algorithms either check integer solutions without considering loop bounds or only check real-valued solutions one dimension at a time (e.g. see [1], [3], [28]). By doing so, the test can be more efficient, although less effective. Here we give a brief account of two tests that represent the two approaches.
The GCD test
The GCD test is an integer test that ignores loop bounds. It is based on the well-known fact that if a Diophantine equation has solutions, then the greatest common divisor (GCD) of its coefficients must divide its constant term.
Example 3.1
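A loop consistent with the GCD argument below; the subscripts are illustrative assumptions chosen so that equating them gives 2*I1 - 2*I2 = 101. S1 is the assignment to A and S2 the assignment to C.

      DO 10 I = 1, 100
         A(2*I - 1) = B(I)
         C(I)       = A(2*I + 100)
   10 CONTINUE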
For data dependences to exist between S1 and S2 due to the two references to A, the subscript of A referenced in S1 (for some values of the index variables) should be equal to that in S2 (for some other values of the index variables). Hence we can derive a Diophantine equation by equating the subscripts. The GCD of the coefficients of the variable terms is 2, which does not divide the constant term 101. Therefore the equation does not have solutions and there is no data dependence between S1 and S2 due to the A references.
Banerjee-Wolfe test
As with the GCD test, the Banerjee-Wolfe test first establishes a Diophantine equation to equate the subscripts in two tested array references. However, the test treats the Diophantine equation as a real-valued equation whose domain is a convex set defined by constant loop bounds and dependence directions. According to the well-known intermediate value theorem in real analysis, the real-valued equation has solutions over the given domain if and only if the minimum of the left-hand side is no greater than zero and the maximum no smaller than zero.
Example 3.2
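A loop consistent with the bound computed below; the subscripts are illustrative assumptions. S1 is the assignment to A and S2 the assignment to C. Equating the two subscripts of A gives I1 - I2 - 139 = 0, and over the loop bounds the left-hand side can be at most 30 - 1 - 139 = -110.

      DO 10 I = 1, 30
         A(I) = B(I)
         C(I) = A(I + 139)
   10 CONTINUE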
The Diophantine equation for the above example is obtained by equating the two subscripts of A. Treated as a real function on the domain given by the loop bounds (1 to 30), the left-hand side of the equation has a maximum of -110. Therefore the equation has no solutions and there is no data dependence between S1 and S2 due to the A references.
The Banerjee-Wolfe test was first presented in [3]. [28] produced a new version which includes dependence directions in the function's domain. [1] used another version which determines dependence levels instead of dependence directions.
A test based only on real values is not an exact test in general. Nonetheless, [3] showed that, in a pair of single-dimensional arrays, if all of the nonzero coefficients of loop indices are either 1 or -1, then a data dependence exists if and only if there are real solutions to the system derived from their subscript expressions. Obviously, for multidimensional array reference pairs which do not have coupled subscripts (cf. Section 3.2), each dimension is independent of the others, so the conclusion will apply. [19] showed that the conclusion could also apply to two-dimensional array references with coupled subscripts. For array references with more than two dimensions, although the conclusion no longer applies in general, small coefficients do make the test for integer solutions much easier. For these reasons, we are interested in the magnitude of coefficients in array references.
The data in Table 8 are from array references (in the 12 packages) which are linear or partially linear. Notice that the percentage shown there is on a dimension-by-dimension basis. The first column shows the percentage of references which have constant subscripts. The second column shows the percentage of references that have nonzero coefficients, but they are either 1 or -1. The third column shows the percentage of references in which some coefficients are greater than 1. The percentage of the third case is very small. This result suggests that for single-dimensional references which are linear or partially linear, real-valued solutions suffice in most cases.
We also checked the coefficients for array reference pairs with coupled subscripts. As expected, we found that among 4,105 pairs of two-dimensional array references with coupled subscripts, most (3,997 pairs, i.e. 97%) have all of their coefficients being 1 or -1. For three-dimensional array reference pairs with coupled subscripts, all pairs (100%) have coefficients of 1 or -1. (Note that, in the single-dimension cases, we did not make the same examination on subscript pairs. Hence, we could not obtain direct results on how often real-valued solutions suffice in single-dimension tests.)
4. Data Dependences and Data Dependence Test Algorithms
We also did some measurements on the frequency of the different data dependence tests being used in Parafrase, the number of array reference pairs found to be independent by each method, and statistics of dependence distances.
4.1 The usage frequency of dependence test methods
As mentioned earlier, different test algorithms have different complexity and capability. In general, more powerful algorithms can handle more general cases with more accuracy, but they usually require more execution time. Hence, these test algorithms are applied hierarchically in Parafrase. Parafrase includes most of the existing single-dimension test algorithms. Simpler and faster tests are applied first. If they can prove neither data independence nor data dependence, other tests are then applied. It is conceivable that the chance for a test to be used can be affected by the arrangement of the test sequence, because the testing task may be accomplished before the test is used. It was unclear to us what test sequence would achieve the best compiler efficiency. We did not alter the one chosen in Parafrase. That sequence is described in the following to facilitate understanding of our statistics.
First we explain the input and output of the tests. Parafrase usually retests data dependences for a pass that needs the dependence information. As a result, the same pair of references may be tested in different passes, undergoing the same test sequence described below. However, different passes may require checking different dependence direction vectors. The input to the tests is a pair of array references, a loop nesting that encloses either of the references, and a dependence direction vector relevant to the current pass. The output is an answer to whether data dependence exists under the constraint of the dependence direction vector. If the answer is uncertain, data dependence is assumed.
The test sequence
(1) If both subscripts in a reference pair are constants, then the Constant Test is performed, which simply compares the two constants. If they are not equal, there is no dependence. Otherwise, dependence is assumed for this dimension and the test proceeds to the next dimension.
(2) If (1) does not apply, then the Root Test is performed. The Root Test is a Banerjee-Wolfe test disregarding the constraint of the given dependence direction vector. If it reports data independence, the test terminates. Otherwise, it proceeds to either test (3), test (4), or test (5), depending on the loop nesting.
(3) If test (2) did not prove data independence and both references are in the same singly nested loop, the Exact Test [4], [28] is performed. In this case, the Diophantine equation derived from the subscripts has at most two unknowns; hence it can be determined exactly whether the dependence exists. If the dependence does not exist, the test terminates. Otherwise, it proceeds to test (5).
(4) If test (2) did not succeed and test (3) does not apply, but each subscript contains at most one loop index with a nonzero coefficient, then the GCD test [4] is performed. If independence is proven, the test terminates; otherwise, it proceeds to test (5). We have three remarks here.
(Remark 1) In test (3), the Exact Test could be extended to determine exactly what dependence directions are possible. However, Parafrase chooses to disregard dependence directions in the Exact Test. Instead, it uses the Theta Test in (5) to examine the given dependence directions.
(Remark 2) The Exact Test could be extended and used in test (4), where both indices may not be the same. But we did not measure the result of this extension.
(Remark 3) The GCD test could be applied to any subscript. However, if there are more than two unknowns in the equation, it is very likely that the common divisor of their coefficients would equal 1, which is not useful for the test (because 1 can divide any number).
(5) If none of (1), (3) and (4) applies, or (3) did not prove independence, then the Theta Test is performed. It is a Banerjee-Wolfe test that uses the given dependence direction vector as a further constraint. Thus it is more accurate than the Root Test in (2). It would either show that the given dependence directions are impossible (in this case, the test terminates), or conclude that some of the directions are possible. In the latter case, if "=" is the only remaining possible direction for every loop, the All Equal Test in (6) is performed. Otherwise, the test proceeds to the next dimension.
(6) The test enters here from (5). At this point, "=" is the only remaining possible direction for every loop. This corresponds to a dependence which crosses no loop iteration. The All Equal Test is performed to see if such a direction vector contradicts the program's control flow. If all possible execution paths from the reference r1 to r2 need to cross an iteration of any loop, then dependence could not exist from r1 to r2 with an "all equal" direction. In other words, independence is proven. Otherwise, dependence is assumed. The test proceeds to the next dimension.
Following the above test steps, we measured the usage frequency and the independence detection rate of the single-dimension tests in Parafrase. Table 9 gives the measured results. These data are obtained by running each program through Parafrase for detecting Doall loops (i.e. the loops without cross-iteration data dependences). The independence detection rate is the rate at which a particular test method detects independence between a reference pair, under the constraints of the given dependence direction vectors. If a method detects data independence before others have been used, success is counted only for this method, even though other methods used next could potentially detect independence as well. From Table 9, some useful observations can be made. Overall, the above test sequence was applied 119,755 times, which is the sum of the usage frequencies of the Constant Test and the Real Root Test. Summing the independences proven by each method together, we have 50,625 independences in total. This represents an overall independence detection rate of 44%. One can also compute the percentage of the independences detected by each test method over the total independences to get an idea of the contribution by each method (in this particular test sequence). It is very important to note that the test sequence was only applied to linear subscripts or partially linear subscripts.
We mentioned earlier that the same pair of references may be tested repeatedly under different constraints of dependence direction vectors and in different passes. All uses of each test method are counted cumulatively, and so are its independence detection successes. They are counted cumulatively because all the uses are required and each success contributes to a certain parallelization technique. Recall, for instance, that even if data dependence exists between two statements, absence of cross-iteration dependence directions (i.e. "<" and ">") would allow parallel execution of different loop iterations.
We point out that the All Equal Test benefits from the Theta Test whenever the latter reduces the original dependence direction vector to one of "all equal" directions. The All Equal Test is a good example of using information of control flow within a loop body to sharpen data dependence analysis. It would be interesting to measure how often independences are detected by the Theta Test and the All Equal Test jointly but not by either one alone.
Our result can certainly be refined by further study. For example, the capability of each test method can be evaluated more precisely by applying it first in the test sequence.
4.2 Data dependence distance
As mentioned earlier, many data synchronization schemes and loop partitioning techniques assume constant dependence distance. Constant dependence distance also makes loop scheduling on Doacross loops more effective. Complicated dependence patterns are very difficult to handle efficiently. Moreover, dependence distances are difficult to determine if subscript patterns are complicated. As a matter of fact, many parallelization techniques (e.g., [22], [24], [27]) which require constant dependence distances assumed the following three conditions for array subscripts and loop nesting: (1) Each reference has subscripts of the form a*i+c, where i is a loop index, and c and a are constants. Note that if more than one loop index appears in a subscript expression, the dependence at the outer loop level is likely to have varied distances. (2) There are no coupled subscripts (cf. Section 3.2). (3) Nonzero coefficients are the same in the same dimension.
Condition (1) can be explained by the following example.
Example 4.1
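A sketch in which condition (1) is violated because the subscript J+I contains two loop indices; the statements are illustrative assumptions. S1 is the assignment to A and S2 the assignment to C.

      DO 10 I = 1, N
         DO 20 J = 1, N
            A(J + I) = B(I, J)
            C(I, J)  = A(J)
   20    CONTINUE
   10 CONTINUE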
For S1 and S2 in the above example, the dependence distance with respect to the I loop is variable. The dependence distance with respect to the J loop is I, which of course remains fixed within an outer loop iteration, but changes when I increments.
Condition (2) can be explained by another example.
Example 4.2
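A sketch in which condition (2) is violated because the loop index J appears in both dimensions of the read of A (coupled subscripts); the statements are illustrative assumptions. S1 is the assignment to A and S2 the assignment to C.

      DO 10 I = 1, N
         DO 20 J = 2, N
            A(I, J) = B(I, J)
            C(I, J) = A(J, J-1)
   20    CONTINUE
   10 CONTINUE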
For S1 and S2 in the above example, the dependence distance with respect to the I loop is variable. The dependence distance with respect to the J loop is 1. But to find this out, both dimensions need to be considered simultaneously.
Example 2.4 in Section 2 explained condition (3).
Although exceptions can be found for each of the three conditions, our tests looked for such simple forms in real programs. If any condition is not satisfied, we did not pursue more sophisticated algorithms to determine whether the distances are constant, but left the distances as uncertain instead. The only exception is when we could determine that "=" is the only possible dependence direction for a loop, in which case the corresponding dependence distance is zero.
We first determine the common nest loops for an array reference pair. Then we measure the dependence distance for each common nest loop. We divide the dependence distances into four classes:
Zero: The dependence does not cross loop iterations. It occurs within an iteration.
Unity: The dependence crosses one iteration (either forward or backward).
Constant: The dependence crosses a constant number (> 1) of iterations (either forward or backward).
Uncertain: The dependence distance is not constant or cannot be decided in our experiment.
Our measurement shows that 73% of the array references with linear or partially linear subscripts have the i+c form. However, not many dependence distances are constant. The results of the dependence distance measurement are presented in Table 10. Note that distance is measured for every loop common to a pair of dependent references, because of its definition and its usage. The main reasons for uncertain distance are (1) loops not common to both references, (2) coupled subscripts, and (3) nonlinear subscripts. Note also that a good symbolic dependence test could help to reduce the number of nonlinear subscripts and hence could reduce the cases of uncertain dependence distance. A study can be pursued further by counting the cases for the different reasons.
5. Conclusion
We presented some measurements critical to data dependence analysis and DO loop parallel execution. We found that quite a few array references are not amenable to current data dependence test methods. Although we do not have the data to show how many of those failed tests really have data dependences, more efficient and more accurate tests are certainly very desirable.
We discovered that a lot of subscripts become nonlinear because of unknown symbolic terms. User assertions and interprocedural analysis can be used effectively to reduce unknown symbolic terms (see Tables 3, 4, 5). A more sophisticated and yet efficient symbolic manipulation scheme could be very useful, since a significant number of nonlinear subscripts still remain (Table 5).
We also discovered that a significant number of reference pairs have coupled subscripts (Table 7), which could cause inaccuracy in the current dependence test algorithms. Efficient algorithms are needed to handle such subscripts.
A welcome result is that an overwhelming majority of nonzero coefficients are either 1 or -1 (Table 8), which allows more efficient real-valued tests to be as accurate as integer-valued tests [5], [19]. It also makes the test on array references with higher dimensions much easier.
We reported the measurements on the usage frequencies and independence detection rates of several well-known data dependence test methods (Table 9). Those measurements followed the testing sequence in Parafrase. Data dependence distance was also measured between each dependent reference pair (Table 10). The large percentage of uncertain dependence distances (over 86%) suggests that more sophisticated algorithms are needed for distance calculation. It also calls for more effective schemes for data synchronization and DO-loop scheduling. However, in our measurements, we did not separate numerical libraries from user programs. It is conceivable that, because of the generality in library routines, there might be more unknown symbolic terms than in user programs. In our study, we have more numerical packages than user programs; hence the statistics might be more biased toward library routines than toward user programs. In future studies, it would be interesting to see if there are differences between these two groups of programs.
Acknowledgement
We thank the referees for their careful reading and insightful comments, which have helped us to improve the paper significantly.
[1] J. R. Allen, "Dependence analysis for subscripted variables and its application to program transformations," Ph.D. dissertation, Department of Mathematical Sciences, Rice University, Houston, TX, April 1983.
[2] J. R. Allen and K. Kennedy, "Automatic translation of Fortran programs to vector form," Dept. of Computer Science, Rice University, Houston, TX, Rice Comp TR84-9, July 1984.
[3] U. Banerjee, "Data dependence in ordinary programs," Department of Computer Science, University of Illinois at Urbana-Champaign, Rpt. No. 76-837, Nov. 1976.
[4] U. Banerjee, "Speedup of ordinary programs," Ph.D. dissertation, University of Illinois at Urbana-Champaign, DCS Rpt. No. UIUCDCS-R-79-989, 1979.
[5] U. Banerjee, Dependence Analysis for Supercomputing, Kluwer Academic Publishers, Norwell, Mass., 1988.
[6] J. Banning, "A method for determining the side effects of procedure calls," Ph.D. dissertation, Stanford University, Aug. 1978.
[7] M. Burke and R. Cytron, "Interprocedural dependence analysis and parallelization," Proc. of the ACM SIGPLAN '86 Symposium on Compiler Construction, ACM SIGPLAN Not., Vol. 21, No. 7, pp. 162-175, July 1986.
[8] D. Callahan, K. Cooper, K. Kennedy, and L. Torczon, "Interprocedural constant propagation," Proc. of the ACM SIGPLAN '86 Symp. on Compiler Construction, ACM SIGPLAN Not., Vol. 21, No. 6, June 1986.
[9] R. G. Cytron, "Compile-time scheduling and optimization for multiprocessors," Ph.D. dissertation, University of Illinois at Urbana-Champaign, DCS Rpt. UIUCDCS-R-84-1177, 1984.
[10] J. Dongarra, J. Bunch, C. Moler, and G. W. Stewart, LINPACK Users' Guide, SIAM, Philadelphia, 1979.
[11] J. A. Fisher, J. R. Ellis, J. C. Ruttenberg, and A. Nicolau, "Parallel processing: A smart compiler and a dumb machine," Proc. of the ACM SIGPLAN '84 Symp. on Compiler Construction, SIGPLAN Notices, Vol. 19, No. 6, June 1984.
[12] Z. Fang, P. Yew, P. Tang, and C. Zhu, "Dynamic processor self-scheduling for general parallel nested loops," Proc. 1987 Int'l. Conf. on Parallel Processing, pp. 1-10, August 1987.
[13] D. Kuck, R. Kuhn, B. Leasure, and M. Wolfe, "The structure of an advanced vectorizer for pipelined processors," Proceedings of COMPSAC 80, The 4th International Computer Software and Applications Conference, October 1980, pp. 709-715.
[14] D. Kuck, R. Kuhn, D. Padua, B. Leasure, and M. Wolfe, "Dependence graphs and compiler optimizations," Proceedings of the 8th ACM Symposium on Principles of Programming Languages, Williamsburg, VA, January 1981, pp. 207-218.
[15] D. Kuck, A. Sameh, R. Cytron, A. Veidenbaum, et al., "The effects of program restructuring, algorithm change, and architecture choice on program performance," Proc. 1984 Int'l. Conf. on Parallel Processing, pp. 129-138, August 1984.
[16] D. Kuck, The Structure of Computers and Computations, Vol. 1, John Wiley and Sons, New York, 1978.
[17] D. J. Kuck, Y. Muraoka, and S.-C. Chen, "On the number of operations simultaneously executable in Fortran-like programs and their resulting speedup," IEEE Trans. Comput., vol. C-21, No. 12, pp. 1293-1310, Dec. 1972.
[18] A. Lichnewsky and F. Thomasset, "Introducing symbolic problem solving techniques in the dependence testing phases of a vectorizer," Proc. 1988 Int'l. Conf. on Supercomputing, July 1988.
[19] Z. Li, P.-C. Yew, and C.-Q. Zhu, "An efficient data dependence analysis for parallelizing compilers," IEEE Trans. Parallel and Distributed Systems, vol. 1, No. 1, pp. 26-34, Jan. 1990.
[20] A. Nicolau and J. A. Fisher, "Measuring the parallelism available for very long instruction word architectures," IEEE Trans. Comput., vol. C-33, No. 11, pp. 968-976, Nov. 1984.
[21] D. A. Padua, "Multiprocessors: Discussions of some theoretical and practical problems," Ph.D. dissertation, University of Illinois at Urbana-Champaign, DCS Rpt. UIUCDCS-R-79-990, Nov. 1979.
[22] J.-K. Peir, "Program partitioning and synchronization on multiprocessor systems," Ph.D. dissertation, University of Illinois at Urbana-Champaign, DCS Rpt. UIUCDCS-R-86-1259, Mar. 1986.
[23] C. D. Polychronopoulos, D. J. Kuck, and D. A. Padua, "Optimal processor allocation of programs on multiprocessor systems," Proc. 1986 Int'l. Conf. on Parallel Processing, Aug. 1986.
[24] W. Shang and J. A. B. Fortes, "Independent partitioning of algorithms with uniform dependencies," Proc. 1988 Int'l. Conf. Parallel Processing, Aug. 1988, pp. 6-33.
[25] B. J. Smith, "A pipelined, shared resource MIMD computer," Proc. 1978 Int'l. Conf. Parallel Processing, Aug. 1978, pp. 6-8.
[26] P. Tang, P. Yew, and C. Zhu, "Impact of self-scheduling order on performance of multiprocessor systems," Proc. of ACM 1988 Int'l. Conf. on Supercomputing, July 1988.
[27] P. Tang, P. Yew, and C. Zhu, "Algorithms for generating data-level synchronization instructions," Center for Supercomputing Research and Development, University of Illinois at Urbana-Champaign, Rpt. No. 733, Urbana, January 1988.
[28] M. J. Wolfe, "Optimizing Supercompilers for Supercomputers," Ph.D. dissertation, University of Illinois at Urbana-Champaign, DCS Rpt. No. UIUCDCS-R-82-1105, October 1982.
[29] C. Q. Zhu and P. C. Yew, "A scheme to enforce data dependence on large multiprocessor systems," IEEE Trans. Software Eng., vol. SE-13, pp. 726-739, June 1987.
Table 1. Analyzed Fortran packages

Package   Description                                        Subroutines   Lines
LINPACK   Linear system package
EISPACK   Eigensystem package                                 70            11700
ITPACK    Sparse matrix algorithms (iterative methods)        68            559
MSL       Mathematic science library (CDC)                    407           20473
FISHPAK   Separable elliptic partial differential
          equations package                                   159           2264
ACM       Random algorithms from ACM                          25            2712
OLD       Checon Chebyshev economization program              74            321
BARO      Shallow water atmospheric model                     8             1052
NASA      Program from NASA                                   4             78
SMPL      Flow analysis program                               15            2072
WEATHER   Weather forecasting program                         64            391
SPICE     Circuit simulation program                          120           17857
Total                                                         1074          102195
# Dim.
--R
Linear Partially linear nonlinear
Table 2.
Array element
Table 3.
dimensions Undefined Defined
Table 4.
dimension Undefined Defined Def.
Total array references 623 623 623
references w/ nonlinear subscripts 305
dimensions Undefined Defined Def.
Total array references 250 250 250
references w/ nonlinear subscripts
eferences w/ partially linear subscripts 67 27
Table 5.
Total array reference pairs 18698 2867
Table 6.
Pairs w/ linear/partially linear subscripts 9257 2798
Pairs w/ coupled subscripts (linear) 2935
airs w/ coupled subscripts (partially linear) 1170 13
Table 7.
Table 8.
with linear or partially linear subscripts
Test method Usage frequency Indep.
Table 9.
detection rate of various dependence test methods
--TR
Interprocedural constant propagation
Interprocedural dependence analysis and parallelization
Program partitioning and synchronization on multiprocessor systems
A scheme to enforce data dependence on large multiprocessor systems
Introducing symbolic problem solving techniques in the dependence testing phases of a vectorizer
Impact of self-scheduling order on performance on multiprocessor systems
Parallel processing
Dependence Analysis for Supercomputing
Dependence graphs and compiler optimizations
Structure of Computers and Computations
An Efficient Data Dependence Analysis for Parallelizing Compilers
A method for determining the side effects of procedure calls.
Speedup of ordinary programs
Multiprocessors
Dependence analysis for subscripted variables and its application to program transformations
Optimizing supercompilers for supercomputers
Compile-time scheduling and optimization for asynchronous machines (multiprocessor, compiler, parallel processing)
--CTR
Weng-Long Chang , Chih-Ping Chu, The infinity Lambda test, Proceedings of the 12th international conference on Supercomputing, p.196-203, July 1998, Melbourne, Australia
Reiner W. Hartenstein , Karin Schmidt, Combining structural and procedural programming by parallelizing compilation, Proceedings of the 1995 ACM symposium on Applied computing, p.130-134, February 26-28, 1995, Nashville, Tennessee, United States
Zhiyuan Li, Compiler algorithms for event variable synchronization, Proceedings of the 5th international conference on Supercomputing, p.85-95, June 17-21, 1991, Cologne, West Germany
Dan Grove , Linda Torczon, Interprocedural constant propagation: a study of jump function implementation, ACM SIGPLAN Notices, v.28 n.6, p.90-99, June 1993
Niclas Andersson , Peter Fritzson, Generating parallel code from object oriented mathematical models, ACM SIGPLAN Notices, v.30 n.8, p.48-57, Aug. 1995
Venugopal , William Eventoff, Automatic transformation of FORTRAN loops to reduce cache conflicts, Proceedings of the 5th international conference on Supercomputing, p.183-193, June 17-21, 1991, Cologne, West Germany
Lee-Chung Lu , Marina C. Chen, Subdomain dependence test for massive parallelism, Proceedings of the 1990 conference on Supercomputing, p.962-972, October 1990, New York, New York, United States
Lee-Chung Lu , Marina C. Chen, Subdomain dependence test for massive parallelism, Proceedings of the 1990 ACM/IEEE conference on Supercomputing, p.962-972, November 12-16, 1990, New York, New York
Michael P. Gerlek , Eric Stoltz , Michael Wolfe, Beyond induction variables: detecting and classifying sequences using a demand-driven SSA form, ACM Transactions on Programming Languages and Systems (TOPLAS), v.17 n.1, p.85-122, Jan. 1995
On Effective Execution of Nonuniform DOACROSS Loops, IEEE Transactions on Parallel and Distributed Systems, v.7 n.5, p.463-476, May 1996
Manish Gupta , Prithviraj Banerjee, PARADIGM: a compiler for automatic data distribution on multicomputers, Proceedings of the 7th international conference on Supercomputing, p.87-96, July 19-23, 1993, Tokyo, Japan
Kuei-Ping Shih , Jang-Ping Sheu , Chih-Yung Chang, Efficient Address Generation for Affine Subscripts in Data-Parallel Programs, The Journal of Supercomputing, v.17 n.2, p.205-227, Sept. 2000
Michael Wolfe, Beyond induction variables, ACM SIGPLAN Notices, v.27 n.7, p.162-174, July 1992
E. Christopher Lewis , Calvin Lin , Lawrence Snyder, The implementation and evaluation of fusion and contraction in array languages, ACM SIGPLAN Notices, v.33 n.5, p.50-59, May 1998
Michael O'Boyle , G. A. Hedayat, A transformational approach to compiling Sisal for distributed memory architectures, Proceedings of the 6th international conference on Supercomputing, p.335-346, July 19-24, 1992, Washington, D. C., United States
Vivek Sarkar , Guang R. Gao, Optimization of array accesses by collective loop transformations, Proceedings of the 5th international conference on Supercomputing, p.194-205, June 17-21, 1991, Cologne, West Germany
Weng-Long Chang , Chih-Ping Chu , Jia-Hwa Wu, A Polynomial-Time Dependence Test for Determining Integer-Valued Solutions in Multi-Dimensional Arrays Under Variable Bounds, The Journal of Supercomputing, v.31 n.2, p.111-135, December 2004
C. Koelbel , P. Mehrotra, Compiling Global Name-Space Parallel Loops for Distributed Execution, IEEE Transactions on Parallel and Distributed Systems, v.2 n.4, p.440-451, October 1991
Yunheung Paek , Jay Hoeflinger , David Padua, Simplification of array access patterns for compiler optimizations, ACM SIGPLAN Notices, v.33 n.5, p.60-71, May 1998
Guohua Jin , Zhiyuan Li , Fujie Chen, An Efficient Solution to the Cache Thrashing Problem Caused by True Data Sharing, IEEE Transactions on Computers, v.47 n.5, p.527-543, May 1998
Yunheung Paek , Jay Hoeflinger , David Padua, Efficient and precise array access analysis, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.1, p.65-109, January 2002
Junjie Gu , Zhiyuan Li , Gyungho Lee, Symbolic array dataflow analysis for array privatization and program parallelization, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.47-es, December 04-08, 1995, San Diego, California, United States
Partitioning and Labeling of Loops by Unimodular Transformations, IEEE Transactions on Parallel and Distributed Systems, v.3 n.4, p.465-476, July 1992
Gina Goff , Ken Kennedy , Chau-Wen Tseng, Practical dependence testing, ACM SIGPLAN Notices, v.26 n.6, p.15-29, June 1991
Patricio Buli , Veselko Gutin, An extended ANSI C for processors with a multimedia extension, International Journal of Parallel Programming, v.31 n.2, p.107-136, April
David F. Bacon , Susan L. Graham , Oliver J. Sharp, Compiler transformations for high-performance computing, ACM Computing Surveys (CSUR), v.26 n.4, p.345-420, Dec. 1994 | integer-valued test;parallelizing compilers;fortran programs;FORTRAN;program transformations;program characteristics;data dependence analysis;program compilers;index termsarray references |
629046 | Interactive Parallel Programming using the ParaScope Editor. | The ParaScope Editor, an intelligent interactive editor for parallel Fortran programs, which is the centerpiece of the ParaScope project, an integrated collection of tools to help scientific programmers implement correct and efficient parallel programs, is discussed. ParaScope Editor reveals to users potential hazards of a proposed parallelization in a program. It provides a variety of powerful interactive program transformations that have been shown useful in converting programs to parallel form. ParaScope Editor supports general user editing through a hybrid text and structure editing facility that incrementally analyzes the modified program for potential hazards. It is shown that ParaScope Editor supports an exploratory programming style in which users get immediate feedback on their various strategies for parallelization. | Introduction
The widespread availability of affordable parallel machines has increasingly challenged the
abilities of programmers and compiler writers alike. Programmers, eager to use new machines
to speed up existing sequential scientific codes, want maximal performance with minimal ef-
fort. The success of automatic vectorization has led users to seek a similarly elegant software
solution to the problem of programming parallel computers. A substantial amount of research
has been conducted on whether sequential Fortran 77 programs can be automatically converted
without user assistance to execute on shared-memory parallel machines [2, 3, 7, 8, 42,
49]. The results of this research have been both promising and disappointing. Although such
systems can successfully parallelize many interesting programs, they have not established a
level of success that will make it possible to avoid explicit parallel programming by the user.
Hence, research has turned increasingly to the problem of supporting parallel programming.
Systems for automatic detection of parallelism are based on the analysis of dependences
in a program, where two statements depend on each other if the execution of one affects the
other. The process of calculating dependences for a program is known as dependence analysis.
A dependence crossing between regions that are executed in parallel may correspond to a
data race, indicating the existence of potential nondeterminism in the program. In general,
an automatic parallelization system cannot be allowed to make any transformation which
introduces a data race or changes the original semantics of the program.
The systems for automatic detection of parallelism in Fortran suffer from one principal
drawback: the inaccuracy of their dependence analysis. The presence of complex control flow,
symbolic expressions, or procedure calls are all factors which limit the dependence analyzer's
ability to prove or disprove the existence of dependences. If it cannot be proven that a
dependence does not exist, automatic tools must be conservative and assume a dependence,
lest they enable transformations that will change the meaning of the program. In these
situations, the user is often able to solve the problem immediately when presented with the
specific dependence in question. Unfortunately, in a completely automatic tool the user is
never given this opportunity 1 .
parallelization systems (for example, see [45]) provide a directive that instructs the compiler
to ignore all dependences. The use of broad directives like this is unsound because of the danger that the user
will discard real dependences with the false ones, leading to errors that are hard to detect.
To address this problem, we developed Ptool, an interactive browser which displays the
dependences present in a program [6]. Within Ptool, the user selects a specific loop and is
presented with what the analyzer believes are the dependences preventing the parallelization
of that loop. The user may then confirm or delete these dependences based on their knowledge
of the underlying algorithms of the program. Although Ptool is effective at helping users
understand the parallelism available in a given Fortran program, it suffers because it is a
browser rather than an editor [33]. When presented with dependences, the user frequently
sees a transformation that can eliminate a collection of dependences, only to be frustrated
because performing that transformation requires moving to an editor, making the change,
and resubmitting the program for dependence analysis. Furthermore, Ptool cannot help
the user perform a transformation correctly.
The ParaScope Editor, Ped, overcomes these disadvantages by permitting the programmer
and tool each to do what they do best: the tool builds dependences, provides expert
advice, and performs complex transformations, while the programmer determines which dependences
are valid and selects transformations to be applied. When transformations are
performed, Ped updates both the source and the dependence information quickly and cor-
rectly. This format avoids the possibility of the user accidentally introducing errors into the
program. As its name implies, the ParaScope Editor is based upon a source editor, so it also
supports arbitrary user edits. The current version reconstructs dependences incrementally
after any of the structured transformations it provides and for simple edits, such as the deletion
or addition of an assignment statement. For arbitrary unstructured edits with a broader
scope, batch analysis is used to reanalyze the entire program.
The current prototype of Ped is a powerful tool for exploring parallel programs: it
presents the program's data and control relationships to the user and indicates the effectiveness
of program transformations in eliminating impediments to parallelism. It also permits
arbitrary program changes through familiar editing operations. Ped supports several styles
of parallel programming. It can be used to develop new parallel codes, convert sequential
codes into parallel form, or analyze existing parallel programs. In particular, Ped currently
accepts and generates Fortran 77, IBM parallel Fortran [36], and parallel Fortran for the
Sequent Symmetry [46]. The Parallel Computing Forum is developing PCF Fortran [44], a set
of parallel extensions that a large number of manufacturers are committed to accepting. These
extensions will be supported when they emerge, obviating the current need to support numerous
Fortran dialects.
The remainder of this paper is organized as follows. Section 2 discusses the evolution of
the ParaScope Parallel Programming Environment and in particular the ParaScope Editor.
Section 3 outlines the program analysis capabilities of Ped, and Section 4 describes the
manner in which dependence information is displayed and may be modified. A survey of the
program transformations provided by Ped appears in Section 5. Issues involving interactive
programming in Ped are discussed in Section 6. Section 7 summarizes related research, and
Section 8 offers some conclusions.
2 Background
Ped is being developed in the context of the ParaScope project [17], a parallel programming
environment based on the confluence of three major research efforts at Rice University: IR n ,
the Rice Programming Environment [23]; PFC , a Parallel Fortran Converter [8]; and Ptool,
a parallel programming assistant [6]. All of these are major contributors to the ideas behind
the ParaScope Editor, so we begin with a short description of each. Figure 1 illustrates the
evolution of ParaScope.
2.1 The IR n Programming Environment
Ped enjoys many advantages because it is integrated into the IR n Programming Environment.
Begun in 1982, the IR n Programming Environment project pioneered the use of interprocedural
analysis and optimization in a program compilation system. To accomplish this, it has
built a collection of tools that collaborate to gather information needed to support interprocedural
analysis while preparing a program for execution. Included in this collection is a source
editor for Fortran that combines the features of text and structure editing, representing programs
internally as abstract syntax trees. Also available are a whole program manager, a
debugger for sequential and parallel programs, interprocedural analysis and optimizations,
and an excellent optimizing scalar compiler.
IR n is written in C and runs under X Windows. It is a mature environment that has
been distributed in both source and executable form to many external sites.
Figure 1: Evolution of ParaScope
One of the
goals of IR n has been a consistent user interface that is easy to learn and use. As with any
large system consisting of many independent and related portions, another goal has been to
create a modular, easily modified implementation. The resulting environment is well suited
for integrating and extending our research in parallel programming. ParaScope includes and
builds on all of the functionality of the IR n project.
2.2 PFC
The PFC project was begun in 1979 with the goal of producing an automatic source to
source vectorizer for Fortran. It is written in PL/1 and runs on an IBM mainframe. In recent
years the project has focused on the more difficult problem of automatically parallelizing
sequential code. PFC performs data dependence analysis [9, 13, 14, 56], interprocedural
side effect analysis [25] and interprocedural constant propagation [18]. More recently an
implementation of regular section analysis [20, 32], which determines the subarrays affected
by procedure calls, has been completed. This analysis significantly improves the precision of
PFC's dependence graph, because arrays are no longer treated as single units across procedure
calls. PFC also performs control dependence analysis [28], which describes when the execution
of one statement directly determines if another will execute.
The analyses performed in PFC result in a statement dependence graph that specifies a
partial ordering on the statements in the program. This ordering must be maintained to
preserve the semantics of the program after parallelization. The dependence graph is conservative
in that it may include dependences that do not exist, but cannot be eliminated
because of imprecise dependence analysis. In being conservative, PFC guarantees that only
safe transformations are applied, but many opportunities for parallelism may be overlooked.
A special version of PFC has been modified to export the results of control and data depen-
dence, dataflow, symbolic, and interprocedural analysis in the form of an ascii file for use by
Ptool and Ped.
PFC concentrates on discovering and enhancing loop level parallelism in the original
sequential program. Although loops do not contain all the possible parallelism in a program,
there are several reasons for focusing on them. In scientific and numerical applications, most
computation occurs in loops. Also, separate iterations of a loop usually offer portions of
computation that require similar execution times, and often provide enough computation to
keep numerous processors occupied.
PFC has had many successes. It was influential in the design of several commercial vectorization
systems [47], and it has successfully found near-optimal parallelism for a selected set
of test cases [19]. However, it has not been successful enough to obviate the need for explicit
parallel programming. In large complex loops, it tends to find many spurious race conditions,
any one of which is sufficient to inhibit parallelization. Therefore, we have also turned our
attention to the use of dependence information to support the parallel programming process.
2.3 PTOOL
Ptool is a program browser that was developed to overcome some of the limitations of
automatic parallelizing systems by displaying dependences in Fortran programs. It is in use
as a debugging and analysis tool at various sites around the country, such as the Cornell
National Supercomputing Facility. Ptool was developed at the request of researchers at Los
Alamos National Laboratory who wanted to debug programs parallelized by hand. It uses
a dependence graph generated by PFC to examine the dependences that prevent loops from
being run in parallel. To assist users in determining if loops may be run in parallel, Ptool
also classifies variables as shared or private.
When examining large scientific programs, users frequently found an overwhelming number
of dependences in large loops, including spurious dependences due to imprecise dependence
analysis [33]. To ameliorate this problem, a number of improvements were made in
PFC's dependence analysis. In addition, a dependence filtering mechanism was incorporated
in the Ptool browser that could answer complex queries about dependences based on
their characteristics. The user could then use this mechanism to focus on specific classes of
dependences. Ped incorporates and extends these abilities (see Section 4).
2.4 The ParaScope Editor
The ParaScope Editor is an interactive tool for the sophisticated user. Programmers at Rice,
Los Alamos, and elsewhere have indicated that they want to be involved in the process of
parallel programming. They feel the output of automatic tools is confusing, because it does
not easily map to their original code. Often sequential analysis may be invalidated by the
parallel version or, even worse, be unavailable. This complicates the users' ability to improve
the modified source. They want their program to be recognizable; they want to be in control
of its parallelization; and they want to be able to tailor codes for general usage as well as for
specific inputs. Ped is intended for users of this type.
Ped is an interactive editor which provides users with all of the information available
to automatic tools. In addition, Ped understands the dependence graph and parallel con-
structs, and can provide users with both expert advice and mechanical assistance in making
changes and corrections to their programs. Ped will also update both the source and dependence
graph incrementally after changes. In Ped, a group of specific transformations provide
the format for modifying programs in a structured manner. When changes take this structured
form, updates are incremental and immediate. Ped also supports general arbitrary
edits. When program changes are unstructured, source updates are done immediately and
dependence analysis of the program is performed upon demand.
3 Program Analysis
3.1 Control and Data Dependences
In sequential languages such as Fortran, the execution order of statements is well defined,
making for an excellent program definition on which to build the dependence graph. The
statement dependence graph describes a partial order between the statements that must
be maintained in order to preserve the semantics of the original sequential program. A
dependence between statements S1 and S2, denoted S1 δ S2, indicates that S2 depends on S1
and that the execution of S1 must precede the execution of S2.
There are two types of dependence, control and data. A control dependence, S1 δ S2,
indicates that the execution of S1 directly determines whether S2 will be executed at all. The
following formal definitions of control dependence and the post-dominance relation are taken
from the literature [27, 28].
A node x is post-dominated by a node y in the control flow graph G_f if every path from x
to stop contains y, where stop is the exit node of G_f.
Given two statements x, y ∈ G_f, y is control dependent on x if and only if:
1. there exists a non-null path p from x to y, such that y post-dominates every node
between x and y on p, and
2. y does not post-dominate x.
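For intuition, here is a small Fortran fragment (purely illustrative; the names are not from the paper) in which an assignment is control dependent on an IF statement:
      SUBROUTINE CDEP(X, Y)
      REAL X, Y
C     The assignment to Y executes only when the test on X is true,
C     so the assignment is control dependent on the IF statement.
      IF (X .GT. 0.0) THEN
         Y = X * 2.0
      ENDIF
      RETURN
      END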
A data dependence, S1 δ S2, indicates that S1 and S2 use or modify a common variable in
a way that requires their execution order to be preserved. There are three types of data
dependence [40]:
• True (flow) dependence occurs when S1 stores a variable that S2 later uses.
• Anti dependence occurs when S1 uses a variable that S2 later stores.
• Output dependence occurs when S1 stores a variable that S2 later stores.
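A small illustrative fragment (hypothetical names) in which the scalar T gives rise to all three kinds of data dependence:
      SUBROUTINE DTYPES(A, B)
      REAL A(2), B(2), T
C     S1 stores T and S2 later uses it:    true (flow) dependence.
C     S2 uses T and S3 later stores it:    anti dependence.
C     S1 stores T and S3 later stores it:  output dependence.
C S1:
      T = A(1) + 1.0
C S2:
      B(1) = T * 2.0
C S3:
      T = A(2) - 1.0
      B(2) = T
      RETURN
      END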
Dependences are also characterized by either being loop-carried or loop-independent [5, 9].
Consider the following loop:
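Only the closing ENDDO of the original listing survives here; the sketch below is a plausible reconstruction consistent with the discussion that follows (S1 defines A(I), S2 uses A(I) in the same iteration, and S3 uses A(I-1) from the previous iteration). The remaining array names and the loop bounds are assumptions.
      DO I = 2, N
C        S1: defines A(I).
         A(I) = B(I) + 1.0
C        S2: uses A(I) from the same iteration.
         C(I) = A(I) * 2.0
C        S3: uses A(I-1) from the previous iteration.
         D(I) = A(I-1) + E(I)
      ENDDO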
The dependence S1 δ S2 is a loop-independent true dependence, and it exists regardless of
the loop constructs surrounding it. Loop-independent dependences, whether data or control,
are dependences that occur in a single iteration of the loop and in themselves do not inhibit
a loop from running in parallel. For example, if S1 δ S2 were the only dependence in the loop,
this loop could be run in parallel, because statements executed on each iteration affect only
other statements in the same iteration, and not in any other iterations.
In comparison, S1 δ S3 is a loop-carried true dependence. Loop-carried dependences are
dependences that cross different iterations of some loop, and they constrain the order in
which iterations of that loop may execute. For this loop-carried dependence, S3 uses a value
that was created by S1 on the previous iteration of the I loop. This prevents the loop from
being run in parallel without explicit synchronization. When there are nested loops, the level
of any carried dependence is the outermost loop on which it first arises [5, 9].
3.2 Dependence Analysis
A major strength of Ped is its ability to display dependence information and utilize it to
guide structured transformations. Precise analysis of both control and data dependences in
the program is thus very important. Ped's dependence analyzer consists of four major com-
ponents: the dependence driver, scalar dataflow analysis, symbolic analysis, and dependence
testing.
The dependence driver coordinates other components of the dependence analyzer by handling
queries, transformations, and edits. Scalar dataflow analysis constructs the control flow
graph and postdominator tree for both structured and unstructured programs. Dominance
frontiers are computed for each scalar variable and used to build the static single assignment
graph for each procedure [26]. A coarse dependence graph for arrays is constructed by
connecting {Defs} with {Defs ∪ Uses} for array variables in each loop nest in the program.
Symbolic analysis determines and compares the values of expressions in programs. When
possible, this component eliminates or characterizes symbolic expressions used to determine
loop bounds, loop steps, array subscript expressions, array dimensions, and control flow. Its
main goal is to improve the precision of dependence testing. The SSA graph provides a
framework for performing constant propagation [53], auxiliary induction variable detection,
expression folding, and other symbolic analysis techniques.
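A hypothetical illustration of how this improves dependence testing: once constant propagation discovers the value of K, the subscript pair below can be tested exactly.
      K = 2
      DO I = 1, N
C        After K is replaced by the constant 2, the references A(I+2)
C        and A(I) can be tested exactly, yielding a loop-carried true
C        dependence of distance 2; without knowing K, the analyzer
C        would have to assume a dependence with unknown distance.
         A(I+K) = A(I) + 1.0
      ENDDO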
Detecting data dependences in a program is complicated by array references, since it is
difficult to determine whether two array references may ever access the same memory location.
A data dependence exists between these references only if the same location may be accessed
by both references. Dependence testing is the process of discovering and characterizing data
dependences between array references. It is a difficult problem which has been the subject
of extensive research [9, 13, 14, 56]. Conservative data dependence analysis requires that if
a dependence cannot be disproven, it must be assumed to exist. False dependences result
when conservative dependences do not actually exist. The most important objective of the
dependence analyzer is to minimize false dependences through precise analysis.
Ped applies a dependence testing algorithm that classifies array references according to
their complexity (number of loop index variables) and separability (no shared index variables).
Fast yet exact tests are applied to simple separable subscripts. More powerful but expensive
tests are held in reserve for the remaining subscripts. In most cases, results can be merged
for an exact test [30].
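A hypothetical fragment showing how subscripts fall into these classes:
      SUBROUTINE CLASSES(A, B, N, M)
      INTEGER N, M, I, J
      REAL A(N, M), B(N + M)
      DO 10 J = 1, M
         DO 20 I = 2, N
C           A(I, J) versus A(I-1, J): each subscript position contains
C           a single, distinct index variable, so the positions are
C           simple and separable; fast exact tests apply and their
C           results can be merged.
            A(I, J) = A(I-1, J) + 1.0
C           B(I+J) versus B(I): the first reference couples two index
C           variables in one subscript, so a more powerful and more
C           expensive test is needed.
            B(I+J) = B(I) * 0.5
20       CONTINUE
10    CONTINUE
      RETURN
      END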
Ped also characterizes all dependences by the flow of values with respect to the enclosing
loops. This information is represented as a hybrid distance/direction vector, with one element
per enclosing loop. Each element in the vector represents the distance or direction of the
flow of values on that loop. The hybrid vector is used to calculate the level of all loop-carried
dependences generated by an array reference pair. The dependence information is
used to refine the coarse dependence graph (constructed during scalar dataflow analysis) into
a precise statement dependence graph.
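For example (an illustrative fragment), the following nest produces a dependence whose hybrid vector is exact:
      SUBROUTINE HYBRID(A, N, M)
      INTEGER N, M, I, J
      REAL A(N, M)
      DO 10 I = 2, N
         DO 20 J = 1, M
C           The store to A(I, J) and the use of A(I-1, J) form a true
C           dependence with hybrid vector (1, 0): distance 1 on the
C           outer I loop and distance 0 on the inner J loop, so the
C           dependence is carried at the level of the I loop.
            A(I, J) = A(I-1, J) + 1.0
20       CONTINUE
10    CONTINUE
      RETURN
      END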
The statement dependence graph contains control and data dependences for the program.
Since Ped focuses on loop-level parallelism, the dependence graph is designed so that dependences
on a particular loop may be collected quickly and efficiently. The dependences may
then be displayed to the user, or analyzed to provide expert advice with respect to some
transformation. The details of the dependence analysis techniques in Ped are described
elsewhere [38].
3.3 Interprocedural Analysis
The presence of procedure calls complicates the process of detecting data dependences. Interprocedural
analysis is required so that worst case assumptions need not be made when
calls are encountered. Interprocedural analysis provided in ParaScope discovers aliasing, side
effects such as variable definitions and uses, and interprocedural constants [18, 25]. Unfor-
tunately, improvements to dependence analysis are limited because arrays are treated as
monolithic objects, and it is not possible to determine whether two references to an array
actually access the same memory location.
To improve the precision of interprocedural analysis, array access patterns can be summarized
in terms of regular sections or data access descriptors. These abstractions describe
subsections of arrays such as rows, columns, and rectangles that can be quickly intersected
to determine whether dependences exist [12, 20, 32]. By distinguishing the portion of each
array affected by a procedure, regular sections provide precise analysis of dependences for
loops containing procedure calls.
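As an illustration (a hypothetical routine), a regular section can record that a call modifies only one column of an array:
      SUBROUTINE SCALE(A, N, J)
      INTEGER N, J, I
      REAL A(N, N)
C     A regular section summary for SCALE records that only column J
C     of A is modified, roughly A(1:N, J).  A loop that calls SCALE
C     with a different value of J on each iteration therefore need
C     not be assumed to carry a dependence on A.
      DO 10 I = 1, N
         A(I, J) = A(I, J) * 2.0
10    CONTINUE
      RETURN
      END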
3.4 Synchronization Analysis
A dependence is preserved if synchronization guarantees that the endpoints of the dependence
are always executed in the correct order. Sophisticated users may wish to employ event
synchronization to enforce an execution order when there are loop-carried dependences in a
parallel loop. In these cases, it is important to determine if the synchronization preserves all
of the dependences in the loop. Otherwise, there may exist race conditions.
Establishing that the order specified by certain dependences will always be maintained
has been proven Co-NP-hard. However, efficient techniques have been developed to identify
dependences preserved in parallel loops by post and wait event synchronization [21, 22, 52].
Ped utilizes these techniques in a transformation that determines whether a particular dependence
is preserved by event synchronization in a loop. Other forms of synchronization
are not currently handled in Ped. We intend to expand our implementation and include a
related technique that automatically inserts synchronization to preserve dependences.
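The fragment below is schematic only; post and wait constructs differ across parallel Fortran dialects, so no particular syntax is implied. It shows the kind of loop such analysis must reason about: the WAIT/POST pair is intended to preserve the loop-carried true dependence on A inside a parallel loop.
      PARALLEL DO I = 2, N
C        Wait until the previous iteration has produced A(I-1); the
C        first iteration uses a value defined before the loop, so it
C        does not wait.
         IF (I .GT. 2) WAIT(EV(I-1))
         A(I) = A(I-1) + B(I)
C        Signal that A(I) is now available to the next iteration.
         POST(EV(I))
      ENDDO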
3.5 Implementation Status
Though the implementation of dependence analysis in Ped has made much progress, several
parts are still under construction. Underlying structures such as the control flow graph,
postdominator tree, and SSA graphs have been built, but are not yet fully utilized by the
dependence analyzer. Ped propagates constants, but does not currently perform other forms
of symbolic analysis. Most dependence tests have been implemented, but work remains for
the Banerjee-Wolfe and symbolic tests. Interprocedural analysis of aliases, side effects, and
constants is performed by the ParaScope environment, but is not integrated with Ped's
dependence analysis. This integration is underway as part of a larger implementation encompassing
both interprocedural symbolic and regular section analysis.
To overcome these gaps in the current implementation of dependence analysis, Ped can
import on demand dependence information from PFC . When invoked with a special option,
PFC utilizes its more mature dependence analyzer to produce a file of dependence information.
Ped then converts the dependence file into its own internal representation. This process is a
temporary expedient which will be unnecessary when dependence analysis in Ped is complete.
4 The Dependence Display
This section describes how Ped's interface allows users to view and modify the results of
program analysis. The persistent view provided by Ped appears in Figure 2. The Ped
window is divided into two panes, the text pane and the dependence pane. The text pane is
on top and consists of two parts; a row of buttons under the title bar, and a large area where
Fortran source code is displayed. 2 The buttons in the text pane provide access to functions
such as editing, saving, searching, syntax checking, and program transformations.
Directly below the text pane is the dependence pane in which dependences are displayed.
It also has two parts: buttons for perusing loops and dependences, and a larger pane for
detailed dependence descriptions. The dependence pane shows the results of program analysis
to users. Since Ped focuses on loop level parallelism, the user first selects a loop for
consideration. Regardless of whether the loop is parallel or sequential, the analysis assumes
sequential semantics and the dependences for that loop are displayed.
The current loop can be set by using the next loop and previous loop buttons in the
dependence pane, or by using the mouse to select the loop header in the text pane. The
loop's header and footer are then displayed in italics for the user. Ped will display all
the dependences for the current loop, or one or more of the loop-carried, loop-independent,
control, or private variable 3 dependences. The type of dependences to be displayed can be
selected using the view button. The default view displays just the loop-carried dependences,
because only they represent race conditions that may lead to errors in a parallel loop.
2 The code displayed is a portion of the subroutine newque from the code simple, a two dimensional Lagrangian
hydrodynamics program with heat diffusion, produced by Lawrence Livermore National Laboratory.
3 Private variables are discussed in Section 4.2.
Figure 2: Ped Dependence Display and Filter
Although many dependences may be displayed, only one dependence is considered to be
the current dependence. The current dependence can be set in the dependence pane by using
the next dependence and previous dependence buttons, or by direct selection with the mouse.
For the convenience of the user the current dependence is indicated in both panes. In the
dependence pane, the current dependence is underlined; in the text pane, the source reference
is underlined and the sink reference is emboldened.
For each dependence the following information is displayed in the dependence pane: the
dependence type (control, true, anti, or output), the source and sink variable names involved
in the dependence (if any), a hybrid dependence vector containing direction and/or distance
information (an exclamation point indicates that this information is exact), the loop level on
which the dependence first occurs, and the common block containing the array references.
As we will show in the next section, dependences that are of interest can be further classified
and organized to assist users in concentrating on some important group of dependences.
4.1 The Dependence Filter Facility
Ped has a facility for further filtering classes of dependences out of the display or restricting
the display to certain classes. This feature is needed because there are often too many dependences
for the user to effectively comprehend. For example, the filtering mechanism permits
the user to hide any dependences that have already been examined, or to show only the class
of dependences that the user wishes to deal with at the moment. When an edge is hidden, it
is still in the dependence graph, and all of the transformation algorithms still consider it; it
is simply not visible in the dependence display. Users can also delete dependences that they
feel are false dependences inserted as the result of imprecise dependence analysis. When an
edge is deleted, it is removed from the dependence graph and is no longer considered by the
transformation algorithms.
The dependence filter facility is shown in Figure 2. A class of dependences is specified
by a query. Queries can be used to select sets of dependences according to specific criteria.
The query criteria are the names of variables involved in dependences, the names of common
blocks containing variables of interest, source and sink variable references, dependence type,
and the number of array dimensions. All of the queries, except for the source and sink variable
references, require the user to type a string into the appropriate query field. The variable
reference criteria are set when the user selects the variable reference of interest in the text
pane, and then selects the sink reference or source reference buttons, or both.
Once one or more of the query criteria have been specified, the user can choose to show or
hide the matching dependences. With the show option all of the dependences in the current
dependence list whose attributes match the query become the new dependence list. In Figure
2, we have selected show with the single query criterion: the variable name, drk. With the
hide option all of the dependences in the current list whose characteristics match the query
are hidden from the current list, and the remaining dependences become the new list. All of
the criteria can be set to empty by using the clear button.
The user can also push sets of dependences onto a stack. A push makes the set of dependences
matching the current query become the current database for all subsequent queries. A
pop returns to the dependence database that was active at the time of the last push. Multiple
pushes and corresponding pops are supported. A show all presents all the dependences that
are part of the current database.
The dependence list can be sorted by source reference, sink reference, dependence type,
or common block. Any group of dependences can be selected and deleted from the database
by using the delete button. Delete is destructive, and a removed dependence will no longer
be considered in the transformation algorithms nor appear in the dependence display. In
Section 6, we discuss the implications of dependence deletion.
4.2 Variable Classification
One of the most important parts of determining whether a loop can be run in parallel is
the identification of variables that can be made private to the loop body. This is important
because private variables do not inhibit parallelism. Hence, the more variables that can
legally be made private, the more likely it is that the loop may be safely parallelized.
The variable classification dialog, illustrated in Figure 3, is used to show the classification
of variables referenced in the loop as shared or private. Initially, this classification is based
upon dataflow analysis of the program. Any variable that is:
• defined before the loop and used inside the loop,
• defined inside the loop and used after the loop, or
• defined on one iteration of the loop and used on another
is assumed to be shared.
Figure 3: Ped Variable Classification
In each of these cases, the variable must be accessible to more than
one iteration. All other variables are assumed to be private. To be accessible by all iterations,
shared variables must be allocated to global storage. Private variables must be allocated for
every iteration of a parallel loop, and may be put in a processor's local storage. Notice in
Figure 3, the first loop is parallel. The induction variable, i, is declared as private, because
each iteration needs to have its own copy.
Consider the second loop in Figure 3, but assume n is not live after the loop. Then the
values of n are only needed for a single iteration of the loop. They are not needed by any
other iteration, or after the execution of the loop. Each iteration of the loop must have its
own copy of n, if the results of executing the loop in parallel are to be the same as sequential
execution. Otherwise, if n were shared, the problem would be that one iteration might use
a value of n that was stored by some other iteration. This problem inhibits parallelism, if n
cannot be determined to be private.
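A hypothetical loop of the kind described above: n is defined and then used within the same iteration, so when n is not live after the loop it can be classified as private and the loop can be run in parallel.
      DO I = 1, M
C        n is defined here ...
         N = K(I) + 1
C        ... and used only in this same iteration, so each iteration
C        can safely be given its own private copy of n.
         B(I) = C(N)
      ENDDO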
In Figure 3, the variable classification dialog displays the shared variables in the left
list, and the private variables in the right list for the current loop. If the current loop were
transformed into a parallel loop, variables in the shared list would be implicitly specified to be
in global storage, and the variables in the private list would be explicitly included in a private
statement. The user can select variables from either list with the mouse. Once a variable is
selected the reason for its classification will be displayed. Notice in Figure 3, the variable a
is underlined, indicating that it is selected, and the reason it must be in shared-memory is
displayed at the bottom of the pane.
Users need not accept the classification provided by Ped. They can transfer variables from
one list to the other by selecting a variable and then selecting the arrow button that points
to the other list. When users transfer variables from one list to another, they are making an
assertion that overrides the results of dependence analysis, and there is no guarantee that
the semantics of the sequential loop will be the same as its parallel counterpart.
Usually programmers will try to move variables from shared to private storage to increase
parallelism and to reduce memory contention and latency. To assist users in this task, the
classify vars button supports further classification of the variable lists. This mechanism helps
users to identify shared variables that may be moved to private storage by using transforma-
tions, or by correcting conservative dependences.
5 Structured Program Transformations
Ped provides a variety of interactive structured transformations that can be applied to programs
to enhance or expose parallelism. In Ped transformations are applied according to a
power steering paradigm: the user specifies the transformation to be made, and the system
provides advice and carries out the mechanical details. A single transformation may result in
many changes to the source, which if done one at a time may leave the intermediate program
either semantically incorrect, syntactically incorrect, or both. Power steering avoids incorrect
intermediate stages that may result if the user were required to do code restructuring without
assistance. Also, because the side effects of a transformation on the dependence graph are
known, the graph can be updated directly, avoiding any unnecessary dependence analysis.
In order to provide its users with flexibility, Ped differentiates between safe, unsafe, and
inapplicable transformations. An inapplicable transformation cannot be performed because
it is not mechanically possible. For example, loop interchange is inapplicable when there is
only a single loop. Transformations are safe when they preserve the sequential semantics of
the program. Some transformations always preserve the dependence pattern of the program
and therefore can always be safely applied if mechanically possible. Others are only safe when
the dependence pattern of the program is of a specific form.
An unsafe transformation does not maintain the original program's semantics, but is
mechanically possible. When a transformation is unsafe, users are often given the option to
override the system advice and apply it anyway. For example, if a user selects the parallel
button on a sequential loop with loop-carried dependences, Ped reminds the user of the
dependences. If the user wishes to ignore them, the loop can still be made parallel. This
override ability is extremely important in an interactive tool, where the user is being given
an opportunity to apply additional knowledge that is unavailable to the tool.
To perform a transformation, the user selects a sequential loop and chooses a transformation
from the menu. Only transformations that are enabled may be selected. Transformations
are enabled based on the control flow contained in the selected loop. All the transformations
are enabled when a loop and any loops nested within it contain no other control flow; most
of the transformations are enabled when they contain structured control flow; and only a few
are enabled when there is arbitrary control flow.
Once a transformation is selected, Ped responds with a diagnostic. If the transformation
is safe, a profitability estimate is given on the effectiveness of the transformation. Additional
advice, such as a suggested number of iterations to skew, may be offered as well. If the
transformation is unsafe, a warning explains what makes the transformation unsafe. If the
transformation is inapplicable, a diagnostic describes why the transformation cannot be per-
formed. If the transformation is applicable, and the user decides to execute it, the user selects
the do <transformation name> button. The Fortran source and the dependence graph are
then automatically updated to reflect the transformed code.
The transformations are divided into four categories: reordering transformations, dependence
breaking transformations, memory optimizing transformations, and a few miscellaneous
transformations. Each category and the transformations that Ped currently supports
are briefly described below. 4
5.1 Reordering Transformations
Reordering transformations change the order in which statements are executed, either within
or across loop iterations, without violating any dependence relationships. These transformations
are used to expose or enhance loop level parallelism in the program. They are often
performed in concert with other transformations to structure computations in a way that
allows useful parallelism to be introduced.
• Loop distribution partitions independent statements inside a loop into multiple loops
with identical headers. It is used to separate statements which may be parallelized from
those that must be executed sequentially [37, 38, 40]. The partitioning of the statements
is tuned for vector or parallel hardware as specified by the user (a sketch appears after
this list).
• Loop interchange interchanges the headers of two perfectly nested loops, changing
the order in which the iteration space is traversed. When loop interchange is safe, it
can be used to adjust the granularity of parallel loops [9, 38, 56].
• Loop skewing adjusts the iteration space of two perfectly nested loops by shifting the
work per iteration in order to expose parallelism. When possible, Ped computes and
suggests the optimal skew degree. Loop skewing may be used with loop interchange in
Ped to perform the wavefront method [38, 54].
• Loop reversal reverses the order of execution of loop iterations.
• Loop adjusting adjusts the upper and lower bounds of a loop by a constant. It is
used in preparation for loop fusion.
• Loop fusion can increase the granularity of parallel regions by fusing two contiguous
loops when dependences are not violated [4, 43].
• Statement interchange interchanges two adjacent independent statements.
4 The details of Ped's implementation of several of these transformations appear in [38].
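A minimal sketch of loop distribution on two independent statements (illustrative code, not the example of Figure 4):
C     Before distribution: one loop containing two independent
C     statements.
      DO I = 1, N
         A(I) = A(I) + B(I)
         C(I) = C(I) * D(I)
      ENDDO
C     After distribution: two loops with identical headers, each of
C     which can be considered for parallelization separately.
      DO I = 1, N
         A(I) = A(I) + B(I)
      ENDDO
      DO I = 1, N
         C(I) = C(I) * D(I)
      ENDDO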
5.2 Dependence Breaking Transformations
The following transformations can be used to break specific dependences that inhibit par-
allelism. Often if a particular dependence can be eliminated, the safe application of other
transformations is enabled. Of course, if all the dependences carried on a loop are eliminated,
the loop may then be run in parallel.
• Scalar expansion makes a scalar variable into a one-dimensional array. It breaks
output and anti dependences which may be inhibiting parallelism [41] (a sketch appears
after this list).
• Array renaming, also known as node splitting [41], is used to break anti dependences
by copying the source of an anti dependence into a newly introduced temporary array
and renaming the sink to the new array [9]. Loop distribution may then be used
to separate the copying statement into a separate loop, allowing both loops to be
parallelized.
• Loop peeling peels off the first or last k iterations of a loop as specified by the user.
It is useful for breaking dependences which arise on the first or last k iterations of the
loop [4].
• Loop splitting, or index set splitting, separates the iteration space of one loop into
two loops, where the user specifies at which iteration to split. For example, if DO I = 1, 100
is split at 50, the two loops DO I = 1, 50 and DO I = 51, 100 result. Loop splitting is
useful in breaking crossing dependences, dependences that cross a specific iteration [9].
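A minimal sketch of scalar expansion (TX is a hypothetical temporary array): expanding the scalar T removes the loop-carried anti and output dependences on T that would otherwise inhibit parallelization.
C     Before: every iteration stores and then uses the scalar T.
      DO I = 1, N
         T = A(I) + B(I)
         C(I) = T * T
      ENDDO
C     After scalar expansion: each iteration references its own
C     element TX(I), so the loop-carried dependences on T disappear.
      DO I = 1, N
         TX(I) = A(I) + B(I)
         C(I) = TX(I) * TX(I)
      ENDDO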
5.3 Memory Optimizing Transformations
The following transformations adjust a program's balance between computations and memory
accesses to make better use of the memory hierarchy and functional pipelines. These
transformations are useful for scalar and parallel machines.
• Strip mining takes a loop with a step size of 1 and changes the step size to a new
user-specified step size greater than 1. A new inner loop is inserted which iterates over
the new step size (a sketch appears after this list). If the minimum distance of the
dependences in the loop is no less than the step size, the resultant inner loop may be
parallelized. Used alone, the order of the iterations is unchanged, but used in concert
with loop interchange the iteration space may be tiled [55] to utilize memory bandwidth
and cache more effectively [24].
• Scalar replacement takes array references with consistent dependences and replaces
them with scalar temporaries that may be allocated into registers [15]. It improves the
performance of the program by reducing the number of memory accesses required.
• Loop unrolling decreases loop overhead and increases potential candidates for scalar
replacement by unrolling the body of a loop [4, 38].
• Unroll and jam increases the potential candidates for scalar replacement and pipelining
by unrolling the body of an outer loop in a loop nest and fusing the resulting inner
loops [15, 16, 38].
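A minimal sketch of strip mining with an arbitrary strip size of 64; used alone, the iteration order is unchanged.
C     Before:
      DO I = 1, N
         A(I) = A(I) + B(I)
      ENDDO
C     After strip mining: the outer loop steps over strips and the new
C     inner loop iterates within each strip.
      DO II = 1, N, 64
         DO I = II, MIN(II + 63, N)
            A(I) = A(I) + B(I)
         ENDDO
      ENDDO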
5.4 Miscellaneous Transformations
Finally, Ped has a few miscellaneous transformations.
• Sequential/parallel conversion converts a sequential DO loop into a parallel loop,
and vice versa.
• Statement addition adds an assignment statement.
• Statement deletion deletes an assignment statement.
• Preserved dependence? indicates whether the currently selected dependence is
preserved by any post and wait event synchronization in the loop.
• Constant replacement performs global constant propagation for each procedure in
the program, using the sparse conditional constant algorithm [53]. Any variable found to
have a constant value is replaced with that value, increasing the precision of subsequent
dependence analysis.
5.5 Example
The following example is intended to give the reader the flavor of this type of transformational
system. Consider the first group of nested loops in Figure 4. Let S1 be the first assignment
statement involving the array D, and S2 be the second assignment statement involving the
array E. There are two loop-carried dependences, S1 δ S1 and S2 δ S2. The first is a true
dependence on D carried by the I loop in the first subscript position, and the second is a true
dependence on E carried by the J loop in the second subscript position. We notice that both
loops are inhibited from running in parallel by different dependences which are not involved
with each other. To separate these independent statements, we consider distributing the
loops. Distribution on the inner loop results in the message shown in Figure 4. The message
indicates what the results of performing distribution on this loop would be. The execution
of distribution places S1 in its own J loop, separate from the other statements, within the
original I loop.
Unfortunately, the dependence on S1 carried by the I loop still inhibits parallelism of the I
loop. We perform distribution once again, this time on the outer loop.
After this second distribution, S1 is isolated in its own loop nest, and the remaining
statements form a second loop nest.
We have decided to distribute for parallelism in this example. So even though S2 and S3
are independent, the algorithm leaves them together to increase the amount of computation
in the parallel loop. If we had selected vectorization, they would have been placed in separate
loops.
Continuing our example, notice the second I loop can now be run in parallel, and the inner
J loop in the first nest can be run in parallel. To achieve a higher granularity of parallelism
on the first loop nest, the user can interchange the loops, safely moving the parallelism to
the outer loop. As can be seen in the second loop nest of Figure 4, we have safely separated
the two independent statements and their dependences, achieving two parallel loops.
6 Relating User Changes to Analysis
In previous sections we discussed how users may direct the parallelization process by making
assertions about dependences and variables, as well as by applying structured transforma-
tions. This section first briefly describes editing in Ped. Then, the interaction between
program changes and analysis is examined.
Editing is fundamental for any program development tool because it is the most flexible
means of making program changes. Therefore, the ParaScope Editor integrates advanced
editing features along with its other capabilities. Ped supplies simple text entry and template-based
editing with its underlying hybrid text and structure editor. It also provides search
and replace functions, intelligent and customizable view filters, and automatic syntax and
type checking.
Unlike transformations or assertions, editing causes existing dependence information to be
unreliable. As a result, the transformations and the dependence display are disabled during
editing because they rely on dependence information which may be out of date. After users
finish editing, they can request the program be reanalyzed by selecting the analysis button.
Syntax and type checking are performed first, and any errors are reported. If there are no
errors, dependence analysis is performed. Ped's analysis may be incremental when the scope
of an edit is contained within a loop nest or is an insertion or deletion of a simple assignment
statement. The details of incremental analysis after edits and transformations are discussed
elsewhere [38].
The purpose of an edit may be error correction, new code development, or just to rearrange
existing code. Unlike with transformations, where the correctness of pre-existing source
is assumed, Ped does not know the intent of an edit. Consequently, the user is not advised
as to the correctness of the edit. Instead, the "new" program becomes the basis for dependence
analysis and any subsequent changes. No editing history is maintained. Similarly, any
transformations the user performs before an edit, whether safe or unsafe, are included in the
new basis. However, if prior to editing the user made any assertions, analysis causes them to
be lost.
For example, suppose the user knows the value of a symbolic. Based on this knowledge, the
user deletes several overly conservative dependences in a loop and transforms it into a parallel
loop. Later, the user discovers an error somewhere else in the program and corrects it with
a substantial edit. The user then reanalyzes the program. In the current implementation,
the parallel loop will remain parallel, but any deleted dependences will reappear and, as
experience has shown, annoy users.
As a result, a more sophisticated mechanism is planned [29]. In the future, edges that
are deleted by users will be marked, rather than removed from the dependence graph. Ad-
ditionally, the time, date, user, and an optional user-supplied explanation will be recorded
with any assertions. This mechanism will also support more general types of assertions, such
as variable ranges and values which may affect many dependence edges. These records will
be used during analysis to keep deleted dependences from reappearing. However, to prevent
errors when edits conflict with assertions, users will be given an opportunity to reconsider
any assertions which may have been affected by the edit. Users may delay or ignore this
opportunity. With this mechanism, the assertions will also be available during execution and
debugging. There, if an assertion is found to be erroneous, users can be presented with any
anomalies which may have been ignored, overlooked, or introduced.
7 Related Work
Several other research groups are also developing advanced interactive parallel programming
tools. Ped is distinguished by its large collection of transformations, the expert guidance
provided for each transformation, and the quality of its program analysis and user interface.
Below we briefly describe Sigmacs [48], Pat [50], MIMDizer [1], and Superb [57], placing
emphasis on their unique features.
Sigmacs is an interactive emacs-based programmable parallelizer in the Faust programming
environment. It utilizes dependence information fetched from a project database maintained
by the database server. Sigmacs displays dependences and provides some interactive
program transformations. Work is in progress to support automatic updating of dependence
information after statement insertion and deletion. Faust can compute and display call and
process graphs that may be animated dynamically at run-time [31]. Each node in a process
graph represents a task or a process, which is a separate entity running in parallel. Faust
also provides performance analysis and prediction tools for parallel programs.
Pat can analyze programs containing general parallel constructs. It builds and displays
a statement dependence graph over the entire program. In Pat the program text that corresponds
with a selected portion of the graph can be perused. The user may also view the
list of dependences for a given loop. However, Pat can only analyze programs where only
one write occurs to each variable in a loop. Like Ped, incremental dependence analysis is
used to update the dependence graph after structured transformations [51]. Rather than
analyzing the effects of existing synchronization, Pat can instead insert synchronization to
preserve specific dependences. Since Pat does not compute distance or direction vectors,
loop reordering transformations such as loop interchange and skewing are not supported.
MIMDizer is an interactive parallelization system for both shared and distributed-memory
machines. Based on Forge, MIMDizer performs dataflow and dependence analysis to support
interactive loop transformations. Cray microtasking directives may be output for successfully
parallelized loops. Associated tools graphically display control flow, dependence, profiling,
and call graph information. A history of the transformations performed on a program is
saved for the user. MIMDizer can also generate communication for programs to be executed
on distributed-memory machines.
Though designed to support parallelization for distributed-memory multiprocessors, Superb
provides dependence analysis and display capabilities similar to those of Ped. Superb
also possesses a set of interactive program transformations designed to exploit data parallelism
for distributed-memory machines. Algorithms are described for the incremental update
of use-def and def-use chains following structured program transformations [39].
8 Conclusions
Programming for explicitly parallel machines is much more difficult than sequential program-
ming. If we are to encourage scientists to use these machines, we will need to provide new
tools that have a level of sophistication commensurate with the difficulty of the task. We
believe that the ParaScope Editor is such a tool: it permits the user to develop programs
with the full knowledge of the data relationships in the program; it answers complex questions
about potential sources of error; and it correctly carries out complicated transformations to
enhance parallelism.
Ped is an improvement over completely automatic systems because it overcomes both the
imprecision of dependence analysis and the inflexibility of automatic parallel code generation
techniques by permitting the user to control the parallelization process. It is an improvement
over dependence browsers because it supports incremental change while the user is reviewing
potential problems with the proposed parallelization. Ped has also proven to be a useful
basis for the development of several other advanced tools, including a compiler [34] and data
decomposition tool [10, 11] for distributed-memory machines, as well as an on-the-fly access
anomaly detection system for shared-memory machines [35].
We believe that Ped is representative of a new generation of intelligent, interactive programming
tools that are needed to facilitate the task of parallel programming.
Acknowledgments
We would like to thank Donald Baker, Vasanth Balasundaram, Paul Havlak, Marina Kalem,
Ulrich Kremer, Rhonda Reese, Jaspal Subhlok, Scott Warren, and the PFC research group for
their many contributions to this work. Their efforts have made Ped the useful research tool
it is today. In addition, we gratefully acknowledge the contribution of the IR n and ParaScope
research groups, who have provided the software infrastructure upon which Ped is built.
References
The MIMDizer: A new parallelization tool.
An overview of the PTRAN analysis system for multiprocessing.
A framework for detecting useful parallelism.
A catalogue of optimizing transformations.
Dependence Analysis for Subscripted Variables and Its Application to Program Transformations.
Automatic decomposition of scientific programs for parallel execution.
PFC: A program to convert Fortran to parallel form.
Automatic translation of Fortran programs to vector form.
An interactive environment for data partitioning and distribution.
A static performance estimator to guide data partitioning decisions.
A technique for summarizing data access and its use in parallelism enhancing transformations.
Dependence Analysis for Supercomputing.
Interprocedural dependence analysis and parallelization.
Improving register allocation for subscripted variables.
Estimating interlock and improving balance for pipelined machines.
ParaScope: A parallel programming environment.
Interprocedural constant propagation.
Vectorizing compilers: A test suite and results.
Analysis of interprocedural side effects in a parallel programming environment.
Analysis of event synchronization in a parallel programming tool.
Static analysis of low-level synchronization
Blocking linear algebra codes for memory hierarchies.
The impact of interprocedural analysis and optimization in the IR n programming environment.
An efficient method of computing static single assignment form.
Experiences using control dependence in PTRAN.
The program dependence graph and its use in optimization.
Practical dependence testing.
An integrated environment for parallel programming.
Experience with interprocedural analysis of array side effects.
On the use of diagnostic dependency-analysis tools in parallel programming: Experiences using PTOOL
Compiler support for machine-independent parallel programming in Fortran D
Parallel program debugging with on-the- fly anomaly detection
Advanced tools and techniques for automatic parallelization.
The Structure of Computers and Computations
Analysis and transformation of programs for parallel computation.
The structure of an advanced retargetable vectorizer.
Dependence graphs and compiler optimizations.
PCF Fortran: Language Definition
A Guidebook to Fortran on Supercomputers.
Guide to Parallel Programming on Sequent Computer Systems.
A vectorizing Fortran compiler.
SIGMACS: A programmable programming environment.
An empirical investigation of the effectiveness of and limitations of automatic parallelization.
Incremental dependence analysis for interactive parallelization.
Analysis of Synchronization in a Parallel Programming Environment.
Constant propagation with conditional branches.
Loop skewing: The wavefront method revisited.
More iteration space tiling.
Optimizing Supercompilers for Supercomputers.
SUPERB: A tool for semi-automatic MIMD/SIMD parallelization
| scientific programmers; efficient parallel programs; general user editing; hybrid text; modified program; integrated collection; ParaScope project; exploratory programming style; text editing; FORTRAN; powerful interactive program transformations; interactive programming; ParaScope Editor; structure editing facility; intelligent interactive editor; parallel programming; parallel Fortran programs
629048 | An Implementation of Interprocedural Bounded Regular Section Analysis. | Regular section analysis, which summarizes interprocedural side effects on subarrays in a form useful to dependence analysis while avoiding the complexity of prior solutions, is shown to be a practical addition to a production compiler. Optimizing compilers should produce efficient code even in the presence of high-level language constructs. However, current programming support systems are significantly lacking in their ability to analyze procedure calls. This deficiency complicates parallel programming, because loops with calls can be a significant source of parallelism. The performance of regular section analysis is evaluated on two benchmarks: the LINPACK library of linear algebra subroutines and the Rice Compiler Evaluation Program Suite (RiCEPS), a set of complete application codes from a variety of scientific disciplines. The experimental results demonstrate that regular section analysis is an effective means of discovering parallelism, given programs written in an appropriately modular programming style. | 1 Introduction
A major goal of compiler optimization research is to generate code that is efficient enough to
encourage the use of high-level language constructs. In other words, good programming practice
should be rewarded with fast execution time.
(This research has been supported by the National Science Foundation under grant CCR 88-09615,
by the IBM Corporation, and by Intel Scientific Computers.)
The use of subprograms is a prime example of good programming practice that requires compiler
support for efficiency. Unfortunately, calls to subprograms inhibit optimization in most programming
support systems, especially those designed to support parallel programming in Fortran. In
the absence of better information, compilers must assume that any two calls can read and write the
same memory locations, making parallel execution nondeterministic. This limitation particularly
discourages calls in loops, where most compilers look for parallelism.
Traditional interprocedural analysis can help in only a few cases. Consider the following loop:
      DO 100 I = 1, N
         CALL SOURCE(A, I)
  100 CONTINUE
If SOURCE only modifies locations in the Ith column of A, then parallel execution of the loop is
deterministic. Classical interprocedural analysis only discovers which variables are used and which
are defined as side effects of procedure calls. We must determine the subarrays that are accessed
in order to safely exploit the parallelism.
In an earlier paper, Callahan and Kennedy proposed a method called regular section analysis
for tracking interprocedural side-effects. Regular sections describe side effects to common sub-structures
of arrays such as elements, rows, columns and diagonals [1, 2]. This paper describes
an implementation of regular section analysis in the Rice Parallel Fortran Converter (PFC) [3], an
automatic parallelization system that also computes dependences for the ParaScope programming
environment[4]. The overriding concern in the implementation is that it be efficient enough to be
incorporated in a practical compilation system.
Algorithm 1 summarizes the steps of the analysis, which is integrated with the three-phase
interprocedural analysis and optimization structure of PFC [5, 6]. Regular section analysis added
less than 8000 lines to PFC, a roughly 150,000-line PL/I program which runs under IBM VM/CMS.
The remainder of the paper is organized as follows. Section 2 compares various methods for
representing side effects to arrays. Section 3 gives additional detail on the exact variety of bounded
regular sections implemented. Sections 4 and 5 describe the construction of local sections and their
propagation, respectively. Section 6 examines the performance of regular section analysis on two
benchmarks: the Linpack library of linear algebra subroutines and the Rice Compiler Evaluation
Program Suite (Riceps), a set of complete application codes from a variety of scientific disciplines. Sections 7 and 8 suggest areas for future research and give our conclusions.

Local Analysis:
  for each procedure
    for each array (formal parameter, global, or static)
      save section describing shape
      for each reference
        build ranges for subscripts
        merge resulting section with summary MOD or USE section
      save summary sections
    for each call site
      for each array actual parameter
        save section describing passed location(s)
      for each scalar (actual parameter or global)
        save range for passed value

Interprocedural Propagation:
  solve other interprocedural problems
    call graph construction
    classical MOD and USE summary
    constant propagation
  mark section subscripts and scalars invalidated by modifications as ?
  iterating over the call sites
    translate summary sections into call context
    merge translated sections into caller's summary

Dependence Analysis:
  for each procedure
    for each call site
      for each summary section
        simulate a DO loop running through the elements of the section
        test for dependences (Banerjee's, GCD)

Algorithm 1: Overview of Regular Section Analysis
2 Array Side Effects
A simple way to make dependence testing more precise around a call site is to perform inline
expansion, replacing the called procedure with its body [7]. This precisely represents the effects
of the procedure as a sequence of ordinary statements, which are readily understood by existing
dependence analyzers. However, even if the whole program becomes no larger, the loop nest which
contained the call may grow dramatically, causing a time and space explosion due to the non-linearity
of array dependence analysis [8].
To gain some of the benefits of inline expansion without its drawbacks, we must find another
representation for the effects of the called procedure. For dependence analysis, we are interested in
the memory locations modified or used by a procedure. Given a call to procedure p at statement S1 and an array global variable or parameter A, we wish to compute:
- the set M_A(S1) of locations in A that may be modified via p called at S1, and
- the set U_A(S1) of locations in A that may be used via p called at S1.
We need comparable sets for simple statements as well. We can then test for dependence by intersecting sets. For example, there exists a true dependence from a statement S1 to a following statement S2 based on an array A only if M_A(S1) ∩ U_A(S2) ≠ ∅.
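As a toy illustration only (not taken from the paper), the intersection test can be pictured with explicit Python sets of array locations; the analysis described below instead intersects regular sections symbolically, using standard dependence tests.

def true_dependence(mod_s1, use_s2):
    # a true (flow) dependence S1 -> S2 on array A exists only if some location
    # that S1 may modify is a location that S2 may use
    return bool(set(mod_s1) & set(use_s2))

# e.g. a call at S1 that may write column 3 of a 10x10 array A, then a read of A(5,3) at S2:
# true_dependence({(i, 3) for i in range(1, 11)}, {(5, 3)})  evaluates to True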
Several representations have been proposed for representing interprocedural array access sets.
The contrived example in Figure 1 shows the different patterns that they can represent precisely.
[Figure 1: Summarizing the references A[1,2], A[4,8], and A[10,6] under each representation: classical summary, Triolet regions, regular sections (bounds only, and bounds with strides), and DAD/simple section.]

Evaluating these methods involves examining the complexity and precision of:
- representing the sets M_A and U_A,
- merging descriptors to summarize multiple accesses (we call this the meet operation, because most descriptors may be viewed as forming a lattice),
- testing two descriptors for intersection (dependence testing), and
- translating descriptors at call sites (especially when there are array reshapes).
Handling of recursion turns out not to be an issue. Iterative techniques can guarantee convergence
to a fixed point solution using Cousot's technique of widening operators [9, 10]. Li and
Yew proposed a preparatory analysis of recursive programs that guarantees termination in three
iterations [11, 12]. Either of these methods may be adapted for regular sections.
2.1 True Summaries
True summary methods use descriptors whose size is largely independent of the number of references
being summarized. This may make the descriptors and their operations more complicated, but
limits the expense of translating descriptors during interprocedural propagation and intersecting
them during dependence analysis.
Classical Methods The classical methods of interprocedural summary dataflow analysis compute
mod and use sets indicating which parameters and global variables may be modified or used
in the procedure [13, 14, 15]. Such summary information costs only two bits per variable. Meet
and intersection may be implemented using single-bit or bit-vector logical operations. Also, there
exist algorithms that compute complete solutions, in which the number of meets is linear in the
number of procedures and call sites in the program, even when recursion is permitted [16].
Unfortunately, our experiences with PFC and Ptool indicate that this summary information is
too coarse for dependence testing and the effective detection of parallelism [1]. The problem is that
the only access sets representable in this method are "the whole array" and "none of the array"
(see Figure 1). Such coarse information limits the detection of data decomposition, an important
source of parallelism, in which different iterations of a loop work on distinct subsections of a given
array.
Triolet Regions Triolet, Irigoin and Feautrier proposed to calculate linear inequalities bounding
the set of array locations affected by a procedure call [17, 18]. This representation and its intersection
operation are precise for convex regions. Other patterns, such as array accesses with non-unit
stride and non-convex results of meet operations, are given convex approximations.
Operations on these regions are expensive; the meet operation requires finding the convex hull
of the combined set of inequalities and intersection uses a potentially exponential linear inequality
solver [19]. A succession of meet operations can also produce complicated regions with potentially
as many inequalities as the number of primitive accesses merged together. Translation at call sites
is precise only when the formal parameter array in the called procedure maps to a (sub)array of
the same shape in the caller. Otherwise, the whole actual parameter array is assumed accessed by
the call. The region method ranks high in precision, but is too expensive because of its complex
representation.
2.2 Reference Lists
Some proposed methods do not summarize, but represent each reference separately. Descriptors
are then lists of references, the meet operation is list concatenation (possibly with a check for
duplicates), and translation and intersection are just the repeated application of the corresponding
operations on simple references. However, this has two significant disadvantages:
ffl translation of a descriptor requires time proportional to the number of references, and
ffl intersection of descriptors requires time quadratic in the number of references.
Reference list methods are simple and precise, but are asymptotically as expensive as in-line
expansion.
Linearization Burke and Cytron proposed representing each multidimensional array reference
by linearizing its subscript expressions to a one-dimensional address expression. Their method also
retains bounds information for loop induction variables occurring in the expressions [20]. They
describe two ways of implementing the meet operation. One involves merely keeping a list of the
individual address expressions. The other constructs a composite expression that can be polynomial
in the loop induction variables. The disadvantages of the first method are described above. The
second method appears complicated and has yet to be rigorously described. Linearization in its pure
form is ill-suited to summarization, but might be a useful extension to a true summary technique
because of its ability to handle arbitrary reshapes.
Atom Images Li and Yew extended Parafrase to compute sets of atom images describing the side
effects of procedures [21, 11]. Like the original version of regular sections described in Callahan's
thesis [2], these record subscript expressions that are linear in loop induction variables along with
bounds on the induction variables. Any reference with linear subscript expressions in a triangular
iteration space can be precisely represented, and they keep a separate atom image for each reference.
The expense of translating and intersecting lists of atom images is too high a price to pay for
their precision. Converting atom images to a summary method would produce something similar
to the regular sections described below.
2.3 Summary Sections
The precise methods described above are expensive because they allow arbitrarily large representations
of a procedure's access sets. The extra information may not be useful in practice; simple
array access patterns are probably more common than others. To avoid expensive intersection and
translation operations, descriptor size should be independent of the number of references summa-
rized. Operations on descriptors should be linear or, at worst, quadratic in the rank of the array.
Researchers at Rice have defined several variants of regular sections to represent common access
patterns while satisfying these constraints [2, 1, 22, 23].
Original Regular Sections Callahan's thesis proposed two regular section frameworks. The
first, resembling Li and Yew's atom images, he dismissed due to the difficulty of devising efficient
standardization and meet operations [2].
Restricted Regular Sections. The second framework, restricted regular sections [2, 1], is limited
to access patterns in which each subscript is
- a procedure-invariant expression (with constants and procedure inputs),
- unknown (and assumed to vary over the entire range of the dimension), or
- unknown but diagonal with one or more other subscripts.
The restricted sections have efficient descriptors: their size is linear in the number of subscripts,
their meet operation quadratic (because of the diagonals), and their intersection operation linear.
However, they lose too much precision by omitting bounds information. While we originally thought
that these limitations were necessary for efficient handling of recursive programs, Li and Yew have
adapted iterative techniques to work with more general descriptors [12].
Bounded Regular Sections Anticipating that restricted regular sections would not be precise
enough for effective parallelization, Callahan and Kennedy proposed an implementation of regular
sections with bounds. That project is the subject of this paper. The regular sections implemented
include bounds and stride information, but omit diagonal constraints. The resulting analysis is
therefore less precise in the representation of convex regions than Triolet regions or the Data Access
Descriptors described below. However, this is the first interprocedural summary implementation
with stride information, which provides increased precision for non-convex regions.
The size of bounded regular section descriptors and the time required for the meet operation are
both linear in the number of subscripts. Intersection is implemented using standard dependence
tests, which also take time proportional to the number of subscripts. (This analysis ignores the greatest common divisor computation used in merging and intersecting sections with strides; it can take time proportional to the values of the strides.)
Data Access Descriptors Concurrently with our implementation, Balasundaram and Kennedy
developed Data Access Descriptors (DADs) as a general technique for describing data access [22,
23, 24]. DADs represent information about both the shapes of array accesses and their traversal
for our comparison we are interested only in the shapes. The simple section part of a DAD
represents a convex region similar to those of Triolet et al., except that boundaries are constrained
to be parallel to one coordinate axis or at a 45 ffi angle to two axes. Stride information is represented
in another part of the DAD.
Data Access Descriptors are probably the most precise summary method that can be implemented
with reasonable efficiency. They can represent the most likely rectangular, diagonal, trian-
gular, and trapezoidal accesses. In size and in time required for meet and intersection they have
complexity quadratic in the number of subscripts (which is reasonable given that most arrays have
few subscripts).
The bounded sections implemented here are both less expensive and less precise than DADs.
Our implementation can be extended to compute DADs if the additional precision proves useful.
3 Bounded Sections and Ranges
Bounded regular sections comprise the same set of rectangular subarrays that can be written using
triplet notation in the proposed Fortran 90 standard [25]. They can represent sparse regions such
as stripes and grids and dense regions such as columns, rows, and blocks.
[Figure 2: Lattice for regular section subscripts; its levels include invariant expressions, ranges of size 2, ranges of size 3, finite ranges, and the unknown element ?.]
3.1 Representation
The descriptors for bounded regular sections are vectors of elements from the subscript lattice in Figure 2. Lattice elements include:
- invariant expressions, containing only constants and symbols representing the values of parameters and global variables on entry to the procedure;
- ranges, giving invariant expressions for the lower bound, upper bound, and stride of a variant subscript; and
- ?, indicating no knowledge of the subscript value.
While ranges may be constructed through a sequence of meet operations, the more common case
is that they are read directly from the bounds of a loop induction variable used in a subscript.
Since no constraints between subscripts are maintained, merging two regular sections for an
array of rank d requires only d independent invocations of the subscript meet operation. We test
for intersection of two sections with a single invocation of standard d-dimensional dependence tests.
Translation of a formal parameter section to one for an actual parameter is also an O(d) operation
(where d is the larger of the two ranks).
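To make the descriptor structure concrete, here is an illustrative Python sketch (assumed names, integer bounds only; the actual implementation works on symbolic expressions inside PFC) of a bounded regular section as a vector of subscript lattice elements, merged element-wise in O(d) for an array of rank d:

from dataclasses import dataclass
from math import gcd
from typing import Tuple, Union

UNKNOWN = None                                       # the "?" lattice element
Subscript = Union[int, Tuple[int, int, int], None]   # invariant value, (lower, upper, stride), or UNKNOWN

def merge_subscript(a, b):
    # lowest lower bound, highest upper bound, stride corrected with a gcd
    if a == b:
        return a
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    la, ua, sa = a if isinstance(a, tuple) else (a, a, 0)   # expression as degenerate range
    lb, ub, sb = b if isinstance(b, tuple) else (b, b, 0)
    return (min(la, lb), max(ua, ub), gcd(gcd(sa, sb), abs(la - lb)))

@dataclass(frozen=True)
class RegularSection:
    subs: Tuple[Subscript, ...]                      # one lattice element per array dimension

    def merge(self, other):
        # d independent subscript merges; no constraints between subscripts are kept
        return RegularSection(tuple(merge_subscript(x, y)
                                    for x, y in zip(self.subs, other.subs)))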
3.2 Operations on Ranges
Ranges are typically built to represent the values of loop induction variables, such as I in the
following loop.
      DO I = l, u, s
         ... A(2*I+1) ...
      ENDDO
We represent the value of I as [l : u : s]. While l and u are often referred to as the lower and
upper bound, respectively, their roles are reversed if s is negative. We can produce a standard lower-
to-upper bound form if we know l ≤ u or s ≥ 1; this operation is described in detail in Algorithm 2.
Standardization may cause loss of information; therefore, we postpone standardization until it is
required by some operation, such as merging two sections.
begin
  if diff and s are both constant then
    if sign(diff) ≠ sign(s) then return(?) /* empty range */
    ... direction ... (abs(diff) mod abs(s)) ...
  else if diff is constant then direction = sign(diff)
  else if s is constant then direction = sign(s)
  else return(?)
  select direction
    when ?
    ... perfect then return([u ...
  end select
end.

Algorithm 2: Standardizing a Range to Lower-Bound-First Form
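Since the listing of Algorithm 2 survives only in fragments here, the following Python sketch gives one plausible reading of the standardization step for constant bounds and strides; the names are assumptions, not the authors' code.

UNKNOWN = None   # the "?" lattice element

def standardize(l, u, s):
    """Return an equivalent (lower, upper, stride) with stride > 0, or UNKNOWN."""
    if s == 0:
        return UNKNOWN
    if all(isinstance(x, int) for x in (l, u, s)):
        diff = u - l
        if diff != 0 and (diff > 0) != (s > 0):
            return UNKNOWN                     # empty range: it runs the wrong way
        # trim the far bound so it is actually reached by the stride
        far = l + (abs(diff) // abs(s)) * abs(s) * (1 if s > 0 else -1)
        return (l, far, abs(s)) if s > 0 else (far, l, abs(s))
    if isinstance(s, int):                     # symbolic bounds but constant stride
        return (l, u, s) if s > 0 else (u, l, -s)
    return UNKNOWN                             # direction unknown: give up

# standardize(10, 1, -3) -> (1, 10, 3);  standardize(1, 10, 4) -> (1, 9, 4)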
Expressions in ranges are converted to ranges; for example, 2*I+1 in the above loop is represented
as [(2l + 1) : (2u + 1) : 2s]. Only invariant expressions are accurately added to or
multiplied with a range; Algorithm 3 constructs approximations for sums of ranges.
Ranges are merged by finding the lowest lower bound and the highest upper bound, then
correcting the stride. An expression is merged with a range or another expression by treating it as
a range with a lower bound equal to its upper bound. Algorithm 4 thus computes the same result in either case.
The most interesting subscript expressions are those containing references to scalar parameters
and global variables. We represent such symbolic expressions as global value numbers so that they
may be tested for equality by the standardization and merge operations.
4 Local Analysis
For each procedure, we construct symbolic subscript expressions and accumulate initial regular
sections with no knowledge of interprocedural effects. The precision of our analysis depends on
recording questions about side effects, but not answering them until the results of other interprocedural analyses are available.

function build range(e)
begin
  if e is a leaf expression (constant, formal, or global value; or ?) then
    return(e)
  for each subexpression s of e
    replace s with build range(s)
  select form of e
    when [l0 : u0 : s0] + [l1 : u1 : s1] then
      return([(l0 + l1) : ...])
    when a + [l : u : s] then ...
    when a * [l : u : s] then ...
    otherwise return(?)
  end select
end.

Algorithm 3: Moving Ranges to the Top Level of an Expression
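The listing of Algorithm 3 is likewise fragmentary; the helpers below (an illustrative sketch, not the authors' code) show the arithmetic it needs once subexpressions have been rewritten: adding two ranges, and adding or multiplying a range by an invariant value. The recursive walk over the expression tree simply applies these at each operator node.

from math import gcd

UNKNOWN = None

def is_range(e):
    return isinstance(e, tuple)                # (lower, upper, stride), stride > 0

def add(a, b):
    if a is UNKNOWN or b is UNKNOWN:
        return UNKNOWN
    if is_range(a) and is_range(b):            # sum of two ranges: approximate the stride
        return (a[0] + b[0], a[1] + b[1], gcd(a[2], b[2]))
    if is_range(b):
        a, b = b, a
    if is_range(a):                            # range + invariant value: shift the bounds
        return (a[0] + b, a[1] + b, a[2])
    return a + b                               # two invariant values

def mul(a, b):
    if a is UNKNOWN or b is UNKNOWN or (is_range(a) and is_range(b)):
        return UNKNOWN                         # product of two ranges: give up
    if is_range(b):
        a, b = b, a
    if not is_range(a):
        return a * b
    lo, hi, st = (x * b for x in a)            # invariant factor scales bounds and stride
    return (lo, hi, st) if b > 0 else ((hi, lo, -st) if b < 0 else 0)

# e.g. mul((1, 10, 1), 2) -> (2, 20, 2); add((2, 20, 2), 1) -> (3, 21, 2), matching 2*I+1 above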
4.1 Symbolic Analysis
Constructing regular sections requires the calculation of symbolic expressions for variables used in
subscripts. While there are many published algorithms for performing symbolic analysis and global
value numbering [26, 27, 28], their preliminary transformations and complexity make them difficult
to integrate into PFC. Our implementation builds global value numbers with the help of PFC's
existing dataflow analysis machinery.
Leaf value numbers are constants and the global and parameter values available on procedure
entry. We build value numbers for expressions by recursively obtaining the value numbers for
subexpressions and reaching definitions. Value numbers reaching the same reference along different
def-use edges are merged. If either the merging or the occurrence of an unknown operator creates
an unknown (?) value, the whole expression is lowered to ?.
Induction variables are recognized by their defining loop headers and replaced with the inductive
range. (Auxiliary induction variables are currently not identified.) For example, consider the
following code fragment.
      DIMENSION A(N)
      DO I = ...
         ... A(...) ...
      ENDDO
      RETURN
      END

function merge(a, b)
begin
  if ...
  if ...
  if a is a range then let [la : ua : sa] = a
  else let [la : ua : sa] = [a : a : ...]   /* treat an expression as a degenerate range */
  if b is a range then let [lb : ub : sb] = b
  else let [lb : ub : sb] = [b : b : ...]
  /* ... min, max, and abs can return ? */
  /* ... returns a */
  if l ...
  else if s ...
  else return([l ...])
end.

Algorithm 4: Merging Expressions and Ranges
Dataflow analysis constructs def-use edges from the subroutine entry to the uses of N and M, and
from the DO loop to the use of I. It is therefore simple to compute the subscript in A's regular
section: M [1 : ...], which is converted to the range ... (the names M and N are actually
replaced by their formal parameter indices). Note that expressions that are nonlinear during local
analysis may become linear in later phases, especially after constant propagation.
4.2 Avoiding Compilation Dependences
To construct accurate value numbers, we require knowledge about the effects of call sites on scalar
variables. However, using interprocedural analysis to determine these effects can be costly.
A programming support system using interprocedural analysis must examine each procedure at least twice: once when gathering information to be propagated between procedures, and again when using the results of this propagation in dependence analysis and/or transformations. (This is not strictly true; a system computing only summary information (use, mod) or context information (alias) can make do with one pass. Both the PFC and IR^n/ParaScope systems perform summary and context analysis, as well as constant propagation, and therefore require at least two passes [30, 5, 6].) By
precomputing the local information, we can construct an interprocedural propagation phase which
iterates over the call graph without additional direct examination of any procedure.
To achieve this minimal number of passes, all interprocedural analyses must gather local information
in one pass, without the benefit of each other's interprocedural solutions. However, to build
precise local regular sections, we need information about the side effects of calls on scalars used
in subscripts. In the following code fragment, we must assume that M is modified to an unknown
value unless proven otherwise:
      DIMENSION A(N)
      CALL CLOBBER(M)
      A(M) = ...
      RETURN
      END
To achieve precision without adding a separate local analysis phase for regular sections, we
build regular section subscripts as if side effects did not occur, while annotating each subscript
expression with its hazards, or side effects that would invalidate it. We thus record that A(M) is
modified, with the sole parameter of CLOBBER as a hazard on M. During the interprocedural phase,
after producing the classical scalar side effect solution, but before propagating regular sections, we
check to see if CLOBBER may change M. If so, we change S1's array side effect to A(?). A similar
technique has proven successful for interprocedural constant propagation in PFC [31, 6].
Hazards must be recorded with each scalar expression saved for use in regular section analysis:
scalar actual parameters and globals at call sites as well as array subscripts. When we merge two
expressions or ranges, we take the union of their hazard sets.
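That bookkeeping can be sketched as follows (hypothetical names; the real system attaches hazards to PFC's value numbers): each saved expression carries its hazard set, merging unions the sets, and after classical MOD analysis a hazardous expression is lowered to "?".

from dataclasses import dataclass

UNKNOWN = None

@dataclass(frozen=True)
class HazardedExpr:
    expr: object                          # value number / symbolic expression, or UNKNOWN
    hazards: frozenset = frozenset()      # {(call_site, variable), ...} that could clobber expr

def merge(a, b):
    value = a.expr if a.expr == b.expr else UNKNOWN    # simplified value merge
    return HazardedExpr(value, a.hazards | b.hazards)  # union of hazard sets

def resolve(e, mod):
    # mod maps each call site to the set of scalars it may modify (classical MOD solution)
    if any(var in mod.get(site, set()) for site, var in e.hazards):
        return HazardedExpr(UNKNOWN, frozenset())      # a hazard fired: drop to "?"
    return e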
4.3 Building Summary Regular Sections
With the above machinery in place, the use and mod regular sections for the local effects of a
procedure are constructed easily. In one pass through the procedure, we examine each reference to
a formal parameter, global, or static array. The symbolic analyzer provides value numbers for the
subscripts on demand; the resulting vector is a regular section. After the section for an individual
reference is constructed, it is immediately merged with the appropriate cumulative section(s), then
discarded.
5 Interprocedural Propagation
Regular sections for formal parameters are translated into sections for actual parameters as we
traverse edges in the call graph. The translated sections are merged with the summary regular sections
of the caller, requiring another translation and propagation step if this changes the summary.
To extend our implementation to recursive programs and have it terminate, we must bound the
number of times a change occurs.
5.1 Translation into a Call Context
If we were analyzing Pascal arrays, mapping the referenced section of a formal parameter array to
one for the corresponding actual parameter would be simple. We would only need to replace formal
parameters in subscript values of the formal section with their corresponding actual parameter
values, then copy the resulting subscript values into the new section. However, Fortran provides
no guarantee that formal parameter arrays will have the same shape as their actual parameters,
nor even that arrays in common blocks will be declared to have the same shape in every procedure.
Therefore, to describe the effects of a called procedure for the caller, we must translate the referenced
sections according to the way the arrays are reshaped.
The easiest translation method would be to linearize the subscripts for the referenced section
of a formal parameter, adding the offset of the passed location of the actual parameter [20].
The resulting section would give referenced locations of the actual as if it were a one-dimensional
array. However, if some subscripts of the original section are ranges or non-linear expressions,
linearization contaminates the other subscripts, greatly reducing the precision of dependence anal-
ysis. For this reason, we forego linearization and translate significantly reshaped dimensions as ?.
Algorithm 5 shows one method for translating a summary section for a formal parameter F into a
section for its corresponding actual parameter A. Translation proceeds from left to right through the dimensions, and is precise until a dimension is encountered where the formal and actual parameter are inconsistent (having different sizes or non-zero offset). The first inconsistent dimension is also translated precisely if it is the last dimension of F and the referenced section subscript value(s) fit in the bounds for A. Delinearization, which is not implemented, may be used to recognize that a reference to F with a column stride the same as the column size of A corresponds to a row reference in A.

function translate(bounds F, ref F, bounds A, pass A)
begin
  for each dimension i of A
    if ref F[i] = ? then ...
      if not consistent then ref A[i] = ?
    else if i > rank(F) then
      if consistent then ref A[i] = ...
      else ref A[i] = ?
    else
      replace scalar formal parameters in bounds F and ref F
        with their corresponding actual parameters
      ... bounds ... lo(bounds F[i]) ... pass A[i] ...
      ... lo(bounds F[i]) ... pass A[i] ...
      if consistent then ref A[i] = ...
      else if stride(ref F[i]) ... lo(bounds A[i]) ...
           /* delinearization is possible */
           (... fits in bounds A[i]) or assume fit) then
        ref A[i] = ...
      else ref A[i] = ?
  end for
end.

Algorithm 5: Translating a Summary Section
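Because the listing of Algorithm 5 is badly garbled in this copy, the following much-simplified Python sketch (assumed names; passed-location offsets, the last-dimension fit test, and delinearization are all omitted) conveys only the left-to-right "precise until the first reshaped dimension" behavior described above.

UNKNOWN = None   # the "?" lattice element

def translate(ref_f, extents_f, extents_a):
    # ref_f: per-dimension descriptors of the formal parameter F
    # extents_f, extents_a: declared dimension sizes of F and of the actual A
    ref_a, consistent = [], True
    for i in range(len(extents_a)):
        same_shape = (i < len(extents_f) and i < len(ref_f)
                      and extents_f[i] == extents_a[i])
        if consistent and same_shape:
            ref_a.append(ref_f[i])         # shapes agree so far: copy the descriptor
        else:
            consistent = False
            ref_a.append(UNKNOWN)          # reshaped dimension: give up on precision
    return ref_a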
5.2 Treatment of Recursion
The current implementation handles only non-recursive Fortran. Therefore, it is sufficient to proceed
in reverse invocation order on the call graph, translating sections up from leaf procedures
to their callers. The final summary regular sections are built in order, so that incomplete regular
sections need never be translated into a call site. However, the proposed Fortran 90 standard allows
recursion [25], and we plan an extension or re-implementation that will handle it. Unfortunately, a
straightforward iterative approach to the propagation of regular sections will not terminate, since
the lattice has unbounded depth.
Li and Yew [11] and Cooper and Kennedy [16] describe approaches for propagating subarrays
that are efficient regardless of the depth of the lattice. However, it may be more convenient to
implement a simple iterative technique while simulating a bounded-depth lattice. If we maintain
a counter with the summary regular section for each array and procedure, then we can limit the
number of times we allow the section to become larger (lower in the lattice) before going to ?. The
best way to do this is by keeping one small counter (e.g., two bits) per subscript. Variant subscripts
will then go quickly to ?, leaving precise subscripts unaffected. If we limit each subscript to being
lowered in the subscript lattice k times, then an array of rank d will have an effective lattice depth
of kd + 1.
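A sketch of that counter trick, with illustrative names and an arbitrary limit K (not taken from the implementation):

K = 3            # allow each subscript to be lowered at most K times (illustrative value)
UNKNOWN = None

def lower_with_counter(old_sub, new_sub, count):
    # returns the subscript to store and the updated per-subscript counter
    if new_sub == old_sub:
        return old_sub, count              # no lattice movement; counter unchanged
    if count + 1 >= K:
        return UNKNOWN, count + 1          # changed too often: force it to "?"
    return new_sub, count + 1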
Since each summary regular section is lowered at most O(kd) times, each associated call site
is affected at most O(kdv) times (each time involving an O(d) merge), where v is the number of
referenced global and parameter variables. In the worst case, we then require O(kd²ve) subscript
merge and translation operations, where e is the number of edges in the call graph. This technique
allows us to use a lattice with bounds information while keeping time complexity comparable to
that obtained with the restricted regular section lattice.
6 Experimental Results
The precision, efficiency, and utility of regular section analysis must be demonstrated by experiments
on real programs. Our current candidates for "real programs" are the Linpack library of
linear algebra subroutines [32], the Rice Compiler Evaluation Program Suite, and the Perfect Club
benchmarks [33]. We ran the programs through regular section analysis and dependence analysis
in PFC, then examined the resulting dependence graphs by hand and in the ParaScope editor, an
interactive dependence browser and program transformer [4].
LINPACK Analysis of Linpack provides a basis for comparison with other methods for analyzing
interprocedural array side effects. Both Li and Yew [21] and Triolet [18] found several parallel
calls in Linpack using their implementations in the University of Illinois translator, Parafrase.
Linpack proves that useful numerical codes can be written in the modular programming style for
which parallel calls can be detected.
RiCEPS The Rice Compiler Evaluation Program Suite is a collection of 10 complete applications
codes from a broad range of scientific disciplines. Our colleagues at Rice have already run several
experiments on Riceps. Porterfield modeled cache performance using an adapted version of PFC
[34]. Goff, Kennedy and Tseng studied the performance of dependence tests on Riceps and other
benchmarks [35]. Some Riceps and Riceps candidate codes have also been examined in a study on
the utility of inline expansion of procedure calls [8]. The six programs studied here are two Riceps
codes (linpackd and track) and four codes from the inlining study.
Perfect Club Benchmarks This suite was originally collected for benchmarking the performance
of supercomputers on complete applications. While we hope to test the performance of our
implementation on these programs, a delay in receiving them prevented us from obtaining more
than very preliminary results for this paper.
6.1 Precision
The precision of regular sections, or their correspondence to the true access sets, is largely a function
of the programming style being analyzed. Linpack is written in a style which uses many calls to
the BLAS (basic linear algebra subroutines), whose true access sets are precisely regular sections.
We did not determine the true access sets for the subroutines in Riceps, but of the six programs
analyzed, only dogleg and linpackd, which actually call Linpack, exhibited the Linpack coding
style.
While there exist regular sections to precisely describe the effects of the BLAS, our local analysis
was unable to construct them under complicated control flow. With changes to the BLAS to
eliminate unrolled loops and the conditional computation of values used in subscript expressions,
our implementation was able to build minimal regular sections that precisely represented the true
access sets. The modified DSCAL, for example, looks as follows:
      DOUBLE PRECISION DA, DX(*)
      IF (N .LE. 0) RETURN
      DO I = ...
         ... DX(...) ...
      ENDDO
      RETURN
      END
Obtaining precise symbolic information is a problem in all methods for describing array side
effects. Triolet made similar changes to the BLAS; Li and Yew avoided them by first performing
interprocedural constant propagation. The fundamental nature of this problem indicates the desirability
of a clearer Fortran programming style or more sophisticated handling of control flow (such
as that described in Section 7).
6.2 Efficiency
We measured the total time taken by PFC to analyze the six Riceps programs. 3 Parsing, local
analysis, interprocedural propagation, and dependence analysis were all included in the execution
times.
Table 1 compares the analysis time required using classical interprocedural summary analysis alone ("IP only") with that using summary analysis and regular section analysis combined ("IP+RS"). 4
  program     Lines   Procs   IP only   IP +RS   % Change
  efie         1254
  euler        1113      13       117      138        +15
  vortex        540      19
  track        1711      34       191      225        +15
  dogleg       4199      48       272      377        +28
  linpackd                          28       44        +36
  total        9172     142       882     1103        +25

Table 1: Analysis times in seconds (PFC on an IBM 3081D)
3 While we were able to run most of the Perfect programs through PFC, we have not yet obtained reliable timings on
the recently-upgraded IBM system at Rice.
4 We do not present times for the dependence analysis with no interprocedural information because it performs less
analysis on loops with call sites. Discounting this advantage, the time taken for classical summary analysis seems to
be less than 10 percent.
The most time-consuming part of our added code is the local symbolic analysis for subscript
values, which includes an invocation of dataflow analysis. More symbolic analysis would improve
the practicality of the entire method. Overall, the additional analysis time is comparable to that
required to analyze programs after heuristically-determined inline expansion in Cooper, Hall and
Torczon's study [8].
We have not seen published execution times for the array side effect analyses implemented in
Parafrase by Triolet and by Li and Yew, except that Li and Yew state that their method runs
2.6 times faster than Triolet's [21]. Both experiments were run only on Linpack; it would be
particularly interesting to know how their methods would perform on complete applications.
6.3 Utility
We chose three measures of utility:
ffl reduced numbers of dependences and dependent references,
ffl increased numbers of calls in parallel loops, and
reduced parallel execution time.
Reduced Dependence Table 2 compares the dependence graphs produced using classical interprocedural
summary analysis alone ("IP") and summary analysis plus regular section analysis
All Array Dep. on Calls in Loops
Dependences loop carried loop independent
source IP RS %
efie 12338 12338 177 177 81 81
euler
vortex 1966 1966 220 220 73 73
track 4737 4725 0.25 68 67 1.5 27 26 3.7
dogleg
linpackd 496 399 19.6 191 116 39.3 67 45 32.8
Riceps 23213 22921 1.25 952 818 14.1 358 314 12.3
Linpack 12336 11035 10.5 3071 2064 32.8 1348 1054 21.8
Table
2: Effects of Regular Section Analysis on Dependences
("RS"). 5
Linpack was analyzed without interprocedural constant propagation, since library routines may
be called with varying array sizes. The first set of three columns gives the sizes of the dependence
graphs produced by PFC, counting all true, anti and output dependence edges on scalar and array
references in DO loops (including those references not in call sites). The other sets of columns count
only those dependences incident on array references in call sites in loops, with separate counts for
loop-carried and loop-independent dependences. Preliminary results for eight of the 13 Perfect
benchmarks indicate a reduction of 0.6 percent in the total size of the dependence graphs. (Sections are not yet propagated for arrays in common blocks; this deficiency probably resulted in more dependences for the larger programs.)
Parallelized Calls Table 3 examines the number of calls in Linpack which were parallelized
after summary interprocedural analysis alone ("IP"), after Li and Yew's analysis [21], and after
regular section analysis ("RS"). (Triolet's results from Parafrase resembled Li and Yew's.) Most
(17) of these call sites were parallelized in ParaScope, based on PFC's dependence graph, with no
transformations being necessary. The eight parallel call sites detected with summary interprocedural
analysis alone were apparent in ParaScope, but exploiting the parallelism requires a variant of
statement splitting that is not yet supported. Starred entries (*) indicate parallel calls which were
precisely summarized by regular section analysis, but which were not detected as parallel due to a
deficiency in PFC's symbolic dependence test for triangular loops. One call in QRDC was mistakenly
parallelized by Parafrase [36].
These results indicate, at least for Linpack, that there is no benefit to the generality of Triolet's
and Li and Yew's methods. Regular section analysis obtains exactly the same precision, with
a different number of loops parallelized only because of differences in dependence analysis and
transformations.
  routine         calls in     Parallel Calls
  name            DO loops    IP   Li-Yew   RS
  ΔGBCO
  ΔGECO
  ΔPBCO
  ΔPOCO
  ΔPPCO
  ΔTRCO
  ΔGEDI
  ΔPODI
  ΔQRDC               9     5      4
  ΔSIDI
  ΔSIFA
  ΔSVDC
  ΔTRDI
  other

Table 3: Parallelization of Linpack

Improved Execution Time Two calls in the Riceps programs were parallelized: one in dogleg and one in linpackd. Both were the major computational loops (linpackd's in DGEFA, dogleg's in DQRDC). (In the inlining study at Rice, none of the commercial compilers was able to detect the parallel call in dogleg even after inlining, presumably due to complicated control flow [8].) Running linpackd on 19 processors with the one call parallelized was enough to speed its
execution by a factor of five over sequential execution on the Sequent Symmetry at Rice. Further
experiments on improvements in parallel execution time await our acquisition of more Fortran codes
written in an appropriate style.
7 Future Work
More experiments are required to fully evaluate the performance of regular section analysis on
complete applications and find new areas for improvement. Based on the studies conducted so far,
extensions to provide better handling of conditionals and flow-sensitive side effects seem promising.
7.1 Conditional Symbolic Analysis
Consider the following example, derived from the BLAS:
DOUBLE PRECISION DA, DX(*)
      IX = 1
      IF (INCX .LT. 0) IX = (-N+1)*INCX + 1
      DO I = 1, N
         ... DX(IX) ...
         IX = IX + INCX
      ENDDO
RETURN
END
The two computations of the initial value for IX correspond to different ranges for the subscript of DX. It turns out that these can both be represented by [1 : (N-1)*|INCX| + 1 : |INCX|]. For the merge operation to produce
this precise result requires that it have an understanding of the control conditions under which
expressions are computed.
7.2 Killed Regular Sections
We have already found programs (scalgam and euler) in which the ability to recognize and localize
temporary arrays would cut the number of dependences dramatically, allowing some calls to be
parallelized. We could recognize interprocedural temporary arrays by determining when an entire
array is guaranteed to be modified before being used in a procedure. While this is a flow-sensitive
problem, and therefore expensive to solve in all its generality, even a very limited implementation
should be able to catch the initialization of many temporaries.
The subscript lattice for killed sections is the same one used for use and mod sections; however,
since kill analysis must produce underestimates of the affected region in order to be conservative,
the lattice needs to be inverted. In addition, this approach requires an intraprocedural dependence
analysis capable of using array kill information, such as those described by Rosene [37] and by
Gross and Steenkiste [38].
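As a small illustration of why the lattice must be inverted, the sketch below (plain Python sets standing in for sections) contrasts the union-like meet used for the MOD/USE over-approximation with the intersection-like meet a kill under-approximation needs.

def merge_mod(a, b):
    # MOD/USE summaries may over-approximate: locations possibly touched on either path
    return a | b

def merge_kill(a, b):
    # kill summaries must under-approximate: only locations modified on every path
    return a & b

# merge_mod({1, 2}, {2, 3}) -> {1, 2, 3};  merge_kill({1, 2}, {2, 3}) -> {2}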
8 Conclusion
Regular section analysis can be a practical addition to a production compiler. Its local analysis
and interprocedural propagation can be integrated with those for other interprocedural techniques.
The required changes to dependence analysis are trivial: the same ones needed to support Fortran
90 sections.
These experiments demonstrate that regular section analysis is an effective means of discovering
parallelism, given programs written in an appropriately modular programming style. Such a style
can benefit advanced analysis in other ways, for example, by keeping procedures small and simplifying
their internal control flow. Our techniques will not do as well on programs written in a style
that minimizes the use of procedure calls to compensate for the lack of interprocedural analysis in
other compilers. Compilers must reward the modular programming style with fast execution time
for it to take hold among the computation-intensive users of supercomputers. In the long run it
should make programs easier for both their writers and automatic analyzers to understand.
Acknowledgements
We would like to thank our colleagues on the PFC and ParaScope projects, who made this research
possible. We further thank David Callahan for his contributions to regular section analysis, and
Kathryn McKinley, Mary Hall, and the reviewers for their critiques of this paper.
--R
"Analysis of interprocedural side effects in a parallel programming environment,"
A Global Approach to Detection of Parallelism.
"PFC: A program to convert Fortran to parallel form,"
"Interactive parallel programming using the ParaScope Editor,"
"An implementation of interprocedural analysis in a vectorizing Fortran compiler,"
"Interprocedural constant propagation,"
"A catalogue of optimizing transformations,"
"An experiment with inline substitution,"
"Abstract interpretation: a unified lattice model for static analysis of programs by construction or approximation of fixpoints,"
"Semantic foundations of program analysis,"
"Interprocedural analysis and program restructuring for parallel pro- grams,"
"Program parallelization with interprocedural analysis,"
A Method for Determining the Side Effects of Procedure Calls.
"An interprocedural data flow analysis algorithm,"
"Efficient computation of flow insensitive interprocedural summary information,"
"Interprocedural side-effect analysis in linear time,"
"Direct parallelization of CALL statements,"
"Interprocedural analysis for program restructuring with Parafrase,"
"A direct parallelization of CALL statements - a review,"
"Interprocedural dependence analysis and parallelization,"
"Efficient interprocedural analysis for program parallelization and re- structuring,"
Interactive Parallelization of Numerical Scientific Programs.
"A technique for summarizing data access and its use in parallelism enhancing transformations,"
"A mechanism for keeping useful internal information in parallel programming tools: the Data Access Descriptor,"
X3J3 Subcommittee of ANSI
"Affine relationships among variables of a program,"
"Symbolic program analysis in almost-linear time,"
"Global value numbers and redundant com- putations,"
"Compiler analysis of the value ranges for variables,"
"The impact of interprocedural analysis and optimization in the IR n programming environment,"
Compilation Dependences in an Ambitious Optimizing Compiler.
Philadelphia: SIAM Publications
"Supercomputer performance evaluation and the Perfect benchmarks,"
Software Methods for Improvement of Cache Performance on Supercomputer Applications.
"Practical dependence testing,"
"Private communication,"
Incremental Dependence Analysis.
"Structured dataflow analysis for arrays and its use in an optimizing compiler,"
--TR
The impact of interprocedural analysis and optimization in the R^n programming environment
Interprocedural constant propagation
Interprocedural dependence analysis and parallelization
Direct parallelization of call statements
A global approach to detection of parallelism
Analysis of interprocedural side effects in a parallel programming environment
Interprocedural side-effect analysis in linear time
Efficient interprocedural analysis for program parallelization and restructuring
Global value numbers and redundant computations
A technique for summarizing data access and its use in parallelism enhancing transformations
A mechanism for keeping useful internal information in parallel programming tools: the data access descriptor
Structured dataflow analysis for arrays and its use in an optimizing complier
Practical dependence testing
Supercomputer performance evaluation and the Perfect Benchmarks
Efficient computation of flow insensitive interprocedural summary information
An interprocedural data flow analysis algorithm
Abstract interpretation
Interactive Parallel Programming using the ParaScope Editor
A method for determining the side effects of procedure calls.
Compilation dependences in an ambitious optimizing compiler (interprocedural, recompilation)
Interactive parallelization of numerical scientific programs
Software methods for improvement of cache performance on supercomputer applications
Incremental dependence analysis
--CTR
Peiyi Tang, Exact side effects for interprocedural dependence analysis, Proceedings of the 7th international conference on Supercomputing, p.137-146, July 19-23, 1993, Tokyo, Japan
Donald G. Morris , David K. Lowenthal, Accurate data redistribution cost estimation in software distributed shared memory systems, ACM SIGPLAN Notices, v.36 n.7, p.62-71, July 2001
M. Jimnez , J. M. Llabera , A. Fernndez , E. Morancho, A general algorithm for tiling the register level, Proceedings of the 12th international conference on Supercomputing, p.133-140, July 1998, Melbourne, Australia
S. Carr , K. Kennedy, Compiler blockability of numerical algorithms, Proceedings of the 1992 ACM/IEEE conference on Supercomputing, p.114-124, November 16-20, 1992, Minneapolis, Minnesota, United States
D. Brent Weatherly , David K. Lowenthal , Mario Nakazawa , Franklin Lowenthal, Dyn-MPI: Supporting MPI on Non Dedicated Clusters, Proceedings of the ACM/IEEE conference on Supercomputing, p.5, November 15-21,
Linda Burton , William Hatchett , Mari Hobkirk , Charles Powell, Using high performance GIS software to visualize data: a hands-on software demonstration, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-14, November 07-13, 1998, San Jose, CA
Helen Parke , Alisa Chapman, A proposal for preservice student technology competence, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-9, November 07-13, 1998, San Jose, CA
Compilation techniques for block-cyclic distributions, Proceedings of the 8th international conference on Supercomputing, p.392-403, July 11-15, 1994, Manchester, England
Kathryn S. McKinley, Evaluating automatic parallelization for efficient execution on shared-memory multiprocessors, Proceedings of the 8th international conference on Supercomputing, p.54-63, July 11-15, 1994, Manchester, England
Umit Rencuzogullari , Sandhya Dwardadas, Dynamic adaptation to available resources for parallel computing in an autonomous network of workstations, ACM SIGPLAN Notices, v.36 n.7, p.72-81, July 2001
Craig Chase , Kay Crowley , Joel Saltz , Anthony Reeves, Compiler and runtime support for irregularly coupled regular meshes, Proceedings of the 6th international conference on Supercomputing, p.438-446, July 19-24, 1992, Washington, D. C., United States
R. Veldema , R. F. H. Hofman , R. A. F. Bhoedjang , C. J. H. Jacobs , H. E. Bal, Source-level global optimizations for fine-grain distributed shared memory systems, ACM SIGPLAN Notices, v.36 n.7, p.83-92, July 2001
G. Agrawal , A. Sussman , J. Saltz, Compiler and runtime support for structured and block structured applications, Proceedings of the 1993 ACM/IEEE conference on Supercomputing, p.578-587, December 1993, Portland, Oregon, United States
Honghui Lu , Alan L. Cox , Sandhya Dwarkadas , Ramakrishnan Rajamony , Willy Zwaenepoel, Compiler and software distributed shared memory support for irregular applications, ACM SIGPLAN Notices, v.32 n.7, p.48-56, July 1997
Tor E. Jeremiassen , Susan J. Eggers, Static Analysis of Barrier Synchronization in Explicitly Parallel Programs, Proceedings of the IFIP WG10.3 Working Conference on Parallel Architectures and Compilation Techniques, p.171-180, August 24-26, 1994
Mary W. Hall , Ken Kennedy, Efficient call graph analysis, ACM Letters on Programming Languages and Systems (LOPLAS), v.1 n.3, p.227-242, Sept. 1992
Jae Bum Lee , Chu Shik Jhon, Reducing coherence overhead of barrier synchronization in software DSMs, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-18, November 07-13, 1998, San Jose, CA
Keith D. Cooper , Ken Kennedy, Interprocedural side-effect analysis in linear time, ACM SIGPLAN Notices, v.39 n.4, April 2004
Sotiris Ioannidis , Umit Rencuzogullari , Robert Stets , Sandhya Dwarkadas, CRAUL: Compiler and run-time integration for adaptation under load, Scientific Programming, v.7 n.3-4, p.261-273, August 1999
Jay P. Hoeflinger , Yunheung Paek , Kwang Yi, Unified Interprocedural Parallelism Detection, International Journal of Parallel Programming, v.29 n.2, p.185-215, April 2001
Radu Rugina , Martin Rinard, Automatic parallelization of divide and conquer algorithms, ACM SIGPLAN Notices, v.34 n.8, p.72-83, Aug. 1999
Jaydeep Marathe , Frank Mueller , Tushar Mohan , Bronis R. de Supinski , Sally A. McKee , Andy Yoo, METRIC: tracking down inefficiencies in the memory hierarchy via binary rewriting, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, March 23-26, 2003, San Francisco, California
Sandhya Dwarkadas , Alan L. Cox , Willy Zwaenepoel, An integrated compile-time/run-time software distributed shared memory system, ACM SIGPLAN Notices, v.31 n.9, p.186-197, Sept. 1996
Junjie Gu , Zhiyuan Li , Gyungho Lee, Experience with efficient array data flow analysis for array privatization, ACM SIGPLAN Notices, v.32 n.7, p.157-167, July 1997
Mary H. Hall , Saman P. Amarasinghe , Brian R. Murphy , Shih-Wei Liao , Monica S. Lam, Detecting coarse-grain parallelism using an interprocedural parallelizing compiler, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.49-es, December 04-08, 1995, San Diego, California, United States
Satish Chandra , James R. Larus, Optimizing communication in HPF programs on fine-grain distributed shared memory, ACM SIGPLAN Notices, v.32 n.7, p.100-111, July 1997
Compiler optimizations for Fortran D on MIMD distributed-memory machines, Proceedings of the 1991 ACM/IEEE conference on Supercomputing, p.86-100, November 18-22, 1991, Albuquerque, New Mexico, United States
Dhruva R. Chakrabarti , Prithviraj Banerjee, Global optimization techniques for automatic parallelization of hybrid applications, Proceedings of the 15th international conference on Supercomputing, p.166-180, June 2001, Sorrento, Italy
Chen Ding , Ken Kennedy, Improving cache performance in dynamic applications through data and computation reorganization at run time, ACM SIGPLAN Notices, v.34 n.5, p.229-241, May 1999
Compiling Fortran D for MIMD distributed-memory machines, Communications of the ACM, v.35 n.8, p.66-80, Aug. 1992
Arun Chauhan , Ken Kennedy, Reducing and Vectorizing Procedures for Telescoping Languages, International Journal of Parallel Programming, v.30 n.4, p.291-315, August 2002
D. Brent Weatherly , David K. Lowenthal , Mario Nakazawa , Franklin Lowenthal, Dyn-MPI: supporting MPI on medium-scale, non-dedicated clusters, Journal of Parallel and Distributed Computing, v.66 n.6, p.822-838, June 2006
Saman P. Amarasinghe , Monica S. Lam, Communication optimization and code generation for distributed memory machines, ACM SIGPLAN Notices, v.28 n.6, p.126-138, June 1993
Exploiting cache affinity in software cache coherence, Proceedings of the 8th international conference on Supercomputing, p.264-273, July 11-15, 1994, Manchester, England
Jaydeep Marathe , Frank Mueller , Tushar Mohan , Sally A. Mckee , Bronis R. De Supinski , Andy Yoo, METRIC: Memory tracing via dynamic binary rewriting to identify cache inefficiencies, ACM Transactions on Programming Languages and Systems (TOPLAS), v.29 n.2, p.12-es, April 2007
Tor E. Jeremiassen , Susan J. Eggers, Reducing false sharing on shared memory multiprocessors through compile time data transformations, ACM SIGPLAN Notices, v.30 n.8, p.179-188, Aug. 1995
Yunheung Paek , Jay Hoeflinger , David Padua, Efficient and precise array access analysis, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.1, p.65-109, January 2002
Junjie Gu , Zhiyuan Li , Gyungho Lee, Symbolic array dataflow analysis for array privatization and program parallelization, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.47-es, December 04-08, 1995, San Diego, California, United States
Y. Paek , A. Navarro , E. Zapata , J. Hoeflinger , D. Padua, An Advanced Compiler Framework for Non-Cache-Coherent Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.13 n.3, p.241-259, March 2002
Manish Gupta , Edith Schonberg , Harini Srinivasan, A Unified Framework for Optimizing Communication in Data-Parallel Programs, IEEE Transactions on Parallel and Distributed Systems, v.7 n.7, p.689-704, July 1996
Radu Rugina , Martin Rinard, Symbolic bounds analysis of pointers, array indices, and accessed memory regions, ACM SIGPLAN Notices, v.35 n.5, p.182-195, May 2000
Evaluation of compiler optimizations for Fortran D on MIMD distributed memory machines, Proceedings of the 6th international conference on Supercomputing, p.1-14, July 19-24, 1992, Washington, D. C., United States
Arun Chauhan , Ken Kennedy, Optimizing strategies for telescoping languages: procedure strength reduction and procedure vectorization, Proceedings of the 15th international conference on Supercomputing, p.92-101, June 2001, Sorrento, Italy
Ayon Basumallik , Rudolf Eigenmann, Optimizing irregular shared-memory applications for distributed-memory systems, Proceedings of the eleventh ACM SIGPLAN symposium on Principles and practice of parallel programming, March 29-31, 2006, New York, New York, USA
Ayon Basumallik , Rudolf Eigenmann, Towards automatic translation of OpenMP to MPI, Proceedings of the 19th annual international conference on Supercomputing, June 20-22, 2005, Cambridge, Massachusetts
M. W. Hall , S. Hiranandani , K. Kennedy , C.-W. Tseng, Interprocedural compilation of Fortran D for MIMD distributed-memory machines, Proceedings of the 1992 ACM/IEEE conference on Supercomputing, p.522-534, November 16-20, 1992, Minneapolis, Minnesota, United States
Steve Carr , R. B. Lehoucq, Compiler blockability of dense matrix factorizations, ACM Transactions on Mathematical Software (TOMS), v.23 n.3, p.336-361, Sept. 1997
Victor Delaluz , Mahmut Kandemir , N. Vijaykrishnan , Anand Sivasubramaniam , Mary Jane Irwin, Hardware and Software Techniques for Controlling DRAM Power Modes, IEEE Transactions on Computers, v.50 n.11, p.1154-1173, November 2001
Kathryn S. McKinley, A Compiler Optimization Algorithm for Shared-Memory Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.9 n.8, p.769-787, August 1998
Dhruva R. Chakrabarti , Prithviraj Banerjee, Static Single Assignment Form for Message-Passing Programs, International Journal of Parallel Programming, v.29 n.2, p.139-184, April 2001
Junjie Gu , Zhiyuan Li, Efficient Interprocedural Array Data-Flow Analysis for Automatic Program Parallelization, IEEE Transactions on Software Engineering, v.26 n.3, p.244-261, March 2000
Mary W. Hall , Timothy J. Harvey , Ken Kennedy , Nathaniel McIntosh , Kathryn S. McKinley , Jeffrey D. Oldham , Michael H. Paleczny , Gerald Roth, Experiences using the ParaScope Editor: an interactive parallel programming tool, ACM SIGPLAN Notices, v.28 n.7, p.33-43, July 1993
Chen Ding , Ken Kennedy, Improving effective bandwidth through compiler enhancement of global cache reuse, Journal of Parallel and Distributed Computing, v.64 n.1, p.108-134, January 2004
Manish Gupta , Sayak Mukhopadhyay , Navin Sinha, Automatic Parallelization of Recursive Procedures, International Journal of Parallel Programming, v.28 n.6, p.537-562, December 2000
Ken Kennedy , Kathryn S. McKinley , Chau-Wen Tseng, Analysis and transformation in the ParaScope editor, Proceedings of the 5th international conference on Supercomputing, p.433-447, June 17-21, 1991, Cologne, West Germany
Radu Rugina , Martin C. Rinard, Symbolic bounds analysis of pointers, array indices, and accessed memory regions, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.2, p.185-235, March 2005
Mary W. Hall , Saman P. Amarasinghe , Brian R. Murphy , Shih-Wei Liao , Monica S. Lam, Interprocedural parallelization analysis in SUIF, ACM Transactions on Programming Languages and Systems (TOPLAS), v.27 n.4, p.662-731, July 2005
Mohammad R. Haghighat , Constantine D. Polychronopoulos, Symbolic analysis for parallelizing compilers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.18 n.4, p.477-518, July 1996 | programmingsupport systems;linear algebra subroutines;program testing;index termsoptimizing compilers;procedure calls;application codes;high-level language constructs;subarrays;interprocedural bounded regular section analysis;production compiler;RiceCompiler Evaluation Program Suite;dependence analysis;RiCEPS;scientific disciplines;parallel programming;LINPACK library;program compilers;modular programming style |
629058 | Compile-Time Techniques for Data Distribution in Distributed Memory Machines. | A solution to the problem of partitioning data for distributed memory machines is discussed. The solution uses a matrix notation to describe array accesses in fully parallel loops, which allows the derivation of sufficient conditions for communication-free partitioning (decomposition) of arrays. A series of examples that illustrate the effectiveness of the technique for linear references, the use of loop transformations in deriving the necessary data decompositions, and a formulation that aids in deriving heuristics for minimizing communication when communication-free partitions are not feasible are presented. | Introduction
In distributed memory machines such as the Intel iPSC/2 and NCUBE, each process has its own
address space and processes must communicate explicitly by sending and receiving messages. Local
memory accesses on these machines are much faster than those involving inter-processor commu-
nication. As a result, the programmer faces the enormously difficult task of orchestrating the
parallel execution. The programmer is forced to manually distribute code and data in addition
to managing communication among tasks explicitly. This task, in addition to being error-prone
and time-consuming, generally leads to non-portable code. Hence, parallelizing compilers for these
machines have been an active area of research recently [3, 4, 5, 9, 14, 19, 20, 22, 24, 25].
The enormity of the task is to some extent relieved by the hypercube programmer's paradigm
[6] where attention is paid to the partitioning of tasks alone, while assuming a fixed data partition
or a programmer-specified (in the form of annotations) data partition [9, 7, 14, 20]. A number of
efforts are under way to develop parallelizing compilers for multicomputers where the programmer
specifies the data decomposition and the compiler generates the tasks with appropriate message
passing constructs [4, 7, 14, 15, 20, 22, 25]. Though these rely on the intuition (based on domain
knowledge) of the programmer, it is not always possible to verify that the annotations indeed result
in efficient execution.
In a recent paper on programming of multiprocessors, Alan Karp [12] observes:
we see that data organization is the key to parallel algorithms even on shared
memory systems The importance of data management is also a problem for people
writing automatic parallelization compilers. Todate, our compiler technology has been
directed toward optimizing control flow today when hierarchical (distributed)
memories make program performance a function of data organization, no compiler in
existence changes the data addresses specified by the programmer to improve perfor-
mance. If such compilers are to be successful, particularly on message-passing systems,
a new kind of analysis will have to be developed. This analysis will have to match the
data structures to the executable code in order to minimize memory traffic."
This paper is an attempt at providing this "new" kind of analysis - we present a matrix
notation to describe array accesses in fully parallel loops which lets us present sufficient conditions
for communication-free partitioning (decomposition) of arrays. In the case of a commonly occurring
class of accesses, we present a formulation as a fractional integer programming problem to minimize
communication costs, when communication-free partitioning of arrays is not possible.
The rest of the paper is organized as follows. In section 2, we present the background and
the assumptions we make, and discuss related work. Section 3 illustrates through examples, the
importance and the difficulty of finding good array decompositions. In section 4, we present a
matrix-based formulation of the problem of determining the existence of communication-free partitions
of arrays; we then present conditions for the case of constant offset array access. In section 5, a
series of examples are presented to illustrate the effectiveness of the technique for linear references;
in addition, we show the use of loop transformations in deriving the necessary data decomposi-
tions. Section 6 generalizes the formulation presented in section 4 for arbitrary linear references. In
section 7, we present a formulation that aids in deriving heuristics for minimizing communication
when communication-free partitions are not feasible. Section 8 concludes with a summary and
discussion.
2 Assumptions and related work
Communication in message passing machines could arise from the need to synchronize and from
the non-locality of data. The impact of the absence of a globally shared memory on the compiler
writer is severe. In addition to managing parallelism, it is now essential for the compiler writer to
appreciate the significance of data distribution and decide when data should be copied or generated
in local memory. We focus on distribution of arrays which are commonly used in scientific
computation. Our primary interest is in arrays accessed during the execution of nested loops. We
consider the following model where a processor owns a data element and has to make all updates
to it and there is exactly one copy. Even in the case of fully parallel loops, care must be taken to
ensure appropriate distribution of data.
In the next sections, we explore techniques that a compiler can use to determine if the data
can be distributed such that no communication is incurred. Operations involving two or more
operands require that the operands be aligned, that is the corresponding operands are stored in the
memory of the processor executing the operation. In the model we consider here, this means that
the operands used in an operation must be communicated to the processor that holds the operand
which appears on the left hand side of an assignment statement. Alignment of operands generally
requires interprocessor communication.
In current day machines, interprocessor communication is more time-consuming than instruction
execution. If insufficient attention is paid to the data allocation problem, then the amount of time
spent in interprocessor communication might be so high as to seriously undermine the benefits of
parallelism. It is therefore worthwhile for a compiler to analyze patterns of data usage to determine
allocation, in order to minimize interprocessor communication. We present a machine-independent
analysis of communication-free partitions.
We make the following assumptions:
1. There is exactly one copy of every array element and the processor in whose local memory
the element is stored is said to "own" the element.
2. The owner of an array element makes all updates to the element, i.e. all instructions that
modify the value of the element are executed by the "owner" processor.
3. There is a fixed distribution of array elements. (Data re-organization costs are architecture-specific.)
2.1 Related work
The research on problems related to memory optimizations goes back to studies of the organization
of data for paged memory systems [1]. Balasundaram and others [3] are working on interactive
parallelization tools for multicomputers that provide the user with feedback on the interplay between
data decomposition and task partitioning on the performance of programs. Gallivan et al.
discuss problems associated with automatically restructuring data so that it can be moved to
and from local memories in the case of shared memory machines with complex memory hierarchies.
They present a series of theorems that enable one to describe the structure of disjoint sub-lattices
accessed by different processors, use this information to make "correct" copies of data in local mem-
ories, and write the data back to the shared address space when the modifications are complete.
Gannon et al. [8] discuss program transformations for effective complex-memory management for a
CEDAR-like architecture with a three-level memory. Gupta and Banerjee [10] present a constraint-based
system to automatically select data decompositions for loop nests in a program. Hudak and
Abraham [11] discuss the generation of rectangular and hexagonal partitions of arrays accessed in
sequentially iterated parallel loops. Knobe et al. [13] discuss techniques for automatic layout of
arrays in a compiler targeted to SIMD architectures such as the Connection Machine computer
system. Li and Chen [17] (and [5]) have addressed the problem of index domain alignment which
is that of finding a set of suitable alignment functions that map the index domains of the arrays
into a common index domain in order to minimize the communication cost incurred due to data
movement. The class of alignment functions that they consider primarily are permutations and
embeddings. The kind of alignment functions that we deal with are more general than these. Mace
[18] proves that the problem of finding optimal data storage patterns for parallel processing (the
"shapes" problem) is NP-complete, even when limited to one- and two-dimensional arrays; in ad-
dition, efficient algorithms are derived for the shapes problem for programs modeled by a directed
acyclic graph (DAG) that is derived by series-parallel combinations of tree-like subgraphs. Wang
and Gannon [23] present a heuristic state-space search method for optimizing programs for memory
hierarchies.
In addition several researchers are developing compilers that take a sequential program augmented
with annotations that specify data distribution, and generate the necessary communication.
Koelbel et al. [14, 15] address the problem of automatic process partitioning of programs written in
a functional language called BLAZE given a user-specified data partition. A group led by Kennedy
at Rice University [4] is studying similar techniques for compiling a version of FORTRAN for local
memory machines, that includes annotations for specifying data decomposition. They show how
some existing transformations could be used to improve performance. Rogers and Pingali [20]
present a method which, given a sequential program and its data partition, performs task partitions
to enhance locality of references. Zima et al. [25] have developed SUPERB, an interactive
system for semi-automatic transformation of FORTRAN programs into parallel programs for the
SUPRENUM machine, a loosely-coupled hierarchical multiprocessor.
3 Deriving good partitions: some examples
Consider the following loop:
Example 1:
If we allocate row i of array A and row i of array B to the local memory of the same processor, then
no communication is incurred. If we allocate by columns or blocks, interprocessor communication
is incurred. There are no data dependences in the above example; such loops are referred to as
doall loops. It is easy to see that allocation by rows would result in zero communication, since there is no offset in the access of B along the first dimension. Figure 1 shows the partitions of arrays A and B.
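The loop body of Example 1 is not reproduced above; purely as an illustration (the offsets and array sizes below are our assumptions, not the paper's example), a doall loop in which every reference to B keeps the same row index as the element of A being computed looks like the following Python sketch, and assigning row i of A and row i of B to the same processor then makes every access local:

import numpy as np

N = 8
A = np.zeros((N, N))
B = np.random.rand(N, N)

# Hypothetical doall loop: B is referenced only with offsets along the second
# dimension, so the element of B needed for A[i, j] always lies in row i.
for i in range(N):
    for j in range(1, N - 1):
        A[i, j] = B[i, j - 1] + B[i, j + 1]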
In the next example, even though there is a non-zero offset along each dimension, communication-
free partitioning is possible:
Example 2:
In this case, row, column or block allocation of arrays A and B would result in interprocessor
communication. In this case, if A and B are partitioned into a family of parallel lines of a suitable slope, no communication will result. Figure 2 shows the partitions of A and B; the kth line in array A and the corresponding line in array B must be assigned to the same processor.
Figure 1: Partitions of arrays A and B for Example 1
Figure 2: Partitions of arrays A and B for Example 2
In the above loop structure, array A is modified as a function of array B; a communication-free partitioning in this case is referred to as a compatible partition. Consider the following loop
skeleton:
Example 3:
Array A is modified in loop L1 as a function of elements of array B, while loop L2 modifies array B using elements of array A. Loops L1 and L2 are adjacent loops at the same level of nesting.
The effect of a poor partition is exacerbated here since every iteration of the outer loop suffers
inter-processor communication; in such cases, the communication-free partitioning where possible,
is extremely important. Communication-free partitions of arrays involved in adjacent loops is
referred to as mutually compatible partitions.
In all the preceding examples, we had the following array access pattern: for computing some element A[i, j], element B[i′, j′] is required, where i′ = i + c_i and j′ = j + c_j, and c_i and c_j are constants. Consider the following example:
Example 4:
In this example, allocation of row i of array A and column i of array B to the same processor
would result in no communication. See Figure 3 for the partitions of A and B in this example.
Note that i′ is a function of j and j′ is a function of i here.
In the presence of arbitrary array access patterns, the existence of communication-free partitions
is determined by the connectivity of the data access graph, described below. To each array element
that is accessed (either written or read), we associate a node in this graph. If there are k different
arrays that are accessed, this graph has k groups of nodes; all nodes belonging to a given group
are elements of the same array. Let the node associated with the left hand side of an assignment
statement S be referred to as write(S) and the set of all nodes associated with the array elements on
Figure 3: Array partitions for Example 4
the right hand side of the assignment statement S be called read-set(S). There is an edge between write(S) and every member of read-set(S) in the data access graph. If this graph is connected, then
there is no communication-free partition [19].
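As a concrete reading of this test (a sketch with invented helper names, not code from the paper), the data access graph can be built from a list of assignments, each given as the written element and the elements read, and checked for connectivity:

from collections import defaultdict

def communication_free_possible(statements):
    # statements: list of (write_elem, read_elems); elements are hashable ids
    # such as ('A', i, j).  A connected data access graph means there is no
    # communication-free partition, so we return False in that case.
    adj = defaultdict(set)
    nodes = set()
    for write_elem, read_elems in statements:
        nodes.add(write_elem)
        for r in read_elems:
            nodes.add(r)
            adj[write_elem].add(r)
            adj[r].add(write_elem)
    if not nodes:
        return True
    start = next(iter(nodes))
    seen, stack = {start}, [start]
    while stack:                       # simple depth-first search
        for m in adj[stack.pop()]:
            if m not in seen:
                seen.add(m)
                stack.append(m)
    return seen != nodes               # disconnected graph: a partition exists

print(communication_free_possible([(('A', 1, 1), [('B', 1, 1)]),
                                   (('A', 2, 2), [('B', 2, 2)])]))   # True: two components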
4 Reference-based data decomposition
Consider a nested loop of the following form which accesses arrays A and B:
Example 5:
A[i, j] is computed using B[i′, j′], where i′ = f(i, j) and j′ = g(i, j) are linear functions of i and j, i.e.,
i′ = a11 i + a12 j + a10 (1)
j′ = a21 i + a22 j + a20 (2)
With disjoint partitions of array A, can we find corresponding or required disjoint partitions of array B
in order to eliminate communication? A partition of B is required for a given partition of A if the
elements in that partition (of B) appear in the right hand side of the assignment statements in
the body of the loop which modify elements in the partition of A. For a given partition of A, the
required referred to as its image(s) or map(s). We discuss array partitioning
in the context of fully parallel loops. Though the techniques are presented for 2-dimensional arrays,
these generalize easily to higher dimensions.
In particular, we are interested in partitions of arrays defined by a family of parallel hyper-
planes; such partitions are beneficial from the point of view of restructuring compilers in that the
portion of loop iterations that are executed by a processor can be generated by a relatively simple
transformation of the loop. Thus the question of partitioning can be stated as follows: Can we
find partitions induced by parallel hyperplanes in A and B such that there is no communication?
We focus our attention on 2-dimensional arrays. A hyperplane in 2 dimensions is a line; hence, we
discuss techniques to find partitions of A and B into parallel lines that incur zero communication.
In most loops that occur in scientific computation, the functions f and g are linear in i and j. The equation
α i + β j = c (3)
defines a family of parallel lines for different values of c, given that α and β are constants and at most one of them is zero. For example, α = 0, β = 1 (lines j = c) defines columns, while α = 1, β = −1 (lines i − j = c) defines diagonals.
Given a family of parallel lines in array A defined by equation (3), can we find a corresponding family of lines in array B given by
α′ i′ + β′ j′ = c′ (4)
such that there is no communication among processors?
The conditions on the solutions are: at most one of α and β can be zero; similarly, at most one of α′ and β′ can be zero. Otherwise, the equations do not define parallel lines. A solution that satisfies these conditions is referred to as a non-trivial solution and the corresponding partition is called a non-trivial partition. Since i′ and j′ are given by equations (1) and (2), we have
α′ (a11 i + a12 j + a10) + β′ (a21 i + a22 j + a20) = c′,
which implies
(a11 α′ + a21 β′) i + (a12 α′ + a22 β′) j = c′ − a10 α′ − a20 β′.
Since a family of lines in A is defined by α i + β j = c, we must have
α = a11 α′ + a21 β′ (5)
β = a12 α′ + a22 β′ (6)
c = c′ − a10 α′ − a20 β′ (7)
A solution to the above system of equations would imply zero communication. In matrix notation, we have
[ a11 a21 0 ; a12 a22 0 ; −a10 −a20 1 ] (α′, β′, c′)ᵀ = (α, β, c)ᵀ
The above set of equations decouples into:
[ a11 a21 ; a12 a22 ] (α′, β′)ᵀ = (α, β)ᵀ
and
c = c′ − a10 α′ − a20 β′.
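As a small mechanical illustration of the decoupled condition (a sketch under this section's notation, not the paper's implementation), choosing a non-trivial (α′, β′) for B immediately fixes the induced (α, β) for A through the 2 × 2 block, and the third equation ties c to c′:

def induced_family(a, alpha_p, beta_p):
    # a = ((a11, a12, a10), (a21, a22, a20)) describes the reference
    #   i' = a11*i + a12*j + a10,  j' = a21*i + a22*j + a20.
    # Given the family alpha'*i' + beta'*j' = c' in B, return the induced
    # family alpha*i + beta*j = c in A and the shift with c = c' + shift.
    (a11, a12, a10), (a21, a22, a20) = a
    alpha = a11 * alpha_p + a21 * beta_p          # equation (5)
    beta = a12 * alpha_p + a22 * beta_p           # equation (6)
    shift = -(a10 * alpha_p + a20 * beta_p)       # equation (7): c = c' + shift
    return alpha, beta, shift

# Illustration with a transposed reference B[j, i]: anti-diagonals of B
# (alpha' = beta' = 1) induce anti-diagonals of A.
print(induced_family(((0, 1, 0), (1, 0, 0)), 1, 1))    # -> (1, 1, 0)

Going the other way, from a chosen family in A to the required family in B, amounts to inverting the 2 × 2 block when it is non-singular.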
We illustrate the use of the above sufficient condition with the following example.
Example 6:
For each element A[i, j], we need two elements of B. Consider the element B[i − 1, j]. For communication-free partitioning, the system
[ 1 0 0 ; 0 1 0 ; 1 0 1 ] (α′, β′, c′)ᵀ = (α, β, c)ᵀ
must have a solution. Similarly, considering the other element of B that is accessed, a solution must exist for the corresponding system as well. Given that there is a single allocation for A and B, the two systems of equations must admit a common solution.
Figure 4: Partitions of arrays A and B for Example 6
The resulting set of equations has a solution, say α = β = α′ = β′ = 1. This implies that both A and B are partitioned by anti-diagonals. Figure 4 shows the partitions of the arrays for zero communication. The relations between c and c′ give the corresponding lines in A and B.
With a minor modification of example 6 (as shown below),
Example 7:
the reduced system of equations changes accordingly, and it again has a non-trivial solution. Figure 5 shows the lines in arrays A and B which incur zero communication.
The next example shows a nested loop in which arrays can not be partitioned such that there
is no communication.
Example 8:
Figure 5: Lines in arrays A and B for Example 7
The system of equations in this case reduces to one whose only solution is α′ = β′ = 0, which is not a non-trivial solution. Thus there is no communication-free partitioning of arrays A and B.
The examples considered so far involve constant offsets for access of array elements and we
had to find compatible partitions. The next case considered is one where we need to find mutually
compatible partitions. Consider the nested loop:
Example 9:
In this case, the accesses due to loop L1 result in one system of equations, and the accesses due to loop L2 result in another. For communication-free partitioning, the two systems must admit a common solution; the reduced system has a non-trivial solution with α = α′ and β = β′, and Figure 6 shows the resulting partitions of A and B into diagonals.
4.1 Constant offsets in reference
We discuss the important special case of array accesses with constant offsets which occur in codes
for the solution of partial differential equations. Consider a loop in which each element A[i, j] is computed using the elements B[i + q_i^k, j + q_j^k], 1 ≤ k ≤ m, where the q_i^k and q_j^k are integer constants. The vectors (q_i^k, q_j^k) are referred to as offset vectors. There are m such offset vectors, one for each access pair A[i, j] and B[i + q_i^k, j + q_j^k].
Figure 6: Mutually compatible partition of A and B for Example 9
In such cases, the system of equations is (for each access indexed by k, where 1 ≤ k ≤ m):
[ 1 0 0 ; 0 1 0 ; −q_i^k −q_j^k 1 ] (α′, β′, c′)ᵀ = (α, β, c)ᵀ
which reduces to the following constraints:
α = α′,  β = β′,  c = c′ − α q_i^k − β q_j^k.
Therefore, for a given collection of offset vectors, communication-free partitioning is possible if there is a single non-trivial (α, β) for which α q_i^k + β q_j^k is the same for every k.
If we consider the offset vectors (q_i^k, q_j^k) as points in 2-dimensional space, then communication-free partitioning is possible if the points (q_i^k, q_j^k), 1 ≤ k ≤ m, are collinear. In addition, if all q_i^k = 0, no communication is required between rowwise partitions; similarly, if all q_j^k = 0, then partitioning the arrays into columns results in zero communication. For zero communication in nested loops involving K-dimensional arrays, this means that the offset vectors treated as points in K-dimensional space must lie on a (K − 1)-dimensional hyperplane.
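The collinearity test itself is easy to mechanize; the sketch below (an illustration, not the paper's algorithm) returns a non-trivial (α, β) for which α q_i^k + β q_j^k is the same for all offsets, or None when the offsets are not collinear:

def communication_free_direction(offsets):
    # offsets: list of (qi, qj) for the accesses B[i + qi, j + qj].
    if len(offsets) <= 1:
        return (1, 0)                       # a single offset imposes no conflict
    (qi0, qj0), (qi1, qj1) = offsets[0], offsets[1]
    di, dj = qi1 - qi0, qj1 - qj0           # direction of the candidate line
    if di == 0 and dj == 0:
        return communication_free_direction(offsets[1:])
    alpha, beta = dj, -di                   # a normal to the direction (di, dj)
    base = alpha * qi0 + beta * qj0
    for qi, qj in offsets:
        if alpha * qi + beta * qj != base:
            return None                     # offsets are not collinear
    return alpha, beta

# B[i-1, j] and B[i, j-1]: offsets (-1, 0) and (0, -1) are collinear, and the
# returned (-1, -1), i.e. (1, 1) up to scale, gives anti-diagonal partitions.
print(communication_free_direction([(-1, 0), (0, -1)]))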
In all the above examples, there was one solution to the set of values for α, β, α′, and β′. In the next
section, we show an example where there are an infinite number of solutions, and loop transformations
play a role in the choice of a specific solution.
5 Partitioning for linear references and program transformations
In this section, we discuss communication-free partitioning of arrays when references are not constant
offsets but linear functions. Consider the loop in example 10:
Example 10:
Communication-free partitioning is possible if the system of equations of the form given above has a solution where no more than one of α and β is zero and no more than one of α′ and β′ is zero. For the accesses in this loop, the set reduces to:
α = β′,  β = α′,  c = c′.
This set has an infinite number of solutions. We present four of them and discuss their relative
merits.
The first two equations involve four variables; fixing two of them leads to values of the other two. For example, if we set α = 1, β = 0, array A is partitioned into rows. From the set of equations, we get α′ = 0, β′ = 1, so array B is being partitioned into columns; since c = c′, if we assign row k of array A and column k of array B to the same processor, then there is no communication. See Figure 1 for the partitions.
A second partition can be chosen by setting α = 0, β = 1. In this case, array A is partitioned into columns. Therefore, α′ = 1, β′ = 0, which means B is partitioned into rows. See Figure 7(a) for this partition. If we set α = β = 1, array A is partitioned into anti-diagonals. From the set of equations, we get α′ = β′ = 1, which means array B is also being partitioned into anti-diagonals. Figure 7(b) shows the third partition. A fourth partition can be chosen by setting α = 1, β = −1. In this case, array A is partitioned into diagonals. Therefore, α′ = −1, β′ = 1, which means B is also partitioned into diagonals. In this case, the kth sub-diagonal (below the diagonal) in A corresponds to the kth super-diagonal (above the diagonal) in array B. Figure 7(c) illustrates this partition.
Figure 7: Decompositions of arrays A and B for Example 10
From the point of view of loop transformations [2, 24], we can rewrite the loop to indicate which processor executes which portion of the loop iterations; partitions 1 and 2 are easy. Let us assume that the number of processors is p, that the number of rows and columns is N, and that N is a multiple of p. In such a case, partition 1 (A partitioned into rows) can be rewritten as:
Processor k executes (1 ≤ k ≤ p):
do i = k to N by p
and all rows r of A such that r mod p = k mod p are assigned to processor k; all columns r of B such that r mod p = k mod p are also assigned to processor k.
In the case of partition 2 (A partitioned into columns), the loop can be rewritten as:
Processor k executes (1 ≤ k ≤ p):
do j = k to N by p
and all columns r of A such that r mod p = k mod p and all rows r of B such that r mod p = k mod p are also assigned to processor k. Since there are no data dependences anyway, the loops can be interchanged and written as:
Processor k executes (1 ≤ k ≤ p):
do j = k to N by p
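In SPMD form, the wrap-style assignment of rows can be mimicked directly; in the sketch below the processor count, the array sizes, and the transposed reference are illustrative assumptions (the paper's exact loop body for Example 10 is not reproduced above):

import numpy as np

def partition1_body(k, p, A, B):
    # Processor k (1 <= k <= p) owns rows k-1, k-1+p, ... of A (0-based) and
    # the columns of B with the same indices; it executes only the iterations
    # that write its own rows of A, so all accesses are local.
    N = A.shape[0]
    for i in range(k - 1, N, p):            # "do i = k to N by p"
        for j in range(N):
            A[i, j] = B[j, i]               # assumed transposed reference

N, p = 8, 4
A, B = np.zeros((N, N)), np.random.rand(N, N)
for k in range(1, p + 1):                   # sequential stand-in for p processors
    partition1_body(k, p, A, B)
assert np.allclose(A, B.T)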
Partitions 3 and 4 can result in complicated loop structures. In partition 3, α = β = 1. The
steps we perform to transform the loop use loop skewing and loop interchange transformations [24].
We perform the following steps:
1. If distribute the iterations of the outer loop in a round-robin manner;
in this case, processor k (in a system of p processors) executes all iterations (i;
. This is referred to as wrap distribution.
If apply loop interchange and wrap-distribute the iterations of the
interchanged outer loop. If not, apply the following transformation to the index set:
(i, j) → (α i + β j, j)
2. Apply loop interchanging so that the outer loop now can be stripmined. Since these loops
do not involve flow or anti-dependences, loop interchanging is always legal. After the first
transformation, the loop need not be rectangular. Therefore, the rules for interchange of
trapezoidal loops in [24] are used for performing the loop interchange.
The resulting loop after transformation and loop interchange is the following:
Example 11:
The load-balanced version of the loop is:
Processor k executes (1 ≤ k ≤ p):
to 2N by p
The reason we distribute the outer loop iterations in a wrap-around manner is that such a distribution
will result in load balanced execution when N >> p.
In partition 4, α = 1, β = −1. The resulting loop after transformation and loop interchange is the following:
Processor k executes (1 ≤ k ≤ p):
Next, we consider a more complicated example to illustrate partitioning of linear recurrences:
Example 12:
The first access to B results in a system of the form given above, and the second access results in another such system; together they give rise to a set of equations which has only the trivial all-zero solution. Thus communication-free partitioning has been shown to be impossible.
But for the following loop, communication-free partitioning into columns is possible.
Example 13:
The accesses give a set of equations of the same form as above. In this case, we have a non-trivial solution with α = α′ = 0 and β = β′ = 1; thus both A and B are partitioned into columns.
6 Generalized linear references
In this section, we discuss the generalization of the problem formulation discussed in section 4.
Example 14:
are linear functions of i and j, i.e.
Thus the statement in the loop is:
In this case, the family of lines in array B is given by α′ i′ + β′ j′ = c′, and the lines in array A are given by α i + β j = c. Thus, after substituting the linear subscript functions, the family of lines in B becomes
Array B: α′ (a11 i + a12 j + a10) + β′ (a21 i + a22 j + a20) = c′,
which is rewritten as:
Array B: (a11 α′ + a21 β′) i + (a12 α′ + a22 β′) j = c′ − a10 α′ − a20 β′. (15)
Therefore, for communication-free partitioning, we should find a solution for the following system of equations (with the constraint that at most one of α, β is zero and at most one of α′, β′ is zero):
[ a11 a21 0 ; a12 a22 0 ; −a10 −a20 1 ] (α′, β′, c′)ᵀ = (α, β, c)ᵀ
Consider the following example:
Example 15:
Figure 8: Partitions of arrays A and B for Example 15
The accesses result in a system of equations of the form given above, which leads to a set of equations that has a non-trivial solution. See Figure 8 for the partitions.
Now for a more complicated example:
Example 16:
The accesses result in a system of equations of the form given above, which leads to a set of equations with a family of solutions. Choosing one non-trivial solution, the loop after transformation is:
Processor k executes (1 ≤ k ≤ p):
to 2N by p
The following section deals with a formulation of the problem for communication minimization,
when communication-free partitions are not feasible.
7 Minimizing communication for constant offsets
In this section, we present a formulation of the communication minimization problem, that can
be used when communication-free partitioning is impossible. We focus on two-dimensional arrays
with constant offsets in accesses. The results can be generalized to higher dimensional arrays.
We consider the following loop model:
The array accesses in the above loop give rise to the set of offset vectors. The 2 × m matrix Q whose columns are the offset vectors is referred to as the offset matrix. Since A[i, j] is computed in iteration (i, j), a partition of the array A defines a partition of the iteration space and vice-versa. For constant offsets, the shape of the partitions for the two arrays A and B will be the same; the array boundaries depend on the offset vectors.
Given the offset vectors, the problem is to derive partitions such that the processors have an equal amount of work and communication is minimized. We assume that there are N² iterations (N² elements of array A are being computed) and the number of processors is p. We also assume that N² is a multiple of p. Thus, the workload for each processor is N²/p.
The shapes of partitions considered are parallelograms, of which rectangles are a special case.
A parallelogram is defined by two vectors each of which is normal to one side of the parallelogram.
Let the normal vectors be s1 = (s11, s12) and s2 = (s21, s22). The matrix S refers to:
S = [ s11 s12 ; s21 s22 ]
If i and j are the array indices, s1 defines a family of lines given by s11 i + s12 j = c1 for different values of c1, and the vector s2 defines a family of lines given by s21 i + s22 j = c2 for different values of c2. S must be non-singular in order to define parallelogram blocks that span the entire array. The matrix S defines a linear transformation applied to each point (i, j); the image of the point (i, j) is S (i, j)ᵀ. We consider parallelograms defined by solutions to the following set of linear inequalities:
c1 ≤ s11 i + s12 j < c1 + r1 l,   c2 ≤ s21 i + s22 j < c2 + r2 l,
where r1 l and r2 l are the lengths of the sides of the parallelograms.
The number of points in the discrete Cartesian space enclosed by this region (which must be the same as the workload for each processor, N²/p) is r1 r2 l² / |det(S)| when det(S) ≠ 0. The non-zero entries in the matrix Q′, the offset matrix expressed in the transformed coordinates, determine the data communicated across block boundaries. Let σ(i) be the sum of the absolute values of the entries in the ith row of Q′. The communication volume incurred is determined by l, the aspect ratios r1 and r2, and the row sums σ(1) and σ(2).
Thus the problem of finding blocks which minimize inter-processor communication is that of finding the matrix S, the value l, and the aspect ratios r1 and r2 such that the communication volume is minimized subject to the constraint that the processors have about the same amount of workload, i.e., that the number of points per block equals N²/p.
The elements of matrix S determine the shape of the partitions, and the values of r1, r2, and l determine the size of the partitions.
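To give a feel for this optimization, the sketch below evaluates a simple boundary ("halo") model of the per-block communication volume of rectangular blocks for a few candidate aspect ratios, holding the workload per block at N²/p; the cost model is an illustrative assumption, not the paper's exact expression:

import math

def halo_volume(r1, r2, l, offsets):
    # Rectangular (r1*l) x (r2*l) block: an offset (qi, qj) pulls |qi| rows of
    # width r2*l and |qj| columns of height r1*l from neighbouring blocks.
    return sum(abs(qi) * r2 * l + abs(qj) * r1 * l for qi, qj in offsets)

def best_aspect_ratio(N, p, offsets,
                      candidates=((1, 1), (1, 2), (2, 1), (1, 4), (4, 1))):
    best = None
    for r1, r2 in candidates:
        l = math.sqrt(N * N / (p * r1 * r2))   # workload constraint r1*r2*l^2 = N^2/p
        volume = halo_volume(r1, r2, l, offsets)
        if best is None or volume < best[0]:
            best = (volume, (r1, r2))
    return best

# Offsets only along i favour blocks that are tall in i and narrow in j.
print(best_aspect_ratio(N=1024, p=16, offsets=[(-1, 0), (-2, 0)]))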
8 Summary
In current day distributed memory machines, interprocessor communication is more time-consuming
than instruction execution. If insufficient attention is paid to the data allocation problem, then so
much time may be spent in interprocessor communication that much of the benefit of parallelism
is lost. It is therefore worthwhile for a compiler to analyze patterns of data usage to determine
allocation, in order to minimize interprocessor communication. In this paper, we formulated the
problem of determining if communication-free array partitions (decompositions) exist and presented
machine-independent sufficient conditions for the same for a class of parallel loops without flow or
anti dependences, where array references are affine functions of loop index variables. In addition,
where communication-free decomposition is not possible, we presented a mathematical formulation
that aids in minimizing communication.
Acknowledgment
We gratefully acknowledge the helpful comments of the referees in improving the earlier draft of
this paper.
--R
"On the Performance Enhancement of Paging Systems through Program Analysis and Transformations,"
"Automatic Translation of FORTRAN Programs to Vector Form,"
"An interactive environment for data partitioning and distribution,"
"Compiling Programs for Distributed-Memory Multiprocessors,"
"Compiling Parallel Programs by Optimizing Performance,"
Solving Problems on Concurrent Processors - Volume <Volume>1</Volume>: General Techniques and Regular Problems
"On the Problem of Optimizing Data Transfers for Complex Memory Systems,"
"Strategies for Cache and Local Memory Management by Global Program Transformations,"
"Array Distribution in SUPERB,"
"Automatic Data Partitioning on Distributed Memory Multipro- cessors,"
"Compiler Techniques for Data Partitioning of Sequentially Iterated Parallel Loops,"
"Programming for Parallelism,"
"Data Optimization: Allocation of Arrays to Reduce Communication on SIMD Machines,"
"Semi-automatic Process Partitioning for Parallel Computation,"
"Supporting Shared Data Structures on Distributed Memory Machines,"
Compiling programs for non-shared memory machines
"Index Domain Alignment: Minimizing Cost of Cross-Referencing between Distributed Arrays,"
Memory Storage Patterns in Parallel Processing
"Process Decomposition Through Locality of Reference,"
Compiling for locality of reference
"Mapping Data to Processors in Distributed Memory Computa- tions,"
"Applying AI Techniques to Program Optimization for Parallel Computers,"
Optimizing Supercompilers for Supercomputers
"SUPERB: A Tool for Semi-automatic MIMD-SIMD Parallelization,"
--TR
Programming for parallelism
Automatic translation of FORTRAN programs to vector form
Memory storage patterns in parallel processing
Solving problems on concurrent processors. Vol. 1: General techniques and regular problems
Strategies for cache and local memory management by global program transformation
On the problem of optimizing data transfers for complex memory systems
process partitioning for parallel computation
Process decomposition through locality of reference
Data optimization: allocation of arrays to reduce communication on SIMD machines
Supporting shared data structures on distributed memory architectures
Compile-time techniques for parallel execution of loops on distributed memory multiprocessors
Compiling programs for nonshared memory machines
Compiler techniques for data partitioning of sequentially iterated parallel loops
Array distribution in SUPERB
Optimizing Supercompilers for Supercomputers
Compiling for locality of reference
--CTR
G. N. Srinivasa Prasanna , A. Agrawal , B. R. Musicus, Hierarchical Compilation of Macro Dataflow Graphs for Multiprocessors with Local Memory, IEEE Transactions on Parallel and Distributed Systems, v.5 n.7, p.720-736, July 1994
Jih-Woei Huang , Chih-Ping Chu, An Efficient Communication Scheduling Method for the Processor Mapping Technique Applied Data Redistribution, The Journal of Supercomputing, v.37 n.3, p.297-318, September 2006
Weng-Long Chang , Jih-Woei Huang , Chih-Ping Chu, Using Elementary Linear Algebra to Solve Data Alignment for Arrays with Linear or Quadratic References, IEEE Transactions on Parallel and Distributed Systems, v.15 n.1, p.28-39, January 2004
array partitioning based on the Smith normal form, International Journal of Parallel Programming, v.33 n.1, p.35-56, February 2005
Ravi Ponnusamy , Yuan-Shin Hwang , Raja Das , Joel H. Saltz , Alok Choudhary , Geoffrey Fox, Supporting Irregular Distributions Using Data-Parallel Languages, IEEE Parallel & Distributed Technology: Systems & Technology, v.3 n.1, p.12-24, March 1995
Anant Agarwal , David A. Kranz , Venkat Natarajan, Automatic Partitioning of Parallel Loops and Data Arrays for Distributed Shared-Memory Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.6 n.9, p.943-962, September 1995
Wenrui Gong , Gang Wang , R. Kastner, Storage assignment during high-level synthesis for configurable architectures, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.3-6, November 06-10, 2005, San Jose, CA
Kuei-Ping Shih , Jang-Ping Sheu , Chua-Huang Huang, Statement-Level Communication-Free Partitioning Techniques for Parallelizing Compilers, The Journal of Supercomputing, v.15 n.3, p.243-269, Mar.1.2000
M. Kandemir , A. Choudhary , N. Shenoy , P. Banerjee , J. Ramanujam, A hyperplane based approach for optimizing spatial locality in loop nests, Proceedings of the 12th international conference on Supercomputing, p.69-76, July 1998, Melbourne, Australia
Skewed Data Partition and Alignment Techniques for Compiling Programs on Distributed Memory Multicomputers, The Journal of Supercomputing, v.21 n.2, p.191-211, February 2002
Manish Gupta , Prithviraj Banerjee, PARADIGM: a compiler for automatic data distribution on multicomputers, Proceedings of the 7th international conference on Supercomputing, p.87-96, July 19-23, 1993, Tokyo, Japan
Minyi Guo, Linear data distribution based on index analysis, High performance scientific and engineering computing: hardware/software support, Kluwer Academic Publishers, Norwell, MA, 2004
Catherine Mongenet, Mappings for communication minimization using distribution and alignment, Proceedings of the IFIP WG10.3 working conference on Parallel architectures and compilation techniques, p.185-193, June 27-29, 1995, Limassol, Cyprus
T. S. Chen , J. P. Sheu, Communication-Free Data Allocation Techniques for Parallelizing Compilers on Multicomputers, IEEE Transactions on Parallel and Distributed Systems, v.5 n.9, p.924-938, September 1994
Paul Feautrier, Toward automatic partitioning of arrays on distributed memory computers, Proceedings of the 7th international conference on Supercomputing, p.175-184, July 19-23, 1993, Tokyo, Japan
Esin Onbasioglu , Linet Özdamar, Optimization of Data Distribution and Processor Allocation Problem Using Simulated Annealing, The Journal of Supercomputing, v.25 n.3, p.237-253, July
Jordi Garcia , Eduard Ayguad , Jesus Lebarta, A novel approach towards automatic data distribution, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.78-es, December 04-08, 1995, San Diego, California, United States
Thomas Rauber , Gudula Rnger, Deriving Array Distributions by Optimization Techniques, The Journal of Supercomputing, v.15 n.3, p.271-293, Mar.1.2000
Akimasa Yoshida , Kenichi Koshizuka , Hironori Kasahara, Data-localization for Fortran macro-dataflow computation using partial static task assignment, Proceedings of the 10th international conference on Supercomputing, p.61-68, May 25-28, 1996, Philadelphia, Pennsylvania, United States
Vincent Loechner , Catherine Mongenet, Communication Optimization for Affine Recurrence Equations Using Broadcast and Locality, International Journal of Parallel Programming, v.28 n.1, p.47-102, February 2000
David A. Garza-Salazar , Wim Bhm, Reducing communication by honoring multiple alignments, Proceedings of the 9th international conference on Supercomputing, p.87-96, July 03-07, 1995, Barcelona, Spain
Wei Li , Keshav Pingali, Access normalization: loop restructuring for NUMA computers, ACM Transactions on Computer Systems (TOCS), v.11 n.4, p.353-375, Nov. 1993
Wei Li , Keshav Pingali, Access normalization: loop restructuring for NUMA compilers, ACM SIGPLAN Notices, v.27 n.9, p.285-295, Sept. 1992
Micha Cierniak , Wei Li, Unifying data and control transformations for distributed shared-memory machines, ACM SIGPLAN Notices, v.30 n.6, p.205-217, June 1995
James R. Larus, Compiling for shared-memory and message-passing computers, ACM Letters on Programming Languages and Systems (LOPLAS), v.2 n.1-4, p.165-180, MarchDec. 1993
M. Kandemir , J. Ramanujam , A. Choudhary , P. Banerjee, A Layout-Conscious Iteration Space Transformation Technique, IEEE Transactions on Computers, v.50 n.12, p.1321-1336, December 2001
PeiZong Lee, Efficient Algorithms for Data Distribution on Distributed Memory Parallel Computers, IEEE Transactions on Parallel and Distributed Systems, v.8 n.8, p.825-839, August 1997
Mahmut Kandemir , Alok Choudhary , Nagaraj Shenoy , Prithviraj Banerjee , J. Ramanujam, A Linear Algebra Framework for Automatic Determination of Optimal Data Layouts, IEEE Transactions on Parallel and Distributed Systems, v.10 n.2, p.115-135, February 1999
Ender Özcan , Esin Onbasioglu, Memetic algorithms for parallel code optimization, International Journal of Parallel Programming, v.35 n.1, p.33-61, February 2007
Mahmut Taylan Kandemir, A compiler technique for improving whole-program locality, ACM SIGPLAN Notices, v.36 n.3, p.179-192, March 2001
Mahmut Kandemir , Alok Choudhary , J. Ramanujam , Prith Banerjee, Reducing False Sharing and Improving Spatial Locality in a Unified Compilation Framework, IEEE Transactions on Parallel and Distributed Systems, v.14 n.4, p.337-354, April
Mahmut Taylan Kandemir, Improving whole-program locality using intra-procedural and inter-procedural transformations, Journal of Parallel and Distributed Computing, v.65 n.5, p.564-582, May 2005
Peizong Lee , Zvi Meir Kedem, Automatic data and computation decomposition on distributed memory parallel computers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.1, p.1-50, January 2002 | matrix algebra;matrixnotation;data decompositions;parallel loops;communication-freepartitioning;data distribution;linear references;index termscompile time;program compilers;parallel programming;heuristics;sufficient conditions;array accesses;data partitioning;distributed memory machines;loop transformations |
629077 | Demonstration of Automatic Data Partitioning Techniques for Parallelizing Compilers on Multicomputers. | An approach to the problem of automatic data partitioning is introduced. The notion of constraints on data distribution is presented, and it is shown how, based on performance considerations, a compiler identifies constraints to be imposed on the distribution of various data structures. These constraints are then combined by the compiler to obtain a complete and consistent picture of the data distribution scheme, one that offers good performance in terms of the overall execution time. Results of a study performed on Fortran programs taken from the Linpack and Eispack libraries and the Perfect Benchmarks to determine the applicability of the approach to real programs are presented. The results are very encouraging, and demonstrate the feasibility of automatic data partitioning for programs with regular computations that may be statically analyzed, which covers an extremely significant class of scientific application programs. | Introduction
Distributed memory multiprocessors (multicomputers) are increasingly being used for providing high levels
of performance for scientific applications. The distributed memory machines offer significant advantages
over their shared memory counterparts in terms of cost and scalability, but it is a widely accepted fact that
they are much more difficult to program than shared memory machines. One major reason for this difficulty
is the absence of a single global address space. As a result, the programmer has to distribute code and
data on processors himself, and manage communication among tasks explicitly. Clearly there is a need for
parallelizing compilers to relieve the programmer of this burden.
The area of parallelizing compilers for multicomputers has seen considerable research activity during the
last few years. A number of researchers are developing compilers that take a program written in a sequential
or shared-memory parallel language, and based on the user-specified partitioning of data, generate the target
parallel program for a multicomputer. These research efforts include the Fortran D compiler project at Rice
University [9], the SUPERB project at Bonn University [22], the Kali project at Purdue University [13],
and the DINO project at Colorado University [20], all of them dealing with imperative languages that are
extensions of Fortran or C. The Crystal project at Yale University [5] and the compiler [19] are
also based on the same idea, but are targeted for functional languages. The parallel program generated by
most of these systems corresponds to the SPMD (single program, multiple-data) [12] model, in which each
processor executes the same program but operates on distinct data items.
The current work on parallelizing compilers for multicomputers has, by and large, concentrated on automating
the generation of messages for communication among processors. Our use of the term "paralleliz-
ing compiler" is somewhat misleading in this context, since all parallelization decisions are really left to the
programmer who specifies data partitioning. It is the method of data partitioning that determines when
interprocessor communication takes place, and which of the independent computations actually get executed
on different processors.
The distribution of data across processors is of critical importance to the efficiency of the parallel program
in a distributed memory system. Since interprocessor communication is much more expensive than
computation on processors, it is essential that a processor be able to do as much of computation as possible
using just local data. Excessive communication among processors can easily offset any gains made by the
use of parallelism. Another important consideration for a good data distribution pattern is that it should
allow the workload to be evenly distributed among processors so that full use is made of the parallelism
inherent in the computation. There is often a tradeoff involved in minimizing interprocessor communication
and balancing load on processors, and a good scheme for data partitioning must take into account both
communication and computation costs governed by the underlying architecture of the machine.
The goal of automatic parallelization of sequential code remains incomplete as long as the programmer is
forced to think about these issues and come up with the right data partitioning scheme for each program. The
task of determining a good partitioning scheme manually can be extremely difficult and tedious. However,
most of the existing projects on parallelization systems for multicomputers have so far chosen not to tackle
this problem at the compiler level because it is known to be a difficult problem. Mace [16] has shown that
the problem of finding optimal data storage patterns for parallel processing, even for 1-D and 2-D arrays,
is NP-complete. Another related problem, the component alignment problem has been discussed by Li and
Chen [15], and shown to be NP-complete.
Recently several researchers have addressed this problem of automatically determining a data partitioning
scheme, or of providing help to the user in this task. Ramanujan and Sadayappan [18] have worked on
deriving data partitions for a restricted class of programs. They, however, concentrate on individual loops
and strongly connected components rather than considering the program as a whole. Hudak and Abraham
[11], and Socha [21] present techniques for data partitioning for programs that may be modeled as sequentially
iterated parallel loops. Balasundaram et al. [1] discuss an interactive tool that provides assistance to the
user for data distribution. The key element in their tool is a performance estimation module, which is used
to evaluate various alternatives regarding the distribution scheme. Li and Chen [15] address the issue of
data movement between processors due to cross-references between multiple distributed arrays. They also
describe how explicit communication can be synthesized and communication costs estimated by analyzing
reference patterns in the source program [14]. These estimates are used to evaluate different partitioning
schemes.
Most of these approaches have serious drawbacks associated with them. Some of them have a problem
of restricted applicability, they apply only to programs that may be modeled as single, multiply nested
loops. Some others require a fairly exhaustive enumeration of possible data partitioning schemes, which
may render the method ineffective for reasonably large problems. Clearly, any strategy for automatic data
partitioning can be expected to work well only for applications with a regular computational structure and
static dependence patterns that can be determined at compile time. However, even though there exists a
significant class of scientific applications with these properties, there is no data to show the effectiveness of
any of these methods on real programs.
In this paper, we present a novel approach, which we call the constraint-based approach [7], to the problem
of automatic data partitioning on multicomputers. In this approach, the compiler analyzes each loop in the
program, and based on performance considerations, identifies some constraints on the distribution of various
data structures being referenced in that loop. There is a quality measure associated with each constraint
that captures its importance with respect to the performance of the program. Finally, the compiler tries to
combine constraints for each data structure in a consistent manner so that the overall execution time of the
parallel program is minimized. We restrict ourselves to the partitioning of arrays. The ideas underlying our
approach can be applied to most distributed memory machines, such as the Intel iPSC/2, the NCUBE, and
the WARP systolic machine. Our examples are all written in a Fortran-like language, and we present results
on Fortran programs. However, the ideas developed on the partitioning of arrays are equally applicable to
any similar programming language.
The rest of this paper is organized as follows. Section 2 describes our abstract machine and the kind of
distributions that arrays may have in our scheme. Section 3 introduces the notion of constraints and describes
the different kinds of constraints that may be imposed on array distributions. Section 4 describes how a
compiler analyzes program references to record constraints and determine the quality measures associated
with them. Section 5 presents our strategy for determining the data partitioning scheme. Section 6 presents
the results of our study on Fortran programs performed to determine the applicability of our approach to
real programs. Finally, conclusions are presented in Section 7.
2 Data Distribution
The abstract target machine we assume is a D-dimensional (D is the maximum dimensionality of any array used in the program) grid of N1 × N2 × ... × ND processors. Such a topology can easily be embedded on almost any distributed memory machine. A processor in such a topology is represented by the tuple (p1, p2, ..., pD), where 0 ≤ pk ≤ Nk − 1 for 1 ≤ k ≤ D. The correspondence between a tuple (p1, p2, ..., pD) and a processor number in the range 0 to N1 N2 ... ND − 1 is established by the scheme which embeds the virtual processor grid topology on the real target machine. To make the notation describing replication of data simpler, we extend the representation of the processor tuple in the following manner. A processor tuple with an X appearing in the ith position denotes all processors along the ith grid dimension. Thus for a 2 × 2 grid of processors, the tuple (0, X) represents the processors (0, 0) and (0, 1), while the tuple (X, X) represents all four processors.
The scalar variables and small arrays used in the program are assumed to be replicated on all processors.
For other arrays, we use a separate distribution function with each dimension to indicate how that array is
distributed across processors. This turns out to be more convenient than having a single distribution function
associated with a multidimensional array. We refer to the kth dimension of an array A as A k . Each array
dimension A k gets mapped to a unique dimension, between 1 and D, of the processor grid. If
the number of processors along that grid dimension is one, we say that the array dimension A k has
been sequentialized. The sequentialization of an array dimension implies that all elements whose subscripts
differ only in that dimension are allocated to the same processor. The distribution function for A k takes as
its argument an index i and returns the component of the tuple representing the processor which
owns the element A[-, ..., i, ..., -], where - denotes an arbitrary value and i is the index appearing in
the kth dimension. The array dimension A k may either be partitioned or replicated on the corresponding
grid dimension. The distribution function is of the form
⌊(i − offset)/block⌋ [mod N]   if A k is partitioned,
X                              if A k is replicated,
where N is the number of processors along the corresponding grid dimension, and the square parentheses surrounding "mod N" indicate that the appearance of this part in the
expression is optional. At a higher level, the given formulation of the distribution function can be thought of
as specifying the following parameters: (1) whether the array dimension is partitioned across processors or
replicated, (2) method of partitioning - contiguous or cyclic, (3) the grid dimension to which the kth array
dimension gets mapped, (4) the block size for distribution, i.e., the number of elements residing together as
a block on a processor, and (5) the displacement applied to the subscript value for mapping.
Examples of some data distribution schemes possible for an array on a 4-processor machine are shown in Figure 1. The numbers shown in the figure indicate the processor(s) to which that part of the array is allocated. The machine is considered to be an N1 × N2 mesh, and the processor number corresponding to a tuple (p1, p2) is determined by the embedding scheme described above. The distribution functions corresponding to the different parts, (a) through (f), of Figure 1 follow the form given above; the array subscripts are assumed to start with the value 1, as in Fortran.
The last example illustrates how our notation allows us to specify partial replication of data, i.e., replication
of an array dimension along a specific dimension of the processor grid. An array is replicated completely
on all the processors if the distribution function for each of its dimensions takes the value X.
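Read operationally, the distribution function is just an "owner" computation for one array dimension; the sketch below (function and parameter names are ours, not the paper's notation) covers contiguous, cyclic, and replicated dimensions:

def dimension_owner(i, block, nprocs, offset=0, cyclic=False, replicated=False):
    # Returns the component of the processor tuple that owns index i (1-based,
    # as in Fortran) of one array dimension; 'X' marks a replicated dimension.
    if replicated:
        return 'X'
    owner = (i - offset) // block
    if cyclic:                      # the optional "mod N" part of the function
        owner %= nprocs
    return owner

# 16 indices on 4 processors: contiguous blocks of 4 vs. cyclic blocks of 1.
print([dimension_owner(i, block=4, nprocs=4, offset=1) for i in range(1, 17)])
print([dimension_owner(i, block=1, nprocs=4, offset=1, cyclic=True) for i in range(1, 17)])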
If the dimensionality (D) of the processor topology is greater than the dimensionality (d) of an array,
we need D − d more distribution functions in order to completely specify the processor(s) owning a given
element of the array. These functions provide the remaining D − d numbers of the processor tuple. We
restrict these "functions" to take constant values, or the value X if the array is to be replicated along the
corresponding grid dimension.
Most of the arrays used in real scientific programs, such as routines from LINPACK and EISPACK
libraries and most of the Perfect Benchmark programs [6], have fewer than three dimensions. We believe
that even for programs with higher dimensional arrays, restricting the number of dimensions that can be
distributed across processors to two usually does not lead to any loss of effective parallelism. Consider the
completely parallel loop nest shown below:
do
do
do
Even though the loop has parallelism at all three levels, a two-dimensional grid topology in which Z 1 and Z 2 are distributed and Z 3 is sequentialized would give the same performance as a three-dimensional topology with the same number of processors, in which all of Z 1 , Z 2 , and Z 3 are distributed. In order to simplify our
strategy, and with the above observation providing some justification, we shall assume for now that our
underlying target topology is a two-dimensional mesh. For the sake of notation describing the distribution
of an array dimension on a grid dimension, we shall continue to regard the target topology conceptually as
a D-dimensional grid, with the restriction that the values of N 3 , ..., N D are later set to one.
3 Constraints on Data Distribution
The data references associated with each loop in the program indicate some desirable properties that the
final distribution for various arrays should have. We formulate these desirable characteristics as constraints
on the data distribution functions. Our use of this term differs slightly from its common usage in the sense
that constraints on data distribution represent requirements that should be met, and not requirements that
necessarily have to be met.
Corresponding to each statement assigning values to an array in a parallelizable loop, there are two kinds
of constraints, parallelization constraints and communication constraints. The former kind gives constraints
on the distribution of the array appearing on the left hand side of the assignment statement. The distribution
should be such that the array elements being assigned values in a parallelizable loop are distributed evenly
on as many processors as possible, so that we get good performance due to exploitation of parallelism. The
communication constraints try to ensure that the data elements being read in a statement reside on the same
processor as the one that owns the data element being written into. The motivation for that is the owner
computes rule [9] followed by almost all parallelization systems for multicomputers. According to that rule,
the processor responsible for a computation is the one that owns the data item being assigned a value in that
computation. Whenever that computation involves the use of a value not available locally on the processor,
there is a need for interprocessor communication. The communication constraints try to eliminate this need
for interprocessor communication, whenever possible.
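A small SPMD sketch makes the interplay concrete (all names, the one-dimensional array, and the fetch routine are illustrative assumptions): each processor executes only the assignments whose left-hand-side element it owns, and any right-hand-side element owned elsewhere has to be fetched, which is exactly the cost the communication constraints try to avoid.

def spmd_sweep(me, owner_A, owner_B, N, A, B, fetch):
    # One sweep of A[i] = B[i-1] + B[i+1] under the owner-computes rule.
    # owner_A/owner_B map an index to a processor id; fetch(p, name, idx)
    # stands for receiving element idx of a remote array from processor p.
    for i in range(1, N - 1):
        if owner_A(i) != me:
            continue                                    # computed elsewhere
        total = 0.0
        for idx in (i - 1, i + 1):
            if owner_B(idx) == me:
                total += B[idx]                         # local, no communication
            else:
                total += fetch(owner_B(idx), 'B', idx)  # communication needed
        A[i] = total

# Block distribution of 16 elements over 4 processors, with a trivial "fetch"
# that just reads the globally visible list in this sequential stand-in.
N, p = 16, 4
A, B = [0.0] * N, [float(i) for i in range(N)]
owner = lambda idx: min(idx // (N // p), p - 1)
for me in range(p):
    spmd_sweep(me, owner, owner, N, A, B, lambda proc, name, idx: B[idx])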
In general, depending on the kind of loop (a single loop may correspond to more than one category), we
have rules for imposing the following kinds of constraints:
1. Parallelizable loop in which array A gets assigned values - parallelization constraints on the distribution
of A.
2. Loop in which assignments to array A use values of array B - communication constraints specifying
the relationship between distributions of A and B.
3. Loop in which assignments to certain elements of A use values of different elements of A - communication
constraints on the distribution of A.
4. Loop in which a single assignment statement uses values of multiple elements of array B - communication
constraints on the distribution of B.
The constraints on the distribution of an array may specify any of the relevant parameters, such as the
number of processors on which an array dimension is distributed, whether the distribution is contiguous or
cyclic, and the block size of distribution. There are two kinds of constraints on the relationship between
distribution of arrays. One kind specifies the alignment between dimensions of different arrays. Two array
dimensions are said to be aligned if they get distributed on the same processor grid dimension. The other
kind of constraint on relationships formulates one distribution function in terms of the other for aligned
dimensions. For example, consider the loop shown below:
do
The data references in this loop suggest that A 1 should be aligned with B 1 , and A 2 should be sequentialized.
Secondly, they suggest the following distribution function for B 1 , in terms of that for A 1 : the distribution function of B 1 at index i equals the distribution function of A 1 at index ⌊i/c 2 ⌋. (1)
Thus, given parameters regarding the distribution of A, like the block size, the offset, and the number of
processors, we can determine the corresponding parameters regarding the distribution of B by looking at
the relationship between the two distributions.
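Relation (1) can be applied mechanically once A's distribution is chosen; the sketch below (a hypothetical helper, with c2 standing for the constant coefficient in the subscript of B) scales the block size and offset so that, up to rounding at block boundaries, the elements of B referenced for a local element of A stay local:

def derive_B1_from_A1(dist_A1, c2):
    # If the loop reads B[c2*i + ...] while writing A[i, ...], then
    # dist_B1(i) = dist_A1(floor(i / c2)), which is again a block(-cyclic)
    # distribution whose block size and offset are scaled by c2.
    return {
        'block': dist_A1['block'] * c2,
        'offset': dist_A1['offset'] * c2,
        'nprocs': dist_A1['nprocs'],
        'cyclic': dist_A1['cyclic'],
    }

dist_A1 = {'block': 4, 'offset': 1, 'nprocs': 4, 'cyclic': False}
print(derive_B1_from_A1(dist_A1, c2=2))   # -> block 8, offset 2, same mapping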
Intuitively, the notion of constraints provides an abstraction of the significance of each loop with respect to
data distribution. The distribution of each array involves taking decisions regarding a number of parameters
described earlier, and each constraint specifies only the basic minimal requirements on distribution. Hence
the parameters related to the distribution of an array left unspecified by a constraint may be selected by
combining that constraint with others specifying those parameters. Each such combination leads to an
improvement in the distribution scheme for the program as a whole.
However, different parts of the program may also impose conflicting requirements on the distribution of
various arrays, in the form of constraints inconsistent with each other. In order to resolve those conflicts,
we associate a measure of quality with each constraint. Depending on the kind of constraint, we use one
of the following two quality measures - the penalty in execution time, or the actual execution time. For
constraints which are finally either satisfied or not satisfied by the data distribution scheme (we refer to
them as boolean constraints, an example of such a constraint is one specifying the alignment of two array
dimensions), we use the first measure which estimates the penalty paid in execution time if that constraint is
not honored. For constraints specifying the distribution of an array dimension over a number of processors,
we use the second measure which expresses the execution time as a simple function of the number of proces-
sors. Depending on whether a constraint affects the amount of parallelism exploited or the interprocessor
communication requirement, or both, the expression for its quality measure has terms for the computation
time, the communication time, or both.
One problem with estimating the quality measures of constraints is that they may depend on certain
parameters of the final distribution that are not known beforehand. We express those quality measures
as functions of parameters not known at that stage. For instance, the quality measure of a constraint on
alignment of two array dimensions depends on the numbers of processors on which the two dimensions are
otherwise distributed, and is expressed as a function of those numbers.
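One simple way to carry such quality measures around is as callables over the yet-unknown processor counts, to be evaluated once the grid shape is fixed; the cost expression in the sketch below is a made-up placeholder rather than the estimator's actual formula.

# Penalty of a boolean (alignment) constraint, left as a function of the unknown
# processor counts N_I and N_J; the cost model used here is purely illustrative.
def alignment_penalty(n_elems: int, word_transfer_cost: float):
    def penalty(N_I: int, N_J: int) -> float:
        # if the dimensions are not aligned, the n_elems/(N_I*N_J) local elements
        # are assumed to be replicated across the rest of the grid
        return word_transfer_cost * (n_elems / (N_I * N_J)) * (N_I * N_J - 1)
    return penalty

p = alignment_penalty(n_elems=512 * 512, word_transfer_cost=0.5e-6)
print(p(4, 4), p(16, 1))   # evaluated only after the grid shape is known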
4 Determining Constraints and their Quality Measures
The success of our strategy for data partitioning depends greatly on the compiler's ability to recognize
data reference patterns in various loops of the program, and to record the constraints indicated by those
references, along with their quality measures. We limit our attention to statements that involve assignment
to arrays, since all scalar variables are replicated on all the processors. The computation time component
of the quality measure of a constraint is determined by estimating the time for sequential execution based
on a count of various operations, and by estimating the speedup. Determining the communication time
component is a relatively tougher problem. We have developed a methodology for compile-time estimation
of communication costs incurred by a program [8]. It is based on identifying the primitives needed to carry
out interprocessor communication and determining the message sizes. The communication costs are obtained
as functions of the numbers of processors over which various arrays are distributed, and of the method of
partitioning, namely, contiguous or cyclic. The quality measures of various communication constraints are
based on the estimates obtained by following this methodology. We shall briefly describe it here, further
details can be found in [8].
Communication Primitives We use array reference patterns to determine which communication routines
out of a given library best realize the required communication for various loops. This idea was first
presented by Li and Chen [14] to show how explicit communication can be synthesized by analyzing data
reference patterns. We have extended their work in several ways, and are able to handle a much more
comprehensive set of patterns than those described in [14]. We assume that the following communication
routines are supported by the operating system or by the run-time library:
sending a message from a single source to a single destination processor.
OneToManyMulticast : multicasting a message to all processors along the specified dimension(s) of the
processor grid.
reducing (in the sense of the APL reduction operator) data using a simple associative
operator, over all processors lying on the specified grid dimension(s).
ManyToManyMulticast : replicating data from all processors on the given grid dimension(s) on to
themselves.
Table 1 shows the cost complexities of functions corresponding to these primitives on the hypercube
architecture. The parameter m denotes the message size in words, seq is a sequence of numbers representing
the numbers of processors in various dimensions over which the aggregate communication primitive is carried
out. The function num applied to a sequence simply returns the total number of processors represented
by that sequence, namely, the product of all the numbers in that sequence. In general, a parallelization
system written for a given machine must have a knowledge of the actual timing figures associated with these
primitives on that machine. One possible approach to obtaining such timing figures is the "training set
method" that has recently been proposed by Balasundaram et al. [2].
Subscript Types An array reference pattern is characterized by the loops in which the statement appears,
and the kind of subscript expressions used to index various dimensions of the array. Each subscript expression
is assigned to one of the following categories:
• constant: if the subscript expression evaluates to a constant at compile time.
• index: if the subscript expression reduces to the form c1 + c2 * i, where c1 and c2 are constants and i is a loop index. Note that induction variables corresponding to a single loop index also fall in this category.
• variable: this is the default case, and signifies that the compiler has no knowledge of how the subscript expression varies with different iterations of the loop.
For subscripts of the type index or variable, we define a parameter called change-level, which is the level
of the innermost loop in which the subscript expression changes its value. For a subscript of the type index,
that is simply the level of the loop that corresponds to the index appearing in the expression.
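A rough illustration of this classification, operating on already-parsed subscript expressions rather than on real compiler IR, might look as follows; the tuple encoding of affine expressions and the treatment of the variable case are assumptions of the sketch.

# Classify a subscript expression and compute its change-level.  An expression is
# represented here as an int (constant), a tuple ("affine", c1, c2, index_name),
# or anything else (unknown to the compiler).
def classify_subscript(expr, loop_nest):
    """loop_nest lists the enclosing loop index names, outermost first (levels 1, 2, ...)."""
    if isinstance(expr, int):
        return "constant", None
    if isinstance(expr, tuple) and expr[0] == "affine":
        _, c1, c2, index = expr
        return "index", loop_nest.index(index) + 1   # level of the loop whose index appears
    # default: assume the value may change in the innermost loop (conservative choice)
    return "variable", len(loop_nest)

print(classify_subscript(3, ["i", "j"]))                      # ('constant', None)
print(classify_subscript(("affine", 1, 2, "j"), ["i", "j"]))  # ('index', 2)
print(classify_subscript("k*k", ["i", "j"]))                  # ('variable', 2)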
Method For each statement in a loop in which the assignment to an array uses values from the same or
a different array (we shall refer to the arrays appearing on the left hand side and the right hand side of the
assignment statement as lhs and rhs arrays), we express estimates of the communication costs as functions
of the numbers of processors on which various dimensions of those arrays are distributed. If the assignment
statement has references to multiple arrays, the same procedure is repeated for each rhs array. For the sake
of brevity, here we shall give only a brief outline of the steps of the procedure. The details of the algorithm
associated with each step are given in [8].
1. For each loop enclosing the statement (the loops need not be perfectly nested inside one another),
determine whether the communication required (if any) can be taken out of that loop. This step
ensures that whenever different messages being sent in different iterations of a loop can be combined,
we recognize that opportunity and use the cost functions associated with aggregate communication
primitives rather than those associated with repeated Transfer operations. Our algorithm for taking
this decision also identifies program transformations, such as loop distribution and loop permutations,
that expose opportunities for combining of messages.
2. For each rhs reference, identify the pairs of dimensions from the arrays on rhs and lhs that should be
aligned. The communication costs are determined assuming such an alignment of array dimensions.
To determine the quality measures of alignment constraints, we simply have to obtain the difference
in costs between the cases when the given dimensions are aligned and when they are not.
3. For each pair of subscripts in the lhs and rhs references corresponding to aligned dimensions, identify
the communication term(s) representing the "contribution" of that pair to the overall communication
costs. Whenever at least one subscript in that pair is of the type index or variable, the term represents
a contribution from an enclosing loop identified by the value of change-level. The kind of contribution
from a loop depends on whether or not the loop has been identified in step 1 as one from which
communication can be taken outside. If communication can be taken outside, the term contributed by
that loop corresponds to an aggregate communication primitive, otherwise it corresponds to a repeated
Transfer.
4. If there are multiple references in the statement to an rhs array, identify the isomorphic references,
namely, the references in which the subscripts corresponding to each dimension are of the same type.
The communication costs pertaining to all isomorphic references are obtained by looking at the costs
corresponding to one of those references, as in step 3, and determining how they get modified by
"adjustment" terms from the remaining references.
5. Once all the communication terms representing the contributions of various loops and of various loop-
independent subscript pairs have been obtained, compose them together using an appropriate ordering,
and determine the overall communication costs involved in executing the given assignment statement
in the program.
Examples We now present some example program segments to show the kind of constraints inferred from
data references and the associated quality measures obtained by applying our methodology. Along with
each example, we provide an explanation to justify the quality measure derived for each constraint. The
expressions for quality measures are, however, obtained automatically by following our methodology.
Example 1: do
do
Parallelization Constraints: Distribute A 1 and A 2 in a cyclic manner.
Our example shows a multiply nested parallel loop in which the extent of variation of the index in an inner
loop varies with the value of the index in an outer loop. A simplified analysis indicates that if A 1 and A 2
are distributed in a cyclic manner, we would obtain a speedup of nearly N , otherwise the imbalance caused
by contiguous distribution would lead to the effective speedup decreasing by a factor of two. If C_p is the estimated time for sequential execution of the given program segment, the two choices give estimated parallel times of roughly C_p/N and 2C_p/N, so the quality measure (the penalty paid if the distribution is not cyclic) is about C_p/N.
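The factor-of-two estimate is easy to check by counting iterations per processor for a triangular iteration space; the loop bounds in the sketch below are a stand-in, since the loop body itself is not reproduced above.

# Load balance of contiguous vs. cyclic row distribution for a triangular nest:
# row i performs roughly i units of work, i = 1..n.
def max_load(n, nprocs, cyclic):
    load = [0] * nprocs
    for i in range(1, n + 1):
        p = (i - 1) % nprocs if cyclic else (i - 1) * nprocs // n
        load[p] += i
    return max(load)

n, P = 512, 16
total = n * (n + 1) // 2
print(total / max_load(n, P, cyclic=True))    # speedup close to P
print(total / max_load(n, P, cyclic=False))   # roughly P/2 with contiguous blocks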
Example 2: do
do
Communication Constraints: Align the dimensions of A with the corresponding dimensions of B, and ensure that their distributions are related in the following manner:
dist_B2(j) = dist_A2(j)    (2)

dist_B1(i) = dist_A1(⌊i / c⌋)    (3)
If the dimension pairs we mentioned are not aligned, or if the above relationships do not hold, the elements of B residing on a processor may be needed by any other processor. Hence all the n_1 n_2 / (N_I N_J) elements held by each processor are replicated on all the processors.
Example 3: do
do
Communication Constraints:
• Align A_1 with the corresponding dimension of B. As seen in the previous example, the values of B held by each of the N_I N_J processors have to be replicated if the indicated dimensions are not aligned.
• If the first aligned dimension is distributed on N_I > 1 processors, each processor needs to get elements on the "boundary" rows of the two "neighboring" processors. The given term indicates that a Transfer operation takes place only if the condition N_I > 1 holds.
• The analysis for the second aligned dimension is similar to that for the previous case.
• Distribute the first aligned dimension in a contiguous manner. If it is distributed cyclically, each processor needs to communicate all of its B elements to its two neighboring processors.
• Distribute the second aligned dimension in a contiguous manner. The analysis is similar to that for the previous case.
Note: The above loop also has parallelization constraints associated with it. If C_p indicates the estimated sequential execution time of the loop, then combining the computation time estimate given by the parallelization constraint with the communication time estimates given above yields an execution-time expression of the form

T = C_p / (N_I N_J) + (communication terms given above)
It is interesting to see that the above expression captures the relative advantages of distribution of arrays A
and B by rows, columns, or blocks for different cases corresponding to the different values of n 1 and n 2 .
5 Strategy for Data Partitioning
The basic idea in our strategy is to consider all constraints on distribution of various arrays indicated by the
important segments of the program, and combine them in a consistent manner to obtain the overall data
distribution. We resolve conflicts between mutually inconsistent constraints on the basis of their quality
measures.
The quality measures of constraints are often expressed in terms of n_i (the number of elements along an array dimension) and N_I (the number of processors on which that dimension is distributed). To compare
them numerically, we need to estimate the values of n i and N I . The value of n i may be supplied by the
user through an assertion, or specified in an interactive environment, or it may be estimated by the compiler
on the basis of the array declarations seen in the program. The need for values of variables of the form N_I poses a circular problem: these values become known only after the final distribution scheme has been
determined, and are needed at a stage when decisions about data distribution are being taken. We break this
circularity by assuming initially that all array dimensions are distributed on an equal number of processors.
Once enough decisions on data distribution have been taken so that for each boolean constraint we know
whether it is satisfied or not, we start using expressions for execution time as functions of various N I , and
determine their actual values so that the execution time is minimized.
Our strategy for determining the data distribution scheme, given information about all the constraints,
consists of the steps given below. Each step involves taking decisions about some aspect of the data distribu-
tion. In this manner, we keep building upon the partial information describing the data partitioning scheme
until the complete picture emerges. Such an approach fits in naturally with our idea of using constraints on
distributions, since each constraint can itself be looked upon as a partial specification of the data distribu-
tion. All the steps presented here are simple enough to be automated. Hence the "we" in our discussion
really refers to the parallelizing compiler.
1. Determine the alignment of dimensions of various arrays: This problem has been referred to as the
component alignment problem by Li and Chen in [15]. They prove the problem NP-complete and
give an efficient heuristic algorithm for it. We adapt their approach to our problem and use their
algorithm to determine the alignment of array dimensions. An undirected, weighted graph called a
component affinity graph (CAG) is constructed from the source program. The nodes of the graph
represent dimensions of arrays. For every constraint on the alignment of two dimensions, an edge
having a weight equal to the quality measure of the constraint is generated between the corresponding
two nodes. The component alignment problem is defined as partitioning the node set of the CAG
into D (D being the maximum dimension of arrays) disjoint subsets so that the total weight of edges
across nodes in different subsets is minimized, with the restriction that no two nodes corresponding to
the same array are in the same subset. Thus the (approximate) solution to the component alignment
problem indicates which dimensions of various arrays should be aligned. We can now establish a
one-to-one correspondence between each class of aligned array dimensions and a virtual dimension of
the processor grid topology. Thus, the mapping of each array dimension to a virtual grid dimension
becomes known at the end of this step.
2. Sequentialize array dimensions that need not be partitioned : If in a given class of aligned array dimen-
sions, there is no dimension which necessarily has to be distributed across more than one processor to
get any speedup (this is determined by looking at all the parallelization constraints), we sequentialize
all dimensions in that class. This can lead to significant savings in communication costs without any
loss of effective parallelism.
3. Determine the following parameters for distribution along each dimension - contiguous/cyclic and
relative block sizes: For each class of dimensions that is not sequentialized, all array dimensions with
the same number of elements are given the same kind of distribution, contiguous or cyclic. For all
such array dimensions, we compare the sum total of quality measures of the constraints advocating
contiguous distribution and those favoring cyclic distribution, and choose the one with the higher total
quality measure. Thus a collective decision is taken on all dimensions in that class to maximize overall
gains.
If an array dimension is distributed over a certain number of processors in a contiguous manner, the
block size is determined by the number of elements along that dimension. However, if the distribution
is cyclic, we have some flexibility in choosing the size of blocks that get cyclically distributed. Hence,
if cyclic distribution is chosen for a class of aligned dimensions, we look at constraints on the relative
block sizes pertaining to the distribution of various dimensions in that class. All such constraints
may not be mutually consistent. Hence, the strategy we adopt is to partition the given class of aligned
dimensions into equivalence sub-classes, where each member in a sub-class has the same block size. The
assignment of dimensions to these sub-classes is done by following a greedy approach. The constraints
implying such relationships between two distributions are considered in the non-increasing order of
their quality measures. If any of the two concerned array dimensions has not yet been assigned to a
sub-class, the assignment is done on the basis of their relative block sizes implied by that constraint.
If both dimensions have already been assigned to their respective sub-classes, the present constraint
is ignored, since the assignment must have been done using some constraint with a higher quality
measure. Once all the relative block sizes have been determined using this heuristic, the smallest block
size is fixed at one, and the related block sizes determined accordingly.
4. Determine the number of processors along each dimension: At this point, for each boolean constraint
we know whether it has been satisfied or not. By adding together the terms for computation time and
communication time with the quality measures of constraints that have not been satisfied, we obtain an
expression for the estimated execution time. Let D_0 denote the number of virtual grid dimensions not yet sequentialized at this point. The expression obtained for execution time is a function of variables representing the numbers of processors along the corresponding grid dimensions. For most real programs, we expect the value of D_0 to be two or one. If D_0 > 2, we first sequentialize all except for two of the given dimensions based on the following heuristic. We evaluate the execution time expression of the program for C(D_0, 2) cases, each case corresponding to two different N_i variables set to sqrt(N) and the other D_0 - 2 set to 1 (N is the total number of processors in the system). The case which gives the smallest value for execution time is chosen, and the corresponding D_0 - 2 dimensions are sequentialized.
Once we get down to two dimensions, the execution time expression is a function of just one variable, N_1, since N_2 is given by N/N_1. We now evaluate the execution time expression for different values of N_1, the various factors of N ranging from 1 to N, and select the one which leads to the smallest execution time (a sketch of this search is given after step 5).
5. Take decisions on replication of arrays or array dimensions: We take two kinds of decisions in this step.
The first kind consists of determining the additional distribution function for each one-dimensional
array when the finally chosen grid topology has two real dimensions. The other kind involves deciding
whether to override the given distribution function for an array dimension to ensure that it is replicated
rather than partitioned over processors in the corresponding grid dimension. We assume that there
is enough memory on each processor to support replication of any array deemed necessary. (If this
assumption does not hold, the strategy simply has to be modified to become more selective about
choosing arrays or array dimensions for replication).
The second distribution function of a one-dimensional array may be an integer constant, in which case
each array element gets mapped to a unique processor, or may take the value X, signifying that the
elements get replicated along that dimension. For each array, we look at the constraints corresponding
to the loops where that array is being used. The array is a candidate for replication along the second
grid dimension if the quality measure of some constraint not being satisfied shows that the array has
to be multicast over that dimension. An example of such an array is the array B in the example loop
shown in Section 3, if A 2 is not sequentialized. A decision favoring replication is taken only if each
time the array is written into, the cost of all processors in the second grid dimension carrying out that
computation is less than the sum of costs of performing that computation on a single processor and
multicasting the result. Note that the cost for performing a computation on all processors can turn
out to be less only if all the values needed for that computation are themselves replicated. For every
one-dimensional array that is not replicated, the second distribution function is given the constant
value of zero.
A decision to override the distribution function of an array dimension from partitioning to replication
on a grid dimension is taken very sparingly. Replication is done only if no array element is written
more than once in the program, and there are loops that involve sending values of elements from that
array to processors along that grid dimension.
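As mentioned at the end of step 4, the final grid shape can be found by brute-force evaluation of the execution-time expression over the factors of N. The sketch below assumes that expression has already been reduced to a Python callable T(N_1, N_2); the particular T shown is made up for illustration.

# Step 4 (sketch): pick N1 among the factors of N so that the estimated
# execution time T(N1, N2), with N2 = N // N1, is minimized.
def choose_grid_shape(T, N):
    candidates = [d for d in range(1, N + 1) if N % d == 0]
    return min(candidates, key=lambda n1: T(n1, N // n1))

# A made-up execution-time expression: computation shrinks with more processors,
# one communication term grows with N1, another vanishes when N2 == 1.
def T(n1, n2, Cp=1.0e6, n=512):
    comm = 5.0 * n * n1 + (0.0 if n2 == 1 else 2.0 * n * n2)
    return Cp / (n1 * n2) + comm

N = 16
n1 = choose_grid_shape(T, N)
print(n1, N // n1)

The same exhaustive evaluation extends to the D_0 > 2 case by first trying each pair of dimensions at sqrt(N) each, as described in step 4.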
A simple example illustrating how our strategy combines constraints across loops is shown below:
do
do
do
The first loop imposes constraints on the alignment of A_1 with the matching dimension of B, since the same variable is being used as a subscript in those dimensions. It also suggests sequentialization of A_2, so that regardless of the values of c_1 and c_2, the elements they index may reside on the same processor. The second loop imposes
a requirement that the distribution of A be cyclic. The compiler recognizes that the range of the inner loop
is fixed directly by the value of the outer loop index, hence there would be a serious imbalance of load on
processors carrying out the partial summation unless the array is distributed cyclically. These constraints
are all consistent with each other and get accepted in steps 1, 4 and 3 respectively, of our strategy. Hence
finally, the combination of these constraints leads to the following distributions - row-wise cyclic for A, and
column-wise cyclic for B.
In general, there can be conflicts at each step of our strategy because of different constraints implied
by various loops not being consistent with each other. Such conflicts get resolved on the basis of quality
measures.
6 Study of Numeric Programs
Our approach to automatic data partitioning presupposes the compiler's ability to identify various dependences
in a program. We are currently in the process of implementing our approach using Parafrase-2 [17], a
source-to-source restructurer being developed at the University of Illinois, as the underlying tool for analyzing
programs. Prior to that, we performed an extensive study using some well known scientific application
programs to determine the applicability of our proposed ideas to real programs. One of our objectives was
to determine to what extent a state-of-the-art parallelizing compiler can provide information about data
references in a program so that our system may infer appropriate constraints on the distribution of arrays.
However, even when complete information about a program's computation structure is available, the problem
of determining an optimal data decomposition scheme is NP-hard. Hence, our second objective was to
find out if our strategy leads to good decisions on data partitioning, given enough information about data
references in a program.
Application Programs Five different Fortran programs of varying complexity are used in this study.
The simplest program chosen uses the routine dgefa from the Linpack library. This routine factors a real
matrix using gaussian elimination with partial pivoting. The next program uses the Eispack library routine,
tred2 , which reduces a real symmetric matrix to a symmetric tridiagonal matrix, using and accumulating
orthogonal similarity transformations. The remaining three programs are from the Perfect Club Benchmark
Suite [6]. The program trfd simulates the computational aspects of a two-electron integral transformation.
The code for mdg provides a molecular dynamics model for water molecules in the liquid state at room
temperature and pressure. Flo52 is a two-dimensional code providing an analysis of the transonic inviscid
flow past an airfoil by solving the unsteady Euler equations.
Methodology The testbed for implementation and evaluation of our scheme is the Intel iPSC/2 hyper-cube
system. Our objective is to obtain good data partitionings for programs running on a 16-processor
configuration. Obtaining the actual values of quality measures for various constraints requires us to have a
knowledge of the costs of various communication primitives and of arithmetic operations on the machine.
We use the following approximate function [10] to estimate the time taken, in microseconds, to complete a
Transfer operation on l bytes:
In our parallel code, we implement the ManyToManyMulticast primitive as repeated calls to the OneToMany-
Multicast primitive; hence in our estimates of the quality measures, the former function is expressed in terms of the latter one. Each OneToManyMulticast operation sending a message to p processors is assumed to be ⌈log2 p⌉ times as expensive as a Transfer operation for a message of the same size. The time taken to execute
a double precision floating point add or multiply operation is taken to be approximately 5 microseconds.
The floating point division is assumed to be twice as expensive, a simple assignment (load and store) about
one-tenth as expensive, and the overhead of making an arithmetic function call about five times as much.
The timing overhead associated with various control instructions is ignored.
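These assumptions translate into a small table of machine parameters. In the sketch below, the Transfer cost function is only a placeholder with a generic startup-plus-per-byte form, since the approximate function of [10] is not reproduced here; the remaining ratios follow the text, and the ⌈log2 p⌉ multicast factor is itself an assumption.

import math

def transfer(nbytes, startup_us=350.0, per_byte_us=0.4):
    # placeholder parameters; not the measured iPSC/2 figures of [10]
    return startup_us + per_byte_us * nbytes

def one_to_many_multicast(nbytes, p):
    # assumed ceil(log2 p) times as expensive as a single Transfer of the same size
    return math.ceil(math.log2(max(p, 2))) * transfer(nbytes)

def many_to_many_multicast(nbytes, p):
    # implemented in the parallel code as repeated OneToManyMulticast calls
    return p * one_to_many_multicast(nbytes, p)

FLOP_US   = 5.0           # double-precision add or multiply
DIV_US    = 2 * FLOP_US   # division: twice as expensive
ASSIGN_US = FLOP_US / 10  # load/store: about one-tenth
CALL_US   = 5 * FLOP_US   # overhead of an arithmetic function call

print(one_to_many_multicast(8 * 512, 16))   # multicast 512 doubles to 16 processors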
In this study, apart from the use of Parafrase-2 to indicate which loops were parallelizable, all the steps
of our approach were simulated by hand. We used this study more as an opportunity to gain further insight
into the data decomposition problem and examine the feasibility of our approach.
Results For a large majority of loops, Parafrase-2 is able to generate enough information to enable
appropriate formulation of constraints and determination of their quality measures by our approach. There
are some loops for which the information about parallelization is not adequate. Based on an examination of
all the programs, we have identified the following techniques with which the underlying parallelizing compiler
used in implementing our approach needs to be augmented.
• More sophisticated interprocedural analysis - there is a need for constant propagation across procedures
[3], and in some cases, we need additional reference information about variables in the procedure or
in-line expansion of the procedure.
• An extension of the idea of scalar expansion, namely, the expansion of small arrays. This is essential
to get the benefits of our approach in which, like scalar variables, we also treat small arrays as being
replicated on all processors. This helps in the removal of anti-dependence and output-dependence from
loops in a number of cases, and often saves the compiler from getting "fooled" into parallelizing the
smaller loops involving those arrays, at the expense of leaving the bigger parallelizable loops sequential.
• Recognition of reduction operators, so that a loop with such an operator may get parallelized appro-
priately. Examples of these are the addition, the min and the max operators.
Since none of these features are beyond the capabilities of Parafrase-2 when it gets fully developed (or
for that matter, any state-of-the-art parallelizing compiler), we assume in the remaining part of our study
that these capabilities are present in the parallelizing compiler supporting our implementation.
We now present the distribution schemes for various arrays we arrive at by applying our strategy to
the programs after various constraints and the associated quality measures have been recorded. Table 2
summarizes the final distribution of significant arrays for all the programs. We use the following informal
notation in this description. For each array, we indicate the number of elements along each dimension and
specify how each dimension is distributed (cyclic/contiguous/replicated). We also indicate the number of
processors on which that dimension is distributed. For the special case in which that number is one, we
indicate that the dimension has been sequentialized. For example, the first entry in the table shows that
the 2-D arrays A and Z consisting of 512 x 512 elements each are to be distributed cyclically by rows on
processors. We choose the tred2 routine to illustrate the steps of our strategy in greater detail, since it
is a small yet reasonably complex program which defies easy determination of "the right" data partitioning
scheme by simple inspection. For the remaining programs, we explain on what basis certain important
decisions related to the formulation of constraints on array distributions in them are taken, sometimes with
the help of sample program segments. For the tred2 program, we show the effectiveness of our strategy
through actual results on the performance of different versions of the parallel program implemented on the
iPSC/2 using different data partitioning strategies.
TRED2 The source code of tred2 is listed in Figure 2. Along with the code listing, we have shown the
probabilities of taking the branch on various conditional go to statements. These probabilities are assumed to
be supplied to the compiler. Also, corresponding to each statement in a loop that imposes some constraints,
we have indicated which of the four categories (described in Section 3) the constraint belongs to.
Based on the alignment constraints, a component affinity graph (CAG), shown in Figure 3, is constructed
for the program. Each node of a CAG [15] represents an array dimension; the weight on an edge denotes
the communication cost incurred if the array dimensions represented by the two nodes corresponding to that
edge are not aligned. The edge weights for our CAG are as follows:
N_I * OneToManyMulticast(n/N_I, ⟨N_J⟩)    (line 83)
Along with each term, we have indicated the line number in the program to which the constraint corresponding
to the quality measure may be traced. The total number of processors is denoted by N , while
N I and N J refer to the number of processors along which various array dimensions are initially assumed to
be distributed. Applying the algorithm for component alignment [15] on this graph, we get the following
classes of dimensions: class 1 consisting of A_1, Z_1, D_1, E_1, and class 2 consisting of A_2, Z_2. These classes
get mapped to the dimensions 1 and 2 respectively of the processor grid.
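A small illustration of this step is sketched below: it builds a CAG from alignment constraints and partitions its nodes with a simple greedy pass, heaviest edges first. This crude heuristic merely stands in for the component-alignment algorithm of [15], and the edge weights used are invented, since the full weight list for the tred2 CAG is not reproduced above.

# Nodes are (array, dimension) pairs; an edge weight is the penalty paid if the two
# dimensions are NOT mapped to the same processor-grid dimension.
edges = [
    (("A", 1), ("Z", 1), 900.0),
    (("A", 2), ("Z", 2), 900.0),
    (("Z", 1), ("D", 1), 450.0),
    (("Z", 1), ("E", 1), 450.0),
]

def greedy_align(edges, max_dim=2):
    """Assign each array dimension to one of max_dim classes, heaviest edges first,
    never placing two dimensions of the same array in the same class."""
    cls = {}
    for u, v, _w in sorted(edges, key=lambda e: -e[2]):
        for a, b in ((u, v), (v, u)):
            if a in cls and b not in cls:
                taken = {cls[x] for x in cls if x[0] == b[0]}   # classes already used by b's array
                if cls[a] not in taken:
                    cls[b] = cls[a]
        if u not in cls and v not in cls:
            used = {cls[x] for x in cls if x[0] in (u[0], v[0])}
            cls[u] = cls[v] = next(c for c in range(max_dim) if c not in used)
    return cls

print(greedy_align(edges))
# -> A_1, Z_1, D_1, E_1 land in one class and A_2, Z_2 in the other, as in the text.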
In Step 2 of our strategy, none of the array dimensions is sequentialized because there are parallelization
constraints favoring the distribution of both dimensions Z 1 and Z 2 . In Step 3, the distribution functions for
all array dimensions in each of the two classes are determined to be cyclic, because of numerous constraints
on each dimension of arrays Z, D and E favoring cyclic distribution. The block size for the distribution for
each of the aligned array dimensions is set to one. Hence, at the end of this step, the distributions for various
array dimensions are:
Moving on to Step 4, we now determine the value of N_1; the value of N_2 then simply gets fixed as N/N_1. By
adding together the actual time measures given for various constraints, and the penalty measures of various
constraints not getting satisfied, we get the following expression for execution time of the program (only the
part dependent on N 1 ).
For N = 16 (in fact, for all values of N ranging from 4 to 16), we see that the above expression for execution time gets minimized when the value of N_1 is set to N. This is easy to see since the first term (appearing in boldface), which dominates the expression, vanishes when N_1 = N. Incidentally, that term comes from the quality measures of the various constraints to sequentialize Z_2. The real processor grid, therefore, has only one dimension, and all array dimensions in the second class get sequentialized. Hence
the distribution functions for the array dimensions at the end of this step are:
Since we are using the processor grid as one with a single dimension, we do not need the second distribution
function for the arrays D and E to uniquely specify which processors own various elements of these arrays.
None of the array dimensions is chosen for replication in Step 5. As specified above by the formal definitions
of distribution functions, the data distribution scheme that finally emerges is - distribute arrays A and Z
by rows in a cyclic fashion, distribute arrays D and E also cyclically, on all the N processors.
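In terms of ownership, the chosen scheme amounts to the simple mappings below (a sketch using 1-based Fortran-style indices, with N denoting the number of processors).

# Ownership mappings implied by the chosen tred2 distribution.
def owner_A(i, j, N):     # A and Z: rows cyclic, columns not partitioned
    return (i - 1) % N

def owner_D(i, N):        # D and E: cyclic over all N processors
    return (i - 1) % N

# e.g. with N = 16, rows 1, 17, 33, ... of A and Z live on processor 0
assert all(owner_A(i, 1, 16) == 0 for i in (1, 17, 33))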
DGEFA The dgefa routine operates on a single n x n array A, which is factorized using gaussian elimination
with partial pivoting. Let N 1 and N 2 denote, respectively, the number of processors over which the
rows and the columns of the array are distributed. The loop computing the maximum of all elements in a
column (pivot element) and the one scaling all elements in a column both yield execution time terms that
show the communication time part increasing and the computation time part decreasing due to increased
parallelism, with increase in N 1 . The loop involving exchange of rows (due to pivoting) suggests a constraint
to sequentialize A 1 , i.e., setting N 1 to 1, to internalize the communication corresponding to the exchange of
rows. The doubly-nested loop involving update of array elements corresponding to the triangularization of
a column shows parallelism and potential communication (if parallelization is done) at each level of nesting.
All these parallelizable loops are nested inside a single inherently sequential loop in the program, and the
number of iterations they execute varies directly with the value of the outer loop index. Hence these loops
impose constraints on the distribution of A along both dimensions to be cyclic, to have a better load balance.
The compiler needs to know the value of n to evaluate the expression for execution time with N_1 (or, equivalently, N_2) being the only unknown. We present results for two cases, the larger being n = 256. The analysis shows that for the first case the compiler would come up with N_1 = 1 and N_2 = 16, whereas for the second case it would choose N_1 = 2 and N_2 = 8. Thus, given information about the value of n, the compiler would favor column-cyclic
distribution of A for smaller values of n, and grid-cyclic distribution for larger values of n.
TRFD The trfd benchmark program goes through a series of passes, each pass essentially involves setting
up some data arrays and making repeated calls in a loop to a particular subroutine. We apply our approach
to get the distribution for arrays used in that subroutine. There are nine arrays that get used, as shown in
Table 3 (some of them are actually aliases of each other). To give a flavor for how these distributions get
arrived at, we show some program segments below:
do
do 70
do
70 continue
The first loop leads to the following constraints - alignment of xrsiq 1 with v 2 , and sequentialization of xrsiq 2
and v 1 . The second loop advocates alignment of xij 1 with v 2 , sequentialization of v 1 , and cyclic distribution
of xij 1 (since the number of iterations of the inner loop modifying xij varies with the value of the outer
loop index). The constraint on cyclic distribution gets passed on from xij 1 to v 2 , and from v 2 to xrsiq 1 . All
these constraints together imply a column-cyclic distribution for v, a row-cyclic distribution for xrsiq, and
a cyclic distribution for xij. Similarly, appropriate distribution schemes are determined for the arrays xijks
and xkl. The references involving arrays xrspq, xijrs, xrsij, and xijkl are somewhat complex; the variation of subscripts in many of these references cannot be analyzed by the compiler. The distribution of these arrays
is specified as being contiguous to reduce certain communication costs involving broadcast of values of these
arrays.
MDG This program uses two important arrays, var and vm, and a number of small arrays which are all
replicated according to our strategy. The array vm is divided into three parts, which get used as different
arrays named xm, ym, zm in various subroutines. Similarly, the array var gets partitioned into twelve parts
which appear in different subroutines with different names. In Table 3 we show the distributions of these
individual, smaller arrays. The arrays fx, fy, fz all correspond to two different parts of var each, in different
invocations of the subroutine interf.
In the program there are numerous parallelizable loops in each of which three distinct contiguous elements
of an array corresponding to var get accessed together in each iteration. They lead to constraints that the
distributions of those arrays use a block size that is a multiple of three. There are doubly nested loops in
subroutines interf and poteng operating over some of these arrays with the number of iterations of the inner
loop varying directly with the value of the outer loop index. As seen for the earlier programs, this leads
to constraints on those arrays to be cyclically distributed. Combined with the previous constraints, we get
a distribution scheme in which those arrays are partitioned into blocks of three elements distributed
cyclically. We show parts of a program segment, using a slightly changed syntax (to make the code more
concise), that illustrates how the relationship between distributions of parts of var (such as x) and parts of vm (such as xm) is established:
do 1000
1000 continue
In this loop, the variables iw0, iw1, iw2 get recognized as induction variables and are expressed in terms
of the loop index i. The references in the loop establish the correspondence of each element of xm with
a three-element block of x, and yield similar relationships involving arrays ym and zm. Hence the arrays
xm, ym, zm are given a cyclic distribution (completely cyclic distribution with a block size of one).
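The resulting correspondence between the coordinate arrays and the per-molecule arrays can be written down directly; the sketch below uses 1-based indices and N processors, and the index arithmetic is our own illustration of the layout rather than code from the benchmark.

# mdg (sketch): x, y, z are distributed cyclically in blocks of three elements,
# xm, ym, zm purely cyclically, so molecule k and its three coordinates co-reside.
def owner_x(i, N):       # x, y, z: block-cyclic with block size 3
    return ((i - 1) // 3) % N

def owner_xm(k, N):      # xm, ym, zm: cyclic with block size 1
    return (k - 1) % N

N = 16
# coordinates 3k-2, 3k-1, 3k of x belong to molecule k and land on the same processor as xm(k)
assert all(owner_x(3 * k - 2, N) == owner_x(3 * k, N) == owner_xm(k, N) for k in range(1, 100))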
FLO52 This program has computations involving a number of significant arrays shown in Table 3. Many
arrays declared in the main program really represent a collection of smaller arrays of different sizes, referenced
in different steps of the program by the same name. For instance, the array w is declared as a big 1-D array
in the main program, different parts of which get supplied to subroutines such as euler as a parameter (the
formal parameter is a 3-D array w) in different steps of the program. In such cases, we always refer to these
smaller arrays passed to various subroutines, when describing the distribution of arrays. Also, when the size
of an array such as w varies in different steps of the program, the entry for size of the array in the table
indicates that of the largest array.
A number of parallelizable loops referencing the 3-D arrays w and x access all elements varying along the
third dimension of the array together in a single assignment statement. Hence the third dimension of each
of these arrays is sequentialized. There are numerous parallelizable loops that establish constraints on all
of the 2-D arrays listed in the table to have identical distributions. Moreover, the two dimensions of these
arrays are aligned with the first two dimensions of all listed 3-D arrays, as dictated by the reference patterns
in several other loops. Some other interesting issues are illustrated by the following program segments.
do
do
do
do
38 continue
These loops impose constraints on the (first) two dimensions of all of these arrays to have contiguous rather
than cyclic distributions, so that the communications involving p values occur only "across boundaries"
of regions of the arrays allocated to various processors. Let N 1 and N 2 denote the number of processors
on which the first and second dimensions are distributed. The first part of the program segment specifies
a constraint to sequentialize p 1 , while the second one gives a constraint to sequentialize p 2 . The quality
measures for these constraints give terms for communication costs that vanish when N 1 and N 2 respectively
are set to one. In order to choose the actual values of N_1 and N_2 (given that N_1 N_2 = N), the compiler
has to evaluate the expression for execution time for different values of N 1 and N 2 . This requires it to
know the values of array bounds, specified in the above program segment by variables il and jl. Since these
values actually depend on user input, the compiler would assume the real array bounds to be the same as
those given in the array declarations. Based on our simplified analysis, we expect the compiler to finally
come up with Given more accurate information or under different assumptions about the
array bounds, the values chosen for N 1
and N 2
may be different. The distributions for other 1-D arrays
indicated in the table get determined appropriately - in some cases, based on considerations of alignment
of array dimensions, and in others, due to contiguous distribution on processors being the default mode of
distribution.
Experimental Results on TRED2 Program Implementation We now show results on the performance
of different versions of the parallel tred2 program implemented on the iPSC/2 using different data
partitioning strategies. The data distribution scheme selected by our strategy, as shown in Table 2 is -
distribute arrays A and Z by rows in a cyclic fashion, and distribute arrays D and E also in a cyclic manner, on
all the N processors.
Starting from the sequential program, we wrote the target host and node programs for the iPSC/2 by
hand, using the scheme suggested for a parallelizing compiler in [4] and [22], and hand-optimized the code.
We first implemented the version that uses the data distribution scheme suggested by our strategy, i.e.,
row cyclic. An alternate scheme that also looks reasonable by looking at various constraints is one which
distributes the arrays A and Z by columns instead of rows. To get an idea of the gains made in performance
by sequentializing a class of dimensions, i.e., by not distributing A and Z in a blocked (grid-like) manner,
and also gains made by choosing a cyclic rather than contiguous distribution for all arrays, we implemented
two other versions of the program. These versions correspond to the "bad" choices on data distribution
that a user might make if he is not careful enough. The programs were run for two different data sizes
corresponding to the values 256 and 512 for n.
The plots of performance of various versions of the program are shown in Figure 4. The sequential time
for the program is not shown for the case n = 512, since the program could not be run on a single node
due to memory limitations. The data partitioning scheme suggested by our strategy performs much better
than other schemes for that data size, as shown in Figure 4 (b). For the smaller data size (Figure 4 (a)),
the scheme using column distribution of arrays works slightly better when fewer processors are being used.
Our approach does identify a number of constraints that favor the column distribution scheme; they just
get outweighed by the constraints that favor row-wise distribution of arrays. Regarding other issues, our
strategy clearly advocates the use of cyclic distribution rather than contiguous, and also the sequentialization
of a class of dimensions, as suggested by numerous constraints to sequentialize various array dimensions.
The fact that both these observations are indeed crucial can be seen from the poor performance of the
program corresponding to contiguous (row-wise, for A and Z) distribution of all arrays, and also of the one
corresponding to blocked (grid-like) distribution of arrays A and Z. These results show for this program
that our approach is able to take the right decisions regarding certain key parameters of data distribution,
and does suggest a data partitioning scheme that leads to good performance.
7 Conclusions
We have presented a new approach, the constraint-based approach, to the problem of determining suitable
data partitions for a program. Our approach is quite general, and can be applied to a large class of programs
having references that can be analyzed at compile time. We have demonstrated the effectiveness of our
approach for real-life scientific application programs. We feel that our major contributions to the problem
of automatic data partitioning are:
• Analysis of the entire program: We look at data distribution from the perspective of performance of
the entire program, not just that of some individual program segments. The notion of constraints
makes it easier to capture the requirements imposed by different parts of the program on the overall
data distribution. Since constraints associated with a loop specify only the basic minimal requirements
on data distribution, we are often able to combine constraints affecting different parameters relating
to the distribution of the same array. Our studies on numeric programs confirm that situations where
such a combining is possible arise frequently in real programs.
• Balance between parallelization and communication considerations: We take into account both communication
costs and parallelization considerations so that the overall execution time is reduced.
• Variety of distribution functions and relationships between distributions: Our formulation of the distribution
functions allows for a rich variety of possible distribution schemes for each array. The idea
of relationship between array distributions allows the constraints formulated on one array to influence
the distribution of other arrays in a desirable manner.
Our approach to data partitioning has its limitations too. There is no guarantee about the optimality
of results obtained by following our strategy (the given problem is NP-hard). The procedure for compile-time
determination of quality measures of constraints is based on a number of simplifying assumptions.
For instance, we assume that all the loop bounds and the probabilities of executing various branches of a
conditional statement are known to the compiler. For now, we expect the user to supply this information
interactively or in the form of assertions. In the future, we plan to use profiling to supply the compiler with
information regarding how frequently various basic blocks of the code are executed.
As mentioned earlier, we are in the process of implementing our approach for the Intel iPSC/2 hypercube
using the Parafrase-2 restructurer as the underlying system. We are also exploring a number of possible
extensions to our approach. An important issue being looked at is data reorganization: for some programs
it might be desirable to partition the data one way for a particular program segment, and then repartition
it before moving on to the next program segment. We also plan to look at the problem of interprocedural
analysis, so that the formulation of constraints may be done across procedure calls. Finally, we are examining
how better estimates could be obtained for quality measures of various constraints in the presence of compiler
optimizations like overlap of communication and computation, and elimination of redundant messages via
liveness analysis of array variables [9].
The importance of the problem of data partitioning is bound to continue growing as more and more
machines with larger number of processors keep getting built. There are a number of issues that need to
be resolved through further research before a truly automated, high quality system can be built for data
partitioning on multicomputers. However, we believe that the ideas presented in this paper do lay down an
effective framework for solving this problem.
Acknowledgements
We wish to express our sincere thanks to Prof. Constantine Polychronopoulos for giving us access to the
source code of the Parafrase-2 system, and for allowing us to build our system on top of it. We also wish to
thank the referees for their valuable suggestions.
References
An interactive environment for data partitioning and distribution.
A static performance estimator to guide data partitioning decisions.
Interprocedural constant propagation.
Compiling programs for distributed-memory multiprocessors
Compiling parallel programs by optimizing performance.
The Perfect Club.
Automatic data partitioning on distributed memory multiprocessors.
Compiler support for machine-independent parallel programming in Fortran D
A message passing coprocessor for distributed memory multicomputers.
Compiler techniques for data partitioning of sequentially iterated parallel loops.
Programming for parallelism.
Compiler transformations for non-shared memory machines
Generating explicit communication from shared-memory program references
Index domain alignment: Minimizing cost of cross-referencing between distributed arrays
Memory Storage Patterns in Parallel Processing.
A methodology for parallelizing programs for multicomputers and complex memory multiprocessors.
Process decomposition through locality of reference.
The DINO parallel programming language.
An approach to compiling single-point iterative programs for distributed memory computers.
SUPERB: A tool for semi-automatic MIMD/SIMD parallelization
--TR
Programming for parallelism
Memory storage patterns in parallel processing
Process decomposition through locality of reference
A methodology for parallelizing programs for multicomputers and complex memory multiprocessors
A static performance estimator to guide data partitioning decisions
A message passing coprocessor for distributed memory multicomputers
Generating explicit communication from shared-memory program references
Compiler techniques for data partitioning of sequentially iterated parallel loops
--CTR
Kuei-Ping Shih , Jang-Ping Sheu , Chua-Huang Huang, Statement-Level Communication-Free Partitioning Techniques for Parallelizing Compilers, The Journal of Supercomputing, v.15 n.3, p.243-269, Mar.1.2000
Rohit Chandra , Ding-Kai Chen , Robert Cox , Dror E. Maydan , Nenad Nedeljkovic , Jennifer M. Anderson, Data distribution support on distributed shared memory multiprocessors, ACM SIGPLAN Notices, v.32 n.5, p.334-345, May 1997
Tom Bennet, Distributed message routing and run-time support for message-passing parallel programs derived from ordinary programs, Proceedings of the 1994 ACM symposium on Applied computing, p.510-514, March 06-08, 1994, Phoenix, Arizona, United States
Manish Gupta , Edith Schonberg, Static analysis to reduce synchronization costs in data-parallel programs, Proceedings of the 23rd ACM SIGPLAN-SIGACT symposium on Principles of programming languages, p.322-332, January 21-24, 1996, St. Petersburg Beach, Florida, United States
A. Zaafrani , M. R. Ito, Partitioning the global space for distributed memory systems, Proceedings of the 1993 ACM/IEEE conference on Supercomputing, p.327-336, December 1993, Portland, Oregon, United States
A. Zaafrani , Mabo Robert Ito, Efficient Execution of Doacross Loops on Distributed Memory Systems, Proceedings of the IFIP WG10.3. Working Conference on Architectures and Compilation Techniques for Fine and Medium Grain Parallelism, p.27-38, January 20-22, 1993
T. S. Chen , J. P. Sheu, Communication-Free Data Allocation Techniques for Parallelizing Compilers on Multicomputers, IEEE Transactions on Parallel and Distributed Systems, v.5 n.9, p.924-938, September 1994
Ernesto Su , Daniel J. Palermo , Prithviraj Banerjee, Processor Tagged Descriptors: A Data Structure for Compiling for Distributed-Memory Multicomputers, Proceedings of the IFIP WG10.3 Working Conference on Parallel Architectures and Compilation Techniques, p.123-132, August 24-26, 1994
Ram Subramanian , Santosh Pande, A framework for performance-based program partitioning, Progress in computer research, Nova Science Publishers, Inc., Commack, NY, 2001
Ram Subramanian , Santosh Pande, A framework for performance-based program partitioning, Progress in computer research, Nova Science Publishers, Inc., Commack, NY, 2001
Manish Gupta , Prithviraj Banerjee, PARADIGM: a compiler for automatic data distribution on multicomputers, Proceedings of the 7th international conference on Supercomputing, p.87-96, July 19-23, 1993, Tokyo, Japan
Manish Gupta , Prithviraj Banerjee, A methodology for high-level synthesis of communication on multicomputers, Proceedings of the 6th international conference on Supercomputing, p.357-367, July 19-24, 1992, Washington, D. C., United States
Tatsuya Shindo , Hidetoshi Iwashita , Shaun Kaneshiro , Tsunehisa Doi , Junichi Hagiwara, Twisted data layout, Proceedings of the 8th international conference on Supercomputing, p.374-381, July 11-15, 1994, Manchester, England
Chih-Zong Lin , Chien-Chao Tseng , Yi-Lin Chen , Tso-Wei Kuo, A systematic approach to synthesize data alignment directives for distributed memory machines, Nordic Journal of Computing, v.3 n.2, p.89-110, Summer 1996
Skewed Data Partition and Alignment Techniques for Compiling Programs on Distributed Memory Multicomputers, The Journal of Supercomputing, v.21 n.2, p.191-211, February 2002
data parallel programming on NUMA multiprocessors, USENIX Systems on USENIX Experiences with Distributed and Multiprocessor Systems, p.13-13, September 22-23, 1993, San Diego, California
A. Zaafrani , M. R. Ito, Expressing cross-loop dependencies through hyperplane data dependence analysis, Proceedings of the 1994 ACM/IEEE conference on Supercomputing, November 14-18, 1994, Washington, D.C.
M. Kandemir, 2D data locality: definition, abstraction, and application, Proceedings of the 2005 IEEE/ACM International conference on Computer-aided design, p.275-278, November 06-10, 2005, San Jose, CA
A. Zaafrani , M. R. Ito, Expressing cross-loop dependencies through hyperplane data dependence analysis, Proceedings of the 1994 conference on Supercomputing, p.508-517, December 1994, Washington, D.C., United States
Paul Feautrier, Toward automatic partitioning of arrays on distributed memory computers, Proceedings of the 7th international conference on Supercomputing, p.175-184, July 19-23, 1993, Tokyo, Japan
M. Kandemir , J. Ramanujam , A. Choudhary , P. Banerjee, A Layout-Conscious Iteration Space Transformation Technique, IEEE Transactions on Computers, v.50 n.12, p.1321-1336, December 2001
Niclas Andersson , Peter Fritzson, Generating parallel code from object oriented mathematical models, ACM SIGPLAN Notices, v.30 n.8, p.48-57, Aug. 1995
Gwan-Hwan Hwang , Cheng-Wei Chen , Jenq Kuen Lee , Roy Dz-Ching Ju, Segmented Alignment: An Enhanced Model to Align Data Parallel Programs of HPF, The Journal of Supercomputing, v.25 n.1, p.17-41, May
Anant Agarwal , David A. Kranz , Venkat Natarajan, Automatic Partitioning of Parallel Loops and Data Arrays for Distributed Shared-Memory Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.6 n.9, p.943-962, September 1995
Dean Engelhardt , Andrew Wendelborn, A partitioning-independent paradigm for nested data parallelism, Proceedings of the IFIP WG10.3 working conference on Parallel architectures and compilation techniques, p.224-233, June 27-29, 1995, Limassol, Cyprus
Byoungro So , Mary W. Hall , Heidi E. Ziegler, Custom Data Layout for Memory Parallelism, Proceedings of the international symposium on Code generation and optimization: feedback-directed and runtime optimization, p.291, March 20-24, 2004, Palo Alto, California
Chau-Wen Tseng , Jennifer M. Anderson , Saman P. Amarasinghe , Monica S. Lam, Unified compilation techniques for shared and distributed address space machines, Proceedings of the 9th international conference on Supercomputing, p.67-76, July 03-07, 1995, Barcelona, Spain
Edouard Bugnion , Jennifer M. Anderson , Todd C. Mowry , Mendel Rosenblum , Monica S. Lam, Compiler-directed page coloring for multiprocessors, ACM SIGPLAN Notices, v.31 n.9, p.244-255, Sept. 1996
Wai-Mee Ching , Alex Katz, An experimental APL compiler for a distributed memory parallel machine, Proceedings of the 1994 conference on Supercomputing, p.59-68, December 1994, Washington, D.C., United States
Wai Mee Ching , Alex Katz, An experimental APL compiler for a distributed memory parallel machine, Proceedings of the 1994 ACM/IEEE conference on Supercomputing, November 14-18, 1994, Washington, D.C.
PeiZong Lee, Efficient Algorithms for Data Distribution on Distributed Memory Parallel Computers, IEEE Transactions on Parallel and Distributed Systems, v.8 n.8, p.825-839, August 1997
Jennifer M. Anderson , Monica S. Lam, Global optimizations for parallelism and locality on scalable parallel machines, ACM SIGPLAN Notices, v.28 n.6, p.112-125, June 1993
Vikram Adve , Alan Carle , Elana Granston , Seema Hiranandani , Ken Kennedy , Charles Koelbel , Ulrich Kremer , John Mellor-Crummey , Scott Warren , Chau-Wen Tseng, Requirements for Data-Parallel Programming Environments, IEEE Parallel & Distributed Technology: Systems & Technology, v.2 n.3, p.48-58, September 1994
David A. Garza-Salazar , Wim Böhm, Reducing communication by honoring multiple alignments, Proceedings of the 9th international conference on Supercomputing, p.87-96, July 03-07, 1995, Barcelona, Spain
Mahmut Kandemir , Alok Choudhary , Nagaraj Shenoy , Prithviraj Banerjee , J. Ramanujam, A Linear Algebra Framework for Automatic Determination of Optimal Data Layouts, IEEE Transactions on Parallel and Distributed Systems, v.10 n.2, p.115-135, February 1999
Chau-Wen Tseng, Compiler optimizations for eliminating barrier synchronization, ACM SIGPLAN Notices, v.30 n.8, p.144-155, Aug. 1995
Mario Nakazawa , David K. Lowenthal , Wendou Zhou, The Execution Model for Heterogeneous Clusters, Proceedings of the 2005 ACM/IEEE conference on Supercomputing, p.7, November 12-18, 2005
John Plevyak , Vijay Karamcheti , Xingbin Zhang , Andrew A. Chien, A hybrid execution model for fine-grained languages on distributed memory multicomputers, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.41-es, December 04-08, 1995, San Diego, California, United States
Mahmut Kandemir , Alok Choudhary , J. Ramanujam , Meenakshi A. Kandaswamy, A Unified Framework for Optimizing Locality, Parallelism, and Communication in Out-of-Core Computations, IEEE Transactions on Parallel and Distributed Systems, v.11 n.7, p.648-668, July 2000
Mahmut Taylan Kandemir, A compiler technique for improving whole-program locality, ACM SIGPLAN Notices, v.36 n.3, p.179-192, March 2001
Jennifer M. Anderson , Saman P. Amarasinghe , Monica S. Lam, Data and computation transformations for multiprocessors, ACM SIGPLAN Notices, v.30 n.8, p.166-178, Aug. 1995
Micha Cierniak , Wei Li, Unifying data and control transformations for distributed shared-memory machines, ACM SIGPLAN Notices, v.30 n.6, p.205-217, June 1995
Ismail Kadayif , Mahmut Kandemir, Quasidynamic Layout Optimizations for Improving Data Locality, IEEE Transactions on Parallel and Distributed Systems, v.15 n.11, p.996-1011, November 2004
Mahmut Taylan Kandemir, Improving whole-program locality using intra-procedural and inter-procedural transformations, Journal of Parallel and Distributed Computing, v.65 n.5, p.564-582, May 2005
Pedro C. Diniz , Martin C. Rinard, Dynamic feedback: an effective technique for adaptive computing, ACM SIGPLAN Notices, v.32 n.5, p.71-84, May 1997
Peizong Lee , Zvi Meir Kedem, Automatic data and computation decomposition on distributed memory parallel computers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.1, p.1-50, January 2002
Mahmut Kandemir , Alok Choudhary , Prithviraj Banerjee , J. Ramanujam , Nagaraj Shenoy, Minimizing Data and Synchronization Costs in One-Way Communication, IEEE Transactions on Parallel and Distributed Systems, v.11 n.12, p.1232-1251, December 2000
Mahmut Kandemir , Prithviraj Banerjee , Alok Choudhary , J. Ramanujam , Eduard Ayguadé, Static and Dynamic Locality Optimizations Using Integer Linear Programming, IEEE Transactions on Parallel and Distributed Systems, v.12 n.9, p.922-941, September 2001
Pascal Fradet , Julien Mallet, Compilation of a specialized functional language for massively parallel computers, Journal of Functional Programming, v.10 n.6, p.561-605, November 2000
M. Kandemir , P. Banerjee , A. Choudhary , J. Ramanujam , N. Shenoy, A global communication optimization technique based on data-flow analysis and linear algebra, ACM Transactions on Programming Languages and Systems (TOPLAS), v.21 n.6, p.1251-1297, Nov. 1999
Mahmut Kandemir, Compiler-Directed Collective-I/O, IEEE Transactions on Parallel and Distributed Systems, v.12 n.12, p.1318-1331, December 2001
Mahmut Kendemir , J. Ramanujam, Data Relation Vectors: A New Abstraction for Data Optimizations, IEEE Transactions on Computers, v.50 n.8, p.798-810, August 2001
Pedro C. Diniz , Martin C. Rinard, Eliminating synchronization overhead in automatically parallelized programs using dynamic feedback, ACM Transactions on Computer Systems (TOCS), v.17 n.2, p.89-132, May 1999
Akimasa Yoshida , Kenichi Koshizuka , Hironori Kasahara, Data-localization for Fortran macro-dataflow computation using partial static task assignment, Proceedings of the 10th international conference on Supercomputing, p.61-68, May 25-28, 1996, Philadelphia, Pennsylvania, United States
P. Banerjee , M. Peercy, Design and Evaluation of Hardware Strategies for Reconfiguring Hypercubes and Meshes Under Faults, IEEE Transactions on Computers, v.43 n.7, p.841-848, July 1994
Henri E. Bal , M. Frans Kaashoek, Object distribution in Orca using Compile-Time and Run-Time techniques, ACM SIGPLAN Notices, v.28 n.10, p.162-177, Oct. 1, 1993
Ken Kennedy , Ulrich Kremer, Automatic data layout for distributed-memory machines, ACM Transactions on Programming Languages and Systems (TOPLAS), v.20 n.4, p.869-916, July 1998 | constraints;data distribution;perfect benchmarks;linpack;parallelizing compilers;fortran programs;data structures;index terms automatic data partitioning;scientific application programs;multicomputers;eispack libraries;parallel programming;program compilers
629113 | Coterie Join Algorithm. | Given a set of nodes in a distributed system, a coterie is a collection of subsets of the set of nodes such that any two subsets have a nonempty intersection and are not properly contained in one another. A subset of nodes in a coterie is called a quorum. An algorithm, called the join algorithm, which takes nonempty coteries as input, and returns a new, larger coterie called a composite coterie is introduced. It is proved that a composite coterie is nondominated if and only if the input coteries are nondominated. Using the algorithm, dominated or nondominated coteries may be easily constructed for a large number of nodes. An efficient method for determining whether a given set of nodes contains a quorum of a composite coterie is presented. As an example, tree coteries are generalized using the join algorithm, and it is proved that tree coteries are nondominated. It is shown that the join algorithm may be used to generate read and write quorums which may be used by a replica control protocol. | Introduction
In computer systems, the problem of mutual exclusion is an important issue. In
a distributed environment, mutual exclusion protocols should gracefully tolerate
node and communication line failures. One such class of protocols may be constructed
based on coteries [6]. Given a set of nodes U in the system, a coterie
under U is a collection of subsets of U in which any two subsets have a nonempty
intersection. This property is called the intersection property. A subset of nodes
in a coterie is called a quorum [3, 7]. If a node acquires permission from all of the
nodes in any quorum, it may enter the critical section. Because of the intersection
property, at most one node may be in the critical section at any given time.
Garcia-Molina and Barbara classified coteries into two types: dominated and
nondominated [6]. Nondominated coteries are more resilient to network and site
failures than dominated coteries; that is, the availability and reliability of a distributed
system is better if nondominated coteries are used. They also defined a
function, called a coterie transformation, which may be used to derive new non-dominated
coteries from existing ones. Then, they presented a recursive algorithm
which uses the coterie transformation to partially enumerate nondominated coter-
ies for a given N, where N = |U|. The algorithm generates all of the nondominated
coteries which have corresponding vote assignments. However, the algorithm does
not enumerate all of the nondominated coteries for N > 5. The purpose of the
algorithm is to partially enumerate nondominated coteries. Because of the large
number of nondominated coteries, the algorithm is not well suited for deriving
nondominated coteries for large N .
In this paper, we present an algorithm, called the join algorithm, which takes
nonempty coteries, as input, and returns a new, larger coterie. We show that
the algorithm produces a nondominated coterie if and only if the input coteries
are nondominated. Thus, dominated or nondominated coteries may be easily
constructed. The algorithm may be used to construct a coterie for large N .
In practice, it is not necessary to actually compute and store all of the quorums
in advance. Instead, we present an efficient method for determining whether a
given set of nodes contains a quorum, called the quorum containment test.
The method only uses the input coteries and information about how they are
joined.
Then, we show that the binary tree protocol, introduced by Agrawal and El
Abbadi [1] to derive coteries, is a special case of the join algorithm. Agrawal and
El Abbadi suggested that any k-ary tree may be used, instead of a binary tree.
We show that, in fact, the algorithm may be applied to any tree in which each
nonleaf node has at least two children. Furthermore, the resulting coteries are
nondominated.
The mutual exclusion problem can be generalized to the replica control prob-
lem. We show how the join algorithm may be applied to the replica control prob-
lem. In distributed systems, there are many occasions in which multiple replicas
are maintained. For instance, in a distributed database system, several copies of
each data item may be maintained at different sites to improve the reliability and
availability of the data. A replica control protocol is used to ensure that different
copies of an object appear to the user as a single nonreplicated object; that is,
objects are one-copy equivalent [5].
Agrawal and El Abbadi [2], and Herlihy [8] formalized the problem of replica
control in terms of read and write quorums. Barbara and Garcia-Molina formally
defined data structures, called quorum agreements, which may be used in quorum-based
replica control protocols [4]. A quorum agreement is a pair of sets (Q, Q^{-1}),
where Q and Q^{-1} are sets of subsets of U such that every subset in Q intersects
with every subset in Q^{-1}. We may consider Q to be a set of write quorums and
Q^{-1} to be a set of read quorums. Equivalently, we may consider Q to be a set
of read quorums and Q^{-1} to be a set of write quorums. We show that the join
algorithm may be applied to quorum agreements. The quorum containment test
may be used to determine if a given set of replicas contains a read or write quorum.
The organization of this paper is as follows: Section 2 reviews some important
definitions and properties about coteries presented by Garcia-Molina and Barbara.
The join algorithm and its properties are presented in Section 3. Applications
of the algorithm are also presented. In particular, the binary tree protocol is
generalized by using the join algorithm. Section 4 reviews quorum agreements
and shows that the join algorithm may be used to generate quorum agreements.
A Review of Coteries
Let U denote the set of N nodes in the system. The term nodes
may refer to computers in a network or copies of a data object in a replicated
database. The following definitions, theorem, and examples are by Garcia-Molina
and Barbara [6]. A collection of sets C is a coterie under U iff
1. G ∈ C implies that G ≠ ∅ and G ⊆ U.
2. (Intersection property): G, H ∈ C implies that G ∩ H ≠ ∅.
3. (Minimality): G, H ∈ C implies that G ⊄ H.
The sets G ∈ C are called quorums. Note that not all nodes must appear in a
coterie; that is, {{a}} is a coterie under {a, b, c}. Let C and D be coteries under
U. Then, C dominates D iff
1. C ≠ D.
2. For each H ∈ D, there is a G ∈ C such that G ⊆ H.
A coterie D (under U) is dominated iff there is another coterie (under U) which
dominates D. If there is no such coterie, then D is nondominated. Note that
the empty coterie under U is nondominated iff U = ∅. The following
theorem makes it easier to determine if a coterie is dominated.
Theorem 2.1: Let C be a coterie under a nonempty set U . Then, C is dominated
iff there exists a set H ⊆ U such that:
1. G ∩ H ≠ ∅ for all G ∈ C.
2. G ⊈ H for all G ∈ C.
For example, let D = {{a, b}, {b, c}}. The set H = {a, c} satisfies the properties
in Theorem 2.1. Hence, D is dominated by C = {{a, b}, {b, c}, {c, a}}. The coterie
C is nondominated. Note that coterie C is clearly better than D because all sets
of nodes which can operate under D, can also operate under C. Also, if only
nodes in {a, c} are available, a quorum can be formed in C, but not in D. Note
that D is also dominated by the coterie {{b}}.
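These definitions can be checked mechanically for small node sets. The following Python sketch (an illustration, not part of the original paper) tests the coterie conditions and the Theorem 2.1 domination condition by brute force, using the three-node coteries C and D from the example above.

```python
from itertools import chain, combinations

def is_coterie(C, U):
    """Check nonempty quorums within U, pairwise intersection, and minimality."""
    quorums = [frozenset(g) for g in C]
    if any(not g or not g <= frozenset(U) for g in quorums):
        return False
    if any(not (g & h) for g in quorums for h in quorums):
        return False
    return not any(g < h for g in quorums for h in quorums)

def is_dominated(C, U):
    """Theorem 2.1: C is dominated iff some H under U intersects every quorum
    of C while containing no quorum of C."""
    quorums = [frozenset(g) for g in C]
    subsets = chain.from_iterable(combinations(sorted(U), k)
                                  for k in range(len(U) + 1))
    return any(all(g & h for g in quorums) and
               not any(g <= h for g in quorums)
               for h in map(frozenset, subsets))

U = {"a", "b", "c"}
D = [{"a", "b"}, {"b", "c"}]
C = [{"a", "b"}, {"b", "c"}, {"c", "a"}]
print(is_coterie(D, U), is_dominated(D, U))  # True True
print(is_coterie(C, U), is_dominated(C, U))  # True False
```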
3 Join Algorithm
The join algorithm provides a simple way of combining nonempty coteries to
construct new, larger coteries. In this section, the join algorithm and its properties
are introduced. Then, we present an efficient method for determining whether a
given set of nodes contains a quorum, without actually computing and storing all
of the quorums in advance. Finally, we apply the join algorithm to generate a
new class of coteries, called tree coteries. Tree coteries are a generalization of the
binary tree protocol introduced by Agrawal and El Abbadi [1].
3.1 The join algorithm
Let U1 be a nonempty set of nodes and x ∈ U1. Let U2 be a nonempty set of
nodes such that U1 ∩ U2 = ∅, and let U3 = (U1 − {x}) ∪ U2. Given a coterie C1 under
U1 and a coterie C2 under U2, a new coterie, C3 under U3, may be constructed by
replacing each occurrence of x in quorums of C1 by nodes in a quorum of C2. More
formally, let CU_i denote the set of all nonempty coteries under U_i for i = 1, 2, 3,
and define a function, T_x : CU_1 × CU_2 → CU_3, by
T_x(C1, C2) = { G1 : G1 ∈ C1 and x ∉ G1 } ∪ { (G1 − {x}) ∪ G2 : G1 ∈ C1, x ∈ G1, and G2 ∈ C2 }.
The function, T_x, is called a coterie join function. A coterie constructed by using
a coterie join function is called a composite coterie; that is, C3 = T_x(C1, C2)
is a composite coterie. All other nonempty coteries are called simple coteries.
For instance, simple coteries may be constructed by using weighted voting [7], the
grid protocol [2, 10], or some other method. The input coteries, C 1 and C 2 ,
may be either simple or composite. The join algorithm is to apply the coterie
join function, one or more times, to construct a composite coterie.
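A minimal Python sketch of the coterie join function T_x defined above: quorums of C1 that do not contain x are kept, and each quorum of C1 that contains x is expanded with each quorum of C2. The demonstration coteries are assumed values for illustration only, not the ones used in the example that follows.

```python
def coterie_join(x, C1, C2):
    """T_x(C1, C2): replace each occurrence of node x in a quorum of C1
    by the nodes of a quorum of C2."""
    Q1 = [frozenset(g) for g in C1]
    Q2 = [frozenset(g) for g in C2]
    kept = [g1 for g1 in Q1 if x not in g1]
    expanded = [(g1 - {x}) | g2 for g1 in Q1 if x in g1 for g2 in Q2]
    return kept + expanded

# Illustrative input coteries (assumed for demonstration only).
C1 = [{1, "x"}, {1, 2}, {2, "x"}]       # a coterie under {1, 2, x}
C2 = [{4, 5}, {5, 6}, {6, 4}]           # a coterie under {4, 5, 6}
for quorum in coterie_join("x", C1, C2):
    print(sorted(quorum, key=str))
```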
For example, let U 6g. Define the input
coteries,
is a coterie under U
constructed by replacing each occurrence of in quorums of C 1 by nodes in
quorums of C 2 .
Note that the above coteries, C are all nondominated. This is no
accident. In the following subsection, we will prove that combining two nonempty,
nondominated coteries results in a nondominated, composite coterie. We will also
prove some other properties that the join algorithm satisfies.
3.2 Properties of the algorithm
Let U1 be a nonempty set of nodes and let x ∈ U1. Let U2 be a nonempty set of
nodes such that U1 ∩ U2 = ∅, and let U3 = (U1 − {x}) ∪ U2. Let Ci be a nonempty
coterie under Ui for i = 1, 2. Let C3 = T_x(C1, C2).
Theorem 3.1: C 3 is a coterie under U 3 .
Proof: First, we will show that G 3 6= ; and G 3 ' U 3 for any quorum G 3 2 C 3 .
Let G 3 2 C 3 . There are two cases to consider:
1. Suppose G
2. Suppose G
is a coterie, G 2 6= ;, and it follows that G 3 6= ;. Also,
since
Next, we will show that the intersection property is satisfied. Let G
We will show that G 3 " H 3 6= ;. There are four cases to consider:
1. Suppose G
2. Suppose G
;. Thus, there exists y
x
3. Suppose G
This case follows directly
from Case 2 (above).
4. Suppose G
quorum
Therefore, the intersection property holds.
Finally, we will show that minimality is satisfied. Let G 3 ; H 3 2 C 3 . We will show
that G 3 6ae H 3 . There are four cases to consider:
1. Suppose G
2. Suppose G
but x 62 G 1 . Thus, there exists y 2 G 1 such that y 62 H 1 , and y 6= x. Since
3. Suppose G
under
;. So, there exists y 2 G 2 such that y 62 H 3 . Since G 2 ' G 3 , y is
also in G 3 . Thus, G 3 6ae H 3 because y 2 G 3 , but y 62 H 3 .
4. Suppose G
and some quorum H
Since C 1 is a coterie, G 1 6ae H 1 . Thus, there exists y 2 G 1 such that y 62 H 1
or G
Since C 2 is a coterie, G 2 6ae H 2 . Thus, there exists z 2 G 2 such that z 62 H 2
or G
By comparing G 1 with H 1 and G 2 with H 2 , there are three cases to consider:
(a) Suppose G
(b) Suppose there exists y 2 G 1 such that y 62 H 1 and G
it follows that y 2 G 3 . Also, y
follows that y 62 H 3 . Thus, G 3 6ae H 3 .
(c) Suppose there exists z 2 G 2 such that z 62 H 2 . Then, z 2 G 3 and
z 62 H 3 . Thus, G 3 6ae H 3 .
Therefore, minimality holds and C 3 is a coterie under U 3 . 2
Theorem 3.2: If C 1 and C 2 are both nondominated, then C 3 is nondominated.
Proof: Assume that C 3 is dominated. By Theorem 2.1, there must exist a set
We will consider the relation between H 3 and the quorums in C 2 . There are two
cases to consider: either H 3 has at least one node in common with each quorum
in C 2 or there is a quorum G 2 in C 2 such that G
In each case, we will find a quorum G 02 C 3 such that G 0' H 3 to obtain a
contradiction.
1. Suppose G
So, there must exist a quorum G 0
would satisfy both properties in Theorem 2.1, and C 2 would be dominated.
We start by showing that G
quorums
then On the other hand, if x 62 G 1 , then
nondominated, there must exist a quorum G 0
Finally, by using quorum G 0
show that
there exists a quorum G 02 C 3 such that G 0' H 3 to obtain a contradiction.
There are only two possible cases to consider: either x 2 G 0
1 or x 62 G 0
1 .
This is a contradiction.
and This is a contradiction.
2. Suppose there exists G
We start by showing that G
Assume there is a quorum G There are two
cases to consider: either x 2 G 1 or x 62 G 1 . In each case, we will find a
quorum G 3 2 C 3 such that G 3 " H to obtain a contradiction.
because This is a contradiction.
This is
a contradiction.
Thus,
Since C 1 is nondominated, there exists a quorum G 0
would satisfy both properties in Theorem 2.1, and C 1
would be dominated. Since x 62 H 1 , it follows that x 62 G 0. Let G 0= G 0Then, G 02 C 3 . Since G 0' H 1 and H 1 ' H 3 , it follows that G 0' H 3 . This
is a contradiction.
Therefore, C 3 is nondominated. 2
Theorem 3.3: If C 1 is dominated, then C 3 is dominated.
Proof: To show that C 3 is dominated, we will construct a set H 3 satisfying the
properties in Theorem 2.1, that is G
dominated, by Theorem 2.1, there exists a set H 1 ' U 1 such that
There are only two possible
cases to consider: either x
1. Suppose x We
will show that G 3 " H 3
Let G 3 2 C 3 . There are two cases to consider:
First, we will show that G 3 "H 3 6= ;. By the definition of H 1 , G 1 "H 1 6=
;. Since x 62 G 1 , it follows that G 1 "(H
Next, we will show that G 3 6' H 3 . By the definition of H 1 , G 1 6' H 1 .
So, there exists y 2 G 1 such that y 62 H 1 , and y 6= x. Also, y
because y
because y 2 G 3 , but y 62 H 3 .
and some quorum G 2 2 C 2 .
First, we will show that G 3 "H 3 6= ;. Since C 2 is a coterie, G 2 "H 2 6= ;.
Thus,
Next, we will show that G 3 6' H 3 . Since G 1 6' H 1 , there exists y
such that y
2. Suppose x 62 H 1 . Let H We will show that G 3 "H 3
for all G 3 2 C 3 .
Let G 3 2 C 3 . There are two cases to consider:
By the definition of H 1 ,
and G 3 6' H 3 .
and some quorum G 2 2 C 2 .
First, we will show that G
it follows that
Next, we will show that G 3 6' H 3 . Since x
follows that
Thus,
Therefore, by Theorem 2.1, C 3 is dominated. 2
Theorem 3.4: If C 2 is dominated and x ∈ G for some G ∈ C 1 , then C 3 is
dominated.
Proof: Since C 2 is dominated, by Theorem 2.1, there exists a set H 2 ' U 2 such
that be a quorum in C 1 such
that x 2 H 1 and let H We will show that H 3 satisfies the
properties in Theorem 2.1, that is G
Let G 3 2 C 3 . There are two cases to consider:
1. Suppose G
First, we will show that G 3 " H 3 6= ;. Since H 1 2 C 1 and C 1 is a coterie,
Next, we will show that G 3 6' H 3 . Since x
Since C 1 is a coterie, G 1 6ae H 1 . Thus, G 1 6' H 1 . So, there exists a y 2 G 1
such that y 62
follows that G 3 6' H 3 because y 2 G 3 , but y 62 H 3 .
2. Suppose G
and some quorum G 2 2 C 2 .
First, we will show that G
Next, we will show that G 3 6' H 3 . From the definition of H 2 , we have G 2 6'
. So, there exists y 2 G 2 such that y 62 H 2 . Since G
and H
Thus,
Therefore, by Theorem 2.1, C 3 is dominated. 2
3.3 Example
Consider the following four nondominated coteries:
A = { {1,2}, {2,3}, {3,1} } under UA = {1, 2, 3},
B = { {4,5}, {5,6}, {6,4} } under UB = {4, 5, 6},
C = { {7,8}, {8,9}, {9,7} } under UC = {7, 8, 9},
D = { {a,b}, {b,c}, {c,a} } under UD = {a, b, c}.
We may construct a new coterie by joining the above coteries as follows:
Let E = T_a(D, A). Then, E is a coterie under UE = {b, c, 1, 2, 3}.
Let F = T_b(E, B). Then, F is a coterie under UF = {c, 1, 2, 3, 4, 5, 6}.
F = { {1,2,4,5}, {1,2,5,6}, {1,2,6,4}, {2,3,4,5}, {2,3,5,6},
{2,3,6,4}, {3,1,4,5}, {3,1,5,6}, {3,1,6,4}, {4,5,c},
{5,6,c}, {6,4,c}, {c,1,2}, {c,2,3}, {c,3,1} }
Let G = T_c(F, C). Then, G is a coterie under UG = {1, 2, ..., 9}.
G = { {1,2,4,5}, {1,2,5,6}, {1,2,6,4}, {2,3,4,5}, {2,3,5,6},
{2,3,6,4}, {3,1,4,5}, {3,1,5,6}, {3,1,6,4}, {4,5,7,8},
{4,5,8,9}, {4,5,9,7}, {5,6,7,8}, {5,6,8,9}, {5,6,9,7},
{6,4,7,8}, {6,4,8,9}, {6,4,9,7}, {7,8,1,2}, {8,9,1,2},
{9,7,1,2}, {7,8,2,3}, {8,9,2,3}, {9,7,2,3}, {7,8,3,1},
{8,9,3,1}, {9,7,3,1} }
By Theorem 3.2, the composite coterie G is nondominated.
3.4 Quorum containment test
As Garcia-Molina and Barbara have shown, the number of quorums may be exponential
in N [6]. In practice, to determine if a given set contains a quorum
of a composite coterie, it is not necessary to actually compute and store all of
the quorums of the composite coterie in advance. Instead, we only need to store
the input coteries used to construct the composite coterie and information about
how the composite coterie was constructed. This yields an efficient method for
determining if a given set of nodes, say S, contains a quorum G of the composite
coterie C. The following function, QC, called the quorum containment test,
returns true if there exists a quorum G ∈ C such that G ⊆ S, and false otherwise.
function QC(S: set of nodes, C: coterie): boolean;
begin
  if composite(C; x, C1, C2, U2)
  then /* C is a composite coterie */
    if QC(S ∩ U2, C2)
    then return QC((S − U2) ∪ {x}, C1)
    else return QC(S − U2, C1)
  else /* C is a simple coterie */
    if G ⊆ S for some G ∈ C
    then return true
    else return false
end
We assume that composite(C; x, C1, C2, U2) is a function that returns true if
input parameter C is a composite coterie and false if C is a simple coterie. If C
is a composite coterie, as a side effect, the function also returns x, C1, C2, and U2
such that C = T_x(C1, C2), where C2 is a coterie under U2.
Consider the example given in Section 3.3. Since F = T_b(E, B), the function
call composite(F; x, C1, C2, U2) returns true and also returns b, E, B, and UB
as its side effect. Similarly, composite(A; x, C1, C2, U2) returns
false.
Since the construction of a composite coterie is determined statically, function
composite may be implemented by simple table indexing; therefore, it may be
performed in constant time. Let {C1, C2, ..., CM} denote the set of simple input
coteries used to construct the composite coterie C, by applying the coterie join
function M − 1 times. The time complexity of determining if a given set of
nodes, say S, contains a quorum, using the above function, is O(Mc) + O(Md),
where c is the maximum time required to determine if G ⊆ S for any quorum
G in any simple input coterie, and d is the time required to compute the set
difference and union, (S − U2) ∪ {x}. One possible implementation is to use bit
vectors to denote the sets and quorums. If the simple input coteries are coteries
under disjoint sets, it is not necessary to compute the set difference. Thus, the
time complexity becomes O(Mc).
Now, we present a simple example using the coteries given in Section 3.3.
Suppose we want to know if a given set of nodes S contains a quorum of G.
QC(S, G), where G = T_c(F, C):
(Note that QC(S, C) is true, because {9, 7} ⊆ S.)
QC(S', F), where S' = (S − UC) ∪ {c} and F = T_b(E, B):
(Note that QC(S', B) is false.)
QC(S'', E), where S'' = S' − UB and E = T_a(D, A):
(Note that QC(S'', A) is true, because {3, 1} ⊆ S''.)
QC(S''', D), where S''' = (S'' − UA) ∪ {a}, returns
true, because {c, a} ∈ D.
Thus, S contains a quorum of G.
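A runnable Python sketch of the quorum containment test is given below. It assumes a composite coterie is represented by the tuple ('composite', x, C1, C2, U2) recording how it was joined, and a simple coterie by ('simple', [quorums]); the coteries follow the Section 3.3 construction, and the concrete set S is a hypothetical choice consistent with the trace just given.

```python
def qc(S, C):
    """Return True iff the set of nodes S contains a quorum of coterie C."""
    S = frozenset(S)
    if C[0] == "composite":
        _, x, C1, C2, U2 = C
        if qc(S & U2, C2):
            # A quorum of C2 is available, so node x may be treated as available.
            return qc((S - U2) | {x}, C1)
        return qc(S - U2, C1)
    _, quorums = C
    return any(frozenset(g) <= S for g in quorums)

# Coteries of Section 3.3.
A = ("simple", [{1, 2}, {2, 3}, {3, 1}])
B = ("simple", [{4, 5}, {5, 6}, {6, 4}])
C_ = ("simple", [{7, 8}, {8, 9}, {9, 7}])
D = ("simple", [{"a", "b"}, {"b", "c"}, {"c", "a"}])
E = ("composite", "a", D, A, frozenset({1, 2, 3}))
F = ("composite", "b", E, B, frozenset({4, 5, 6}))
G = ("composite", "c", F, C_, frozenset({7, 8, 9}))

S = {1, 3, 5, 7, 9}   # hypothetical set of available nodes
print(qc(S, G))       # True: {9,7} in C, {3,1} in A, then {c,a} in D
```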
3.5 Tree coteries
The binary tree protocol was introduced by Agrawal and El Abbadi for use in
a distributed mutual exclusion algorithm [1]. The set of N nodes are logically
arranged in a complete binary tree. A path in the tree is a sequence of nodes
a1, a2, ..., ak such that ai+1 is a child of ai. A quorum is constructed
by grouping all nodes on a path from the root node to a leaf node. If a node on
the path is not available, paths that start at both children and terminate at the
leaves may be used instead. They suggested that any k-ary tree, with k ≥ 2, may
be used.
In fact, the algorithm may be applied to any tree in which each nonleaf node
has at least two children. The resulting coteries are called tree coteries. Note
that if a nonleaf node has only one child, then the resulting quorums do not satisfy
minimality.
[Figure 1. Tree: node 1 is the root with children 2 and 3; node 2 has children 4, 5, and 6; node 3 has children 7 and 8.]
Consider the tree shown in Figure 1. If all nodes are available, then any of
the following sets are quorums: {1,2,4}, {1,2,5}, {1,2,6}, {1,3,7}, and {1,3,8}. If
node 1 is unavailable, then paths from both children, nodes 2 and 3, may be used
instead. Thus, the sets {2,3,4,7}, {2,3,4,8}, {2,3,5,7}, {2,3,5,8}, {2,3,6,7}, and
{2,3,6,8} are quorums. If node 2 is unavailable, the set {1,4,5,6} is a quorum.
Likewise if node 3 is unavailable, the set {1,7,8} is a quorum. If both nodes 1
and 2 are unavailable, the sets {3,4,5,6,7} and {3,4,5,6,8} are quorums. Likewise,
if both nodes 1 and 3 are unavailable, the sets {2,4,7,8}, {2,5,7,8}, and {2,6,7,8}
are quorums. Finally, if nodes 1, 2, and 3 are unavailable, the set {4,5,6,7,8} is a
quorum. The set of all possible quorums is a tree coterie.
Tree coteries may be formally described within the framework of the join algo-
rithm. We start with a few definitions. Let U = {a1, a2, ..., an} be a set of n ≥ 3
nodes. We define a tree coterie of depth two over U by
C = { {a1, ai} : 2 ≤ i ≤ n } ∪ { {a2, a3, ..., an} }.
Node a 1 is viewed as the root node and the remaining nodes are viewed as leaf
nodes in the tree. Tree coteries are constructed by repeatedly joining tree coteries
of depth two together at one of the leaf nodes. Thus, any tree in which each
nonleaf node has at least two children may be constructed.
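A short Python sketch of this construction: a tree coterie of depth two consists of the root paired with each leaf, plus the quorum formed by all the leaves.

```python
def depth_two_tree_coterie(root, leaves):
    """Quorums: {root, leaf} for each leaf, plus the set of all leaves."""
    if len(leaves) < 2:
        raise ValueError("need n >= 3 nodes, i.e., at least two leaves")
    quorums = [frozenset({root, leaf}) for leaf in leaves]
    quorums.append(frozenset(leaves))
    return quorums

# The subtree of Figure 1 rooted at node 2, with leaves 4, 5, and 6.
for quorum in depth_two_tree_coterie(2, [4, 5, 6]):
    print(sorted(quorum))   # [2, 4], [2, 5], [2, 6], [4, 5, 6]
```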
Theorem 3.5: Tree coteries of depth two, over a set of at least 3 nodes, are
nondominated coteries.
Proof: Let U = {a1, a2, ..., an} be a set of n ≥ 3 nodes. Let
C = { {a1, ai} : 2 ≤ i ≤ n } ∪ { {a2, a3, ..., an} }.
First, we will show that C is a coterie under U .
Clearly, if G ∈ C, then G ≠ ∅ and G ⊆ U.
Next, we will show that the intersection property is satisfied. All quorums, except
for the quorum {a2, ..., an}, include node a1. The quorum {a2, ..., an} includes
all other nodes. Thus, the intersection property is satisfied.
Finally, we will show that minimality is satisfied. Clearly all quorums which
include node a 1 and one other node satisfy minimality. Also, any quorum which
includes node a 1 cannot be a subset of the quorum which does not include node
a 1 . So, the only possible violation of minimality that may occur is if the quorum
{a2, ..., an} is a subset of a quorum which includes node a1. Since n ≥ 3, the
quorum {a2, ..., an} has at least two different elements, namely a2 and a3, and
cannot be a subset of a quorum which includes node a 1 and only one other node.
Thus, minimality is satisfied.
Therefore, C is a coterie under U .
Next, we will show that C is nondominated. Assume that C is dominated. By
Theorem 2.1, there exists a set H ⊆ U such that G ∩ H ≠ ∅ and G ⊈ H for all
quorums G ∈ C.
There are two cases to consider: either a1 ∈ H or a1 ∉ H. In each case, we will
derive a contradiction.
1. Suppose a1 ∈ H. Since {a2, ..., an} ∩ H ≠ ∅, there exists ak ∈ U for
some k ≠ 1 such that ak ∈ H. Thus, {a1, ak} ⊆ H. This is a contradiction.
2. Suppose a1 ∉ H. For each i with 2 ≤ i ≤ n, {a1, ai} ∩ H ≠ ∅ implies that ai ∈ H. Thus, {a2, ..., an} ⊆ H.
This is a contradiction.
Therefore, C is a nondominated coterie under U . 2
Theorem 3.6: Tree coteries are nondominated coteries.
Proof: By applying the join algorithm to tree coteries of depth two, it is easy to
see that any given tree coterie may be obtained. Therefore, by Theorems 3.1, 3.2,
and 3.5, tree coteries are nondominated coteries. 2
For example, consider the following tree coteries of depth two:
C1 = { {1, a}, {1, b}, {a, b} } under {1, a, b},
C2 = { {2, 4}, {2, 5}, {2, 6}, {4, 5, 6} } under {2, 4, 5, 6}, and
C3 = { {3, 7}, {3, 8}, {7, 8} } under {3, 7, 8}.
Then, T_b(T_a(C1, C2), C3) is a tree coterie under {1, 2, ..., 8}.
This means that we simply append tree C 2 (shown in Figure 2b) to tree C 1 (shown
in Figure 2a) at node a, and append tree C 3 (shown in Figure 2b) to tree C 1 at
node b. This results in the tree shown in Figure 1.
[Figure 2a. C1: root node 1 with leaf nodes a and b.]
[Figure 2b. C2: root node 2 with leaf nodes 4, 5, and 6; C3: root node 3 with leaf nodes 7 and 8.]
4 Quorum Agreements
The mutual exclusion problem may be generalized to the replica control problem.
Agrawal and El Abbadi formalized the problem of replica control in terms of read
and write quorums [2]. Associated with each data object, (several) read and write
quorums are formed, each of which is a subset of all copies of the data object. A
read operation accesses all of the copies in a read quorum and the value of a copy
with the highest version number is returned. A write operation writes to all of
the copies in a write quorum and assigns each copy a version number that is one
more than the maximum number encountered in the write quorum. Let R and
W be sets of read and write quorums, respectively. In order to ensure one-copy
equivalence, the read and write quorums must satisfy two intersection properties:
1. Write-write intersection: G, H ∈ W implies that G ∩ H ≠ ∅.
2. Read-write intersection: G ∈ R and H ∈ W implies that G ∩ H ≠ ∅.
The write-write intersection property is necessary to ensure that each new version
number is larger than any previous version number.
The above protocol may be modified so that timestamps, generated by logical
clocks [9], are used instead of version numbers [8]. A read operation chooses the
copy with the largest timestamp, and a write operation assigns each copy a new
timestamp. In this protocol, only the read-write intersection property is required
to ensure one-copy equivalence. The write quorums do not have to intersect. If
different values of a data object are written to disjoint write quorums, future read
operations simply return the version with the largest timestamp.
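The two intersection properties are easy to validate mechanically. The sketch below is an illustrative brute-force check; the example read and write quorums (majority-style over four copies) are assumptions chosen for demonstration.

```python
def write_write_ok(W):
    """Every pair of write quorums must intersect."""
    quorums = [frozenset(g) for g in W]
    return all(g & h for g in quorums for h in quorums)

def read_write_ok(R, W):
    """Every read quorum must intersect every write quorum."""
    return all(frozenset(g) & frozenset(h) for g in R for h in W)

# Illustrative quorums over copies {1, 2, 3, 4}.
R = [{1, 2}, {2, 3}, {3, 4}, {4, 1}, {1, 3}, {2, 4}]
W = [{1, 2, 3}, {2, 3, 4}, {3, 4, 1}, {4, 1, 2}]
print(write_write_ok(W), read_write_ok(R, W))  # True True
```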
Barbara and Garcia-Molina defined data structures, called quorum agree-
ments, which may be used in replica control protocols. The coterie join algorithm
may be used to generate quorum agreements.
4.1 A review of quorum agreements
First, we will briefly review quorum agreements. The definitions in this subsection
are by Barbara and Garcia-Molina [4].
A collection of sets, Q, is a quorum set under U iff
1. G ∈ Q implies that G ≠ ∅ and G ⊆ U.
2. (Minimality): G, H ∈ Q implies that G ⊄ H.
Let Q be a quorum set under U. Then, a complementary quorum set, Q^c, is
another quorum set under U such that G ∈ Q and H ∈ Q^c implies that G ∩ H ≠ ∅. As
in previous literature, the sets G ∈ Q and H ∈ Q^c are informally called quorums, even
though they may not satisfy the definition given in Section 2.
There are many complementary quorum sets corresponding to a given quorum
set. The antiquorum set of Q, denoted by Q^{-1}, is the complementary quorum
set with the largest number of quorums of minimal size. Thus, we say that an
antiquorum set is maximal. That is, let
I = { H ⊆ U : G ∩ H ≠ ∅ for all G ∈ Q }, and
Q^{-1} = { H ∈ I : there is no H' ∈ I such that H' ⊂ H }.
The pair (Q, Q^{-1}) is called a quorum agreement under U. There are
only three possible cases:
1. Q and Q^{-1} are nondominated coteries, and Q = Q^{-1}.
2. Q is a dominated coterie, and Q^{-1} is not a coterie (or equivalently, Q^{-1} is a
dominated coterie, and Q is not a coterie).
3. neither Q nor Q^{-1} is a coterie.
The first two types of quorum agreements may be used in replica control protocols
that use version numbers, and all three types may be used in replica control
protocols that use timestamps.
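For small node sets, the antiquorum set Q^{-1} can be computed directly: collect every subset of U that intersects all quorums of Q and keep only the minimal ones. The sketch below is an illustrative brute-force enumeration, not the construction used in the paper; note that for the dominated coterie in the example the resulting antiquorum set is not a coterie, matching case 2 above.

```python
from itertools import combinations

def antiquorum_set(Q, U):
    """Q^{-1}: the minimal subsets of U that intersect every quorum of Q."""
    quorums = [frozenset(g) for g in Q]
    hitting = [frozenset(h)
               for k in range(1, len(U) + 1)
               for h in combinations(sorted(U), k)
               if all(g & frozenset(h) for g in quorums)]
    return [h for h in hitting if not any(other < h for other in hitting)]

Q = [{"a", "b"}, {"b", "c"}]                 # a dominated coterie
for h in antiquorum_set(Q, {"a", "b", "c"}):
    print(sorted(h))                         # ['b'] and ['a', 'c']
```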
4.2 Application of the join algorithm
The join algorithm may be applied to generate quorum sets, complementary quorum
sets, and antiquorum sets.
Theorem 4.1: Let U1 be a nonempty set of nodes and x ∈ U1. Let U2 be a
nonempty set of nodes such that U1 ∩ U2 = ∅, and let U3 = (U1 − {x}) ∪ U2. Let Q1
be a quorum set under U1 and Q1^c be a complementary quorum set under U1. Let
Q2 be a quorum set under U2 and Q2^c be a complementary quorum set under U2.
Let Q3 = T_x(Q1, Q2) and Q3^c = T_x(Q1^c, Q2^c). Then, Q3 is a quorum set under U3,
and Q3^c is a complementary quorum set under U3.
Proof: Note that the first property, G 3 2
follows directly from the proof given in Theorem 3.1.
Next, we will show that the intersection property is satisfied. Let G 3 2 Q 3 and
We will show that G 3 " H 3 6= ;. There are four cases to consider:
1. Suppose G
1 . Since
2. Suppose G
2 . There exists y
3. Suppose G
follows from Case 2 (above).
4. Suppose G
and H
Therefore, the intersection property holds.
Finally, minimality follows directly from the proof given in Theorem 3.1.
Therefore, Q 3 is a quorum set under U 3 , and Q cis a complementary quorum set
under U 3 . 2
Theorem 4.2: Let U1 be a nonempty set of nodes and let x ∈ U1. Let U2 be a
nonempty set of nodes such that U1 ∩ U2 = ∅, and let U3 = (U1 − {x}) ∪ U2. Let
QA1 = (Q1, Q1^{-1}) be a quorum agreement under U1 and QA2 = (Q2, Q2^{-1}) be
a quorum agreement under U2. Let Q3 = T_x(Q1, Q2) and Q3^{-1} = T_x(Q1^{-1}, Q2^{-1}).
Then, QA3 = (Q3, Q3^{-1}) is a quorum agreement under U3.
Proof: By Theorem 4.1, Q 3 is a quorum set under U 3 and Q \Gamma1is a complementary
quorum set under U 3 . We only need to show that Q
3 is maximal. Assume
that Q
3 is not maximal. Then, there is a set H 3 ' U 3 such that:
We will consider the relation between H 3 and quorums in Q 2 . There are two cases
to consider: either H 3 has at least one node in common with each quorum in Q 2
or there is a quorum G 2 in Q 2 such that G
In each case, we will find a quorum H 0
3 such that H 0
to obtain a
contradiction.
1. Suppose G
2 is an antiquorum set, there must exist a quorum H 0
2 such
that H 0' H 2 .
We start by showing that G
On the other hand, if x 62 G 1 , then G
for some G 3 2 Q 3 . Since G
Thus,
1 is an antiquorum set, there
exists H 0
1 such that H 0
Finally, by using H
2 , we will show that there exists
\Gamma1such that H 0' H 3 to obtain a contradiction. There are only two
possible cases to consider: either x
1 or x 62 H 0
1 .
3 , and
This
is a contradiction.
3 , and H 0' H 3 because
This is a contradiction.
2. Suppose that there exists G 2 2 Q 2 such that G 2 "H
We start by showing that G
Assume that there exists G There are two
cases to consider: either x 2 G 1 or x 62 G 1 . In each case, we will find
to obtain a contradiction.
because This is a contradiction.
This is
a contradiction.
Thus,
Since Q \Gamma1is an antiquorum set, there exists H 02 Q \Gamma1such that H 0' H 1 .
3 . Since
This is a contradiction.
Therefore,
3 is maximal, and QA 3 is a quorum agreement under U 3 . 2
Suppose that QA3 = (Q3, Q3^{-1}) is a quorum agreement constructed by using
the join algorithm to join quorum agreements QA1 = (Q1, Q1^{-1}) and
QA2 = (Q2, Q2^{-1}). The following conclusions may be made:
1. If both Q 1 and Q 2 are coteries, then Q 3 is a coterie.
2. If Q 1 and Q 2 are nondominated coteries, then Q 3 is a nondominated coterie.
Therefore, the join algorithm may be used to generate quorum agreements for both
types of replica control protocols: protocols based on timestamps and protocols
based on version numbers.
4.3 Example
In this subsection, we will give a simple example of how the join algorithm may
be used to generate a quorum agreement. Consider the two input quorum agree-
ments,
2gg.
If we use the join algorithm to construct a new quorum set, Q
then the corresponding antiquorum set is given by Q
1gg.
We obtain a new quorum agreement QA
3
The quorum containment test may be used for quorum sets, complementary
quorum sets, and antiquorum sets. Suppose we want to know if the set
contains a quorum of Q \Gamma1QC(S;Q \Gamma1
(Note that QC(S;Q \Gamma1
true, because f1; ag
1 .
Thus, S contains a quorum of Q3^{-1}.
5 Conclusion
In this paper, we introduced the join algorithm which takes nonempty coteries,
as input, and returns a new, larger coterie. The algorithm produces a nondominated
coterie if and only if the input coteries are nondominated. The quorum
containment test provides an efficient method for determining if a given set of
nodes contains a quorum of a composite coterie. The test does not require that
the quorums of a composite coterie be computed in advance.
Within the framework of the join algorithm, we introduced tree coteries. Tree
coteries are a generalization of the binary tree protocol, introduced by Agrawal
and El Abbadi. We proved that tree coteries are nondominated.
The join algorithm may be used to generate quorum sets, complementary
quorum sets, and antiquorum sets. These structures may be used in a wide range
of distributed applications.
--R
An efficient solution to the distributed mutual exclusion problem.
Exploiting logical structures in replicated databases.
The vulnerability of vote assignments.
Mutual exclusion in partitioned distributed systems.
Concurrency Control and Recovery in Database Systems.
How to assign votes in a distributed system.
Weighted voting for replicated data.
A quorum consensus replication method for abstract data types.
A √N algorithm for mutual exclusion in decentralized systems.
--TR
How to assign votes in a distributed system
A quorum-consensus replication method for abstract data types
The vulnerability of vote assignments
Concurrency control and recovery in database systems
Efficient solution to the distributed mutual exclusion problem
Exploiting logical structures in replicated databases
A √N algorithm for mutual exclusion in decentralized systems
Time, clocks, and the ordering of events in a distributed system
Weighted voting for replicated data
--CTR
Her-Kun Chang , Shyan-Ming Yuan, Performance Characterization of the Tree Quorum Algorithm, IEEE Transactions on Parallel and Distributed Systems, v.6 n.6, p.658-662, June 1995
Yiwei Chiao , Masaaki Mizuno , Mitchell L. Neilsen, A self-stabilizing quorum-based protocol for maxima computing, Distributed Computing, v.15 n.1, p.49-55, January 2002
P. C. Saxena , Sangita Gupta , Jagmohan Rai, A delay optimal coterie on the k-dimensional folded Petersen graph, Journal of Parallel and Distributed Computing, v.63 n.11, p.1026-1035, November
Yu-Chen Kuo, Composite k-Arbiters, IEEE Transactions on Parallel and Distributed Systems, v.12 n.11, p.1134-1145, November 2001
Takashi Harada , Masafumi Yamashita, Improving the Availability of Mutual Exclusion Systems on Incomplete Networks, IEEE Transactions on Computers, v.48 n.7, p.744-747, July 1999
Yu-Chen Kuo , Shing-Tsaan Huang, Recognizing Nondominated Coteries and wr-Coteries by Availability, IEEE Transactions on Parallel and Distributed Systems, v.9 n.8, p.721-728, August 1998
Takashi Harada , Masafumi Yamashita, Transversal Merge Operation: A Nondominated Coterie Construction Method for Distributed Mutual Exclusion, IEEE Transactions on Parallel and Distributed Systems, v.16 n.2, p.183-192, February 2005
Takashi Harada , Masafumi Yamashita, Coterie Join Operation and Tree Structured k-Coteries, IEEE Transactions on Parallel and Distributed Systems, v.12 n.9, p.865-874, September 2001
Jan C. Bioch , Toshihide Ibaraki, Generating and Approximating Nondominated Coteries, IEEE Transactions on Parallel and Distributed Systems, v.6 n.9, p.905-914, September 1995
T. Ibaraki , T. Kameda, A Theory of Coteries: Mutual Exclusion in Distributed Systems, IEEE Transactions on Parallel and Distributed Systems, v.4 n.7, p.779-794, July 1993
Dahlia Malkhi , Michael Reiter , Avishai Wool, The load and availability of Byzantine quorum systems, Proceedings of the sixteenth annual ACM symposium on Principles of distributed computing, p.249-257, August 21-24, 1997, Santa Barbara, California, United States
P. C. Saxena , J. Rai, A survey of permission-based distributed mutual exclusion algorithms, Computer Standards & Interfaces, v.25 n.2, p.159-181, May | nonempty intersection;trees mathematics;index terms distributed system;tree coteries;composite coterie;join algorithm;distributed algorithms;read and write quorums;nonempty coteries;replica control protocol
629124 | Evaluation of NUMA Memory Management Through Modeling and Measurements. | Dynamic page placement policies for NUMA (nonuniform memory access time) shared-memory architectures are explored using two approaches that complement each other in important ways. The authors measure the performance of parallel programs running on the experimental DUnX operating system kernel for the BBN GP1000, which supports a highly parameterized dynamic page placement policy. They also develop and apply an analytic model of memory system performance of a local/remote NUMA architecture based on approximate mean-value analysis techniques. The model is validated against experimental data obtained with DUnX while running a synthetic workload. The results of this validation show that, in general, model predictions are quite good. Experiments investigating the effectiveness of dynamic page placement and, in particular, dynamic multiple-copy page placement, the cost of replication/coherency fault errors, and the cost of errors in deciding whether a page should move or be remotely referenced are described. | Introduction
NUMA (nonuniform memory access time) multiprocessor designs are of increasing importance
because they support shared memory on a large scale. For such systems, the placement and
movement of code and data are crucial to performance. This need to deal with data placement
issues has been called the "NUMA Problem". Presenting the programmer with an explicit NUMA
memory model results in a significant additional programming burden. The alternative considered
here is for the operating system (OS) to manage placement through the policies and mechanisms
of the virtual memory subsystem. In such a system, the task of the OS-level memory management
software is to decide when to reference memory remotely and when to migrate (move) or replicate
(copy) a page to a frame in the local memory of the processor generating the memory request.
OS-level NUMA memory management is an area of active research. Bolosky, Scott, and Fitzgerald
and Cox and Fowler [10] demonstrated specific solutions implemented on the IBM Ace and
BBN Butterfly Plus multiprocessors, respectively. Black et al. proposed provably competitive
algorithms for page migration and replication in [4, 5, 6]. Scheurich and Dubois proposed page
migration algorithms based on page pivoting in [26]. Bolosky et al. conducted a trace-based simulation
study of the effects of architectural features on the performance of several migration and
replication policies in [8], and Ramanathan and Ni conducted a study of critical factors in NUMA
memory management reported in [25]. We have also investigated OS-level NUMA memory management
through experimentation, both with the USMR programming library for the BBN GP1000
and with our DUnX kernel for the BBN GP1000 and TC2000 [14, 15, 17, 18, 19, 20, 21]. The
unique contribution of this current work in relation to previous research is the complementary use
of 1) measurements based on a flexible parameterized policy implementation that can explore a
wide range of policy behavior and 2) an experimentally validated analytic model.
Our focus has been on the class of NUMA architectures known as Local/Remote architectures,
as typified by the BBN GP1000 [3]. Local/Remote architectures are those in which the memory
modules of the machine are distributed such that there is one memory module local to each pro-
cessor, with the rest being remote from that processor (but local to some other processor). Each
processor/memory module pair is called a node. While a processor may directly reference any of the
memory modules, references to the local module are much faster than to remote modules, since the
request need not be sent through the interconnection network. For example, on the GP1000, a local
read takes approximately 0:6-s, whereas a remote reference takes approximately 7:5-s (ignoring
various contention factors).
For this research, implementation-based experimentation has several advantages over more formal
approaches. Most importantly, real applications can be used as the workload. The likelihood
of discovering the subtle issues that may be important to addressing the problem increases.
Complex interactions between the reference behavior of real programs and the features of policies
implementable in an operating system kernel are often difficult, if not impossible, to capture in an
abstract model. Thus, we have developed the DUnX (Duke University nX) operating system kernel
as an experimental platform for exploring the potential role of the operating system in solving the
NUMA Problem. On the other hand, performance measurement of implemented systems also has
limitations: architectural parameters cannot easily be varied, the workload is limited to the set of
available application programs, and the interpretation of results can be muddied by implementation
details.
In order to complement our experimental work, we have developed an analytical model of the
memory management behavior of a NUMA multiprocessor supporting dynamic multiple-copy page
placement (page placement with migration and replication) [16]. It is based on the approximate
mean-value analysis (MVA) approach as in [2, 9, 22, 27, 29, 28]. The goal of the model is to
evaluate the performance of some basic policies within the context of a given workload model.
There are necessarily restrictions on the workload model and policies considered. We do not claim
that our workload model can predict the exact performance of a specific real application program.
We do, however, conjecture that the parameters of our workload model capture some of the key
features of real programs and can provide insight into the general performance of different classes
of programs on a given architecture and operating system. Within the context of our workload
model it is straightforward to define an approximate ideal policy which always makes the proper
choice between remote reference, migration, or replication depending on the interprocess reference
granularity and the interprocess write granularity. This establishes a performance goal against
which to compare other policies. There is no analogous policy that can be implemented and
measured in an experimental system without knowledge of future references. The ideal policy can
be modified to introduce errors (i.e., poor policy choices), so that their effects can be compared to
the ideal performance for different workload assumptions.
Working with both an experimental system and an analytic model puts us in a unique position
to investigate a wide range of issues in NUMA memory management. First of all, this approach
allows us to validate our model using experimental data obtained with DUnX. Then we can use both
measurements of real applications running on DUnX and our model, with architectural parameters
set to values that are consistent with our GP1000 implementation or other architectures, to answer
a series of questions about dynamic page placment. These include the effectiveness of dynamic
single-copy page placement, the effectiveness of dynamic multiple-copy page placement, the cost of
using replication/coherency-fault pairs instead of page migrations, and the cost of incorrect policy
decisions.
In the next section, we describe the DUnX operating system kernel. In Section 3, we present
the modeling approach. We outline the system model of the architecture and operating system
in 3.1 and the workload model in 3.2. Validation is done in Section 4. The experiments and results
are described in Section 5. Finally, we summarize in the last section.
Experimental Framework
We developed the DUnX kernel for the BBN GP1000 NUMA shared memory multiprocessor as a
framework for implementing a wide assortment of dynamic page placement strategies. We initially
viewed the NUMA memory management policy design space in terms of distinct points; individual
policies that captured various combinations of the large number of factors that we suspected might
affect performance. Nearly fifty policies were tested using DUnX, including (at least approximations
of) most of the published policies, and the experimental results were reported in [18].
Our early experiences allowed us to prune and consolidate techniques. It appeared that our
further investigation of policy issues could best be formulated in the context of a single parameterized
policy and studied by varying the parameter settings and measuring the effect on performance.
This insight led to the development of version two of DUnX, which supports such a tunable policy.
This single policy seems to capture a fairly large region of the policy design space, including the
most successful policies identified in our earlier experiments. A study of the effects of policy tuning
with respect to differences in applications and architectural features appears in [20].
In this paper, we consider a static single-copy policy and our highly parameterized dynamic
multiple-copy policy. The static policy places each virtual page in a frame on the processor that
first references that page or in a frame on the processor explicitly specified by the application
programmer when the virtual memory is allocated. Other processors that wish to share access to
the page create mappings to the same physical copy (with the exception of code pages, which are
always replicated to each node using them), and the placement of a page does not change unless it
is selected for removal by the replacement policy and later paged in again.
The dynamic policy that we consider uses the same initial placement as the static policy, but
periodically reevaluates earlier decisions and allows multiple physical copies of a single virtual
page. This policy can move a page to a local frame upon demand. It supports both migration
and replication with the choice between the two operations based on reference history (specifically,
the recent history of modifications made to the page). A directory-based invalidation scheme is
used to ensure the coherence of replicated pages. DUnX supports a sequentially consistent memory
system. Recent work suggests that weaker consistency models can be exploited to obtain additional
performance improvements [1, 11, 12, 13]. The use of weak consistency could be incorporated
into the DUnX framework as well. The policy applies a freeze/defrost strategy (an idea adopted
from [10]) to control page bouncing, a condition in which a page is continually being migrated from
one node to another. With the freeze/defrost strategy, excessive page movement is controlled by
Param Role
freeze-window defines size of "recent invalidations" window for freezing decisions
recent-mod controls the replication vs. migration decision
scan-delay sets the rate of scanner daemons
sample-passes adjusts the number of reference collection samples
defrost-trigger remote count > local count "successes" needed to defrost
trigger-method controls the "invalidate all" vs. "invalidate remote" trigger decision
Table
1: Policy Parameter Summary
freezing the page in place and forcing remote accesses. The freezing criterion is based on the time
since the most recent invalidation of the page. Determining when to defrost a frozen page and
trigger reevaluation of its placement is based on both time (by how often such decisions are made)
and reference history (the recent remote/local usage). At the same time, choosing how to trigger
new placement decisions is based on reference data (recent modification history).
Six parameters control the behavior of the policy. One set of parameters controls the frequencies
at which certain events take place (defrost and triggering decision-points, reference data collection,
and aging of usage counts). Other parameters set thresholds on the interpretation of reference data
(e.g., defining "recent" history). These parameters are not intended to be orthogonal, but they
do provide a means to systematically study the policy design space. With appropriate parameter
settings, behavior of the policy can be adjusted to mimic a wide variety of freeze-based policies.
The parameters and their roles are summarized in Table 1.
The policy is comprised of two parts. The first defines the behavior of the policy when faced
with a page fault, and the second defines the behavior of the page scanner daemons used to trigger
the reevaluation of earlier page placement decisions.
When a fault occurs on a page that has not been replicated but is already resident in some
remote physical frame, the chosen course of action depends primarily on the recent reference history
for the page and the settings of the freeze-window and recent-mod policy parameters. The policy
must decide between installing a remote mapping and migrating or replicating the page.
The first step involves determining whether the page should be frozen by checking to see whether
the most recent invalidation of the page (due to a page migration or coherency fault) occurred within
the past freeze-window milliseconds. If the page is frozen (either imposed just now or sometime in
the past), the remote frame is used to service the fault. The freeze-window parameter essentially
limits the rate at which invalidations of a page can occur. When freeze-window is set to zero,
the policy behaves much like the caching policies used in the proposed software distributed shared
memory environments (e.g., [23, 24]). When freeze-window is set to infinity, a page may only
be invalidated once (migrated once, or replicated until the first coherency fault occurs) before it is
frozen. Values of freeze-window between these two extremes allow varying amounts of dynamic
page placement activity (migration and replication). In essence, the freeze-window parameter
controls the eagerness of the policy to migrate and replicate pages.
Once it is determined that a local copy of the page is desired, it is necessary to decide between
migration and replication. The recent-mod parameter controls this decision. The policy checks to
see if the page has been recently modified by comparing the modification history (maintained by
the page scanners through aging of the hardware modification bits) to the recent-mod parameter.
If the aged modification counter exceeds the recent-mod threshold or if a write reference triggered
the current fault, the page is migrated to a local free frame. Otherwise, a local free frame is used to
create a replica of the page. Write access to all copies is prohibited, allowing the fault handler to
ensure data coherency. When recent-mod ! 0, migration is always chosen in favor of replication
(i.e., replication is not allowed). When recent-mod = 1, replications are chosen over migrations
on any fault not triggered by a write reference. If we assume that the goal of the policy is to
replicate only pages being referenced in a read-only fashion, then we can characterize recent-mod
as the point at which the policy concludes that the last modification of the page is far enough in
the past that it can assume that the page is now being referenced in a read-only fashion.
The handling of a fault on a page that is already replicated requires that data coherence be
maintained. If a write memory reference triggered the fault, all but the copy eventually used to
satisfy the fault must be invalidated. If no local copy of the page exists, one of the existing replicas
is migrated to a local frame. If a read memory reference triggered the fault, then the policy either
uses an existing replica, or creates an additional local one if none already exists. There is no need
to check for freezing, since it is impossible for a page that should be frozen to be replicated.
Our policy uses a page scanner on each processor node to trigger the reevaluation of earlier
placement decisions. The page scanners run every scan-delay seconds. Each time a scanner runs,
it collects page reference and modification information for the frames on its processor. Separate
local and remote reference counts are maintained, so that the policy can tell whether only local
processes, only remote processes, or both local and remote processes are referencing each page.
The remaining parameters specify details of scanner operation that have minor effects and are not
varied in the experiments presented in this paper.
3 Analytical Framework
3.1 The System Model
The system model is an approximate mean-value analysis (MVA) similar to those reported elsewhere.
Figure 1 graphically depicts the modeled system, which is a Local/Remote memory architecture.
The system is comprised of N processor/memory nodes, connected
to each other through some interconnection network. Queuing delays are encountered whenever
a processor attempts to reference memory and whenever a message is sent through the interconnection
network.
[Figure 1: System Queuing Model. N processor/memory nodes connected through an interconnection network, with queues at the network interfaces and at the memory modules; the queue labels A through D are referenced in the text below.]
Table 2 summarizes model hardware and software input parameters. Since b is the number
of blocks in a page, the time to read or write an entire page is bt bm and the time to transfer a
page across the interconnection network is bt bx . The basic system model assumes a local memory
reference takes a uniform amount of time, modeled in the t l input parameter. To account for time
differences between read and write references in some systems (e.g. the BBN GP1000), we can derive
the t l input parameter using t lr and t lw , the local read and write reference times, respectively.
The memory management policy and the workload are both modeled by the software input
parameters. The model assumes that the mean time between memory references is - time units, and
that any given reference is to local memory with probability p l and to a remote memory module with
probability . There are a number of different types of faults. We concentrate primarily
on faults resulting in migration, replication, or coherency operations with the probabilities q m , q r ,
and q c , respectively. This discussion omits details of other types of faults that enter into the model.
In Subsection 3.2 we describe how these input parameters relate to application program reference
patterns and management policies.
The mean total time between virtual (user program) memory requests (R) issued by a processor
is the sum of the execution time between requests (- ), the mean time to complete the memory
request (R r ), and the weighted mean time for servicing encountered page faults (R f ) (equation 1).
The mean time to complete a virtual memory request (R r ) (equation 2) depends on T l (equation 3),
the mean time required to perform a local memory reference (including wait time) and T r (equa-
tion 4), the time required to make a remote request. The page fault handler also makes local and
remote memory references. The mean time (per virtual memory request) spent servicing faults,
R f , is calculated based on the probabilities and costs of each type of fault. The calculation of the
Symbol Meaning
Hardware Input Parameters
l short message memory reference time
t lr local read reference time
t lw local write reference time
t x short message network transfer time
t bm block memory reference time
t bx block network transfer time
b number of blocks in a page
t d mean disk transfer time
Software Input Parameters
- mean time between memory reference requests
l prob. that a memory reference is local
r prob. that a memory reference is remote
q r prob. of a page fault that results in a replication
prob. of a page fault that results in a migration
q c prob. of a page fault that is a coherency fault
Table
2: System Model Input Parameters
wait times w l , w r , and w n is discussed in [17].
Consider processor zero in Figure 1 making a reference to local memory, and then a reference to
a location in memory module one. In the first case, the only wait required is at the queue labeled
"B" in the figure. Thus, the service time is comprised of the single wait for local memory (w_l) and
the time to actually complete the reference (t_l). In the second case, the request must be sent out
over the network to memory module one. First, the request is delayed at queue "A" to wait for
network access for w_n time units, then it must travel through the network (t_x time units) and wait
at the remote memory module at queue "D" for w_r time units. After the wait at queue "D" and
the time to actually process the request (t_l), a return message must be sent back to the original
requesting processor, zero. This involves waiting at queue "C" (w_n) and then the time to actually
make the final transfer (t_x), thus yielding equation 4.
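Written out, a plausible reconstruction of equations (1)-(4) from the definitions and the walkthrough above is the following (we write τ for the mean compute time between references, and we weight the local and remote cases by p_l and p_r in equation (2); this is our reading of the description, not a verbatim copy of the original equations):

\begin{align}
R   &= \tau + R_r + R_f                              && (1) \\
R_r &= p_l\,T_l + p_r\,T_r                           && (2) \\
T_l &= w_l + t_l                                     && (3) \\
T_r &= w_n + t_x + w_r + t_l + w_n + t_x \;=\; t_l + w_r + 2\,(w_n + t_x) && (4)
\end{align}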
PR Private data pages and code pages
RO Read-only shared data
RM Read-mostly shared data
SS Shared-sequential pages
SP Shared-parallel pages
Table 3: Shared Data Page Classes
Symbol   Meaning
P_c      number of pages in class c
R_c      mean read references to a page of class c
W_c      mean write references to a page of class c
r_c      interprocess reference granularity
w_c      interprocess write granularity
M_c      mean number of processors sharing a page of class c
Table 4: Workload Model Input Parameters (c ∈ {PR, RO, RM, SS, SP})
3.2 The Workload Model
In this subsection, we develop a simple workload model for obtaining approximations to these input
parameters for different application/policy combinations. We assume that each virtual page in the
address space of a process can be placed into one of a small number of classes as shown in Table 3.
Similar classifications into page classes and use in MVA models have been developed independently
for studying cache coherency [2, 29]. PR pages contain program code and variables of which each
process needs its own copy. We assume these pages are replicated into each local memory before
execution. The RO class contains shared data pages which are never modified during the execution
of the program. The class of RM pages contains those shared data pages that are modified only
occasionally, but referenced in a read-only fashion far more often. The two remaining classes (SS
and SP ) contain shared data pages that are read-write shared, but differ in how active the sharing
is. Specifically, a read-write shared page is an SS class page if the number of references a process
makes to such a page between references by another is large enough to justify migrating the page,
and is an SP class page otherwise.
For each of the five page classes, we define six workload model input parameters, summarized
in Table 4. P_c, where c ∈ {PR, RO, RM, SS, SP}, is the number of pages in class c. R_c (W_c) is the
mean number of read (write) memory references each process makes to each page in class c. r_c (w_c)
is the interprocess reference (write) granularity; that is, the mean number of references a processor
makes to each page in class c between reads or writes (writes) to that page by another processor.
Policy Description
ideal chooses the right operation
static never migrate or replicate
cache migrate SS and SP pages, replicate RO and RM pages
MOR always migrate
RC ideal with RC migrations
error ideal with percentage of decisions wrong
Table 5: Model Policies
The sixth parameter is M c , which is the mean number of processors sharing each page of class c.
The r c parameter is what distinguishes SS from SP pages. If r c is small, migration is not likely to
be cost effective and we classify the page as class SP. If, on the other hand, r c is rather large, then
migration is likely to be worthwhile and we consider the page to be of class SS. In Section 5.5, we
discuss results that make this distinction clear.
On the first reference to a non-local page, any policy must choose between establishing a mapping
to a remote copy of the page (thus deciding to reference that page remotely) or migrating or
replicating that page to a local frame. In order to ensure strict data coherency, modification to
a page is allowed only if there exists only a single valid copy of that page. An invalidation-based
coherency protocol is used to enforce this requirement.
Table 5 lists the management policies considered in the model. An ideal policy makes the
correct choice among the options on each reference. In addition to the ideal policy, we consider
several other policies: static (never migrate or replicate), cache (always migrate SS and SP pages
and always replicate RO and RM pages), migrate-on-reference (MOR) (always choose migration over
remote references), ideal with migration errors (RC) (ideal with migration implemented through
replication/coherency fault pairs), and ideal with percent errors (error) (a certain percentage of all
ideal policy decisions are incorrect).
We derive the system model parameters for each of these policies based on the workload model
parameters.
For static page placement, we know that q_m = q_r = q_c = 0. We assume that each page is
placed in a frame on a randomly selected node that actually uses that page and, therefore, is local
to at least one of the processors using that page. The probability of a reference being to a remote
memory module (p_r) and to a local memory (p_l) can then be derived.
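One way to carry out this derivation under the stated random-placement assumption (our sketch, not necessarily the authors' exact formula): a reference to a page of class c, shared by M_c processors, is local with probability 1/M_c, while PR pages are always local because each process has its own copy. Weighting each class by its share f_c of all memory references gives

\[
p_l \;=\; f_{PR} \;+\; \sum_{c \in \{RO,\,RM,\,SS,\,SP\}} \frac{f_c}{M_c},
\qquad p_r \;=\; 1 - p_l .
\]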
The cache policy is the "always migrate or replicate" policy typically used by software distributed
shared memory systems (e.g., [24]). Remote memory is never referenced directly, so p_r = 0 and
p_l = 1. The probability of migrating a page, q_m, is simply the total number of SS and SP
migrations divided by the total number of application references, where the mean number of migrations
performed for each SS and SP page is the mean total number of references to those pages
divided by the mean number of references the processor is able to make before needing to re-migrate
the page back to a local frame. The probability of the cache policy replicating a page, q_r, is the
number of replications divided by the total number of references. References to a page that will be
replicated are broken into runs of size w_c, c ∈ {RO, RM}, with a replication required at the start
of each run (the page is not resident at the start of the computation, and a coherency fault ensures
that a replication will be required after every w_c references).
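The calculation just described can be sketched as follows (an illustrative helper of our own; per-class parameters are passed as dictionaries keyed by the Table 3 class names, and the exact accounting of which processors touch which pages is deliberately glossed over):

def cache_policy_fault_probs(P, R, W, r, w):
    """Approximate q_m and q_r for the cache policy from workload parameters.
    P, R, W, r, w: dicts keyed by page class ('PR', 'RO', 'RM', 'SS', 'SP')."""
    classes = ('PR', 'RO', 'RM', 'SS', 'SP')
    # total application references: reads plus writes to every page of every class
    total_refs = sum(P[c] * (R[c] + W[c]) for c in classes)
    # SS/SP: one migration per run of r_c references made before the page
    # must be re-migrated back to a local frame
    migrations = sum(P[c] * (R[c] + W[c]) / r[c] for c in ('SS', 'SP'))
    # RO/RM: references come in runs of w_c, with a replication at the start of
    # each run (for RO pages w_c can be taken as effectively infinite, i.e. one
    # replication per page, which this simple sketch ignores)
    replications = sum(P[c] * (R[c] + W[c]) / w[c] for c in ('RO', 'RM'))
    return migrations / total_refs, replications / total_refs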
For the migrate-on-reference (MOR) policy, q_r = 0, and the calculation of q_m includes the
RO and RM page references as well as the SS and SP page references.
Key to the approximate ideal policy is knowledge of the reference count k_mig necessary to
justify a page migration and the reference count k_rep necessary to justify a replication. The idea
behind the ideal policy is that it is better to migrate a page than to reference it remotely whenever
r_c > k_mig, and better to replicate a page than to reference it remotely whenever w_c > k_rep. We
assume that the policy is able to distinguish between page classes. Consequently, for the RO and
RM page classes, the policy uses the w_c and k_rep values to determine whether or not to replicate
a page, and for the SS and SP classes, the policy uses the r_c and k_mig values to determine whether or
not to migrate a page. The policy never chooses to migrate RO or RM pages, nor does it ever choose
to replicate SS or SP pages.
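The resulting per-class decision rule of the ideal policy can be summarized as in the following sketch (ours; k_mig and k_rep are the break-even reference counts defined above, and RO pages can be treated as having an infinite write granularity since they are never modified):

def ideal_policy_choice(page_class, r_c, w_c, k_mig, k_rep):
    """Return 'replicate', 'migrate', or 'remote' for a fault on a non-local page."""
    if page_class == 'PR':
        return 'replicate'      # private pages are kept in each local memory
    if page_class in ('RO', 'RM'):
        # replicate only if the copy will be read at least k_rep times
        # before being invalidated by a write elsewhere
        return 'replicate' if w_c > k_rep else 'remote'
    # SS and SP: migrate only if the page will be referenced locally
    # at least k_mig times before another processor needs it
    return 'migrate' if r_c > k_mig else 'remote'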
Migration can be effected through a replication/coherency fault pair (henceforth called an RC
migration). The RC policy uses the ideal policy equations, except that q_r and q_c are set to the
original q_m and q_m is set to zero.
Our error policy performs like the ideal policy, but with a certain fraction of the decisions being
incorrect. The equations for the error policy are similar to those for the ideal policy but with a
percent error factor for each class (e_c).
4 Validation of the Model Against the Implementation
In this section, we investigate the accuracy of our analytic predictions by comparing them to
experimental results obtained using the DUnX operating system kernel running on a BBN GP1000
multiprocessor. It is very difficult to derive accurate estimates for the input parameters to our
workload model for some arbitrary application program, so we wish to avoid this task. Yet in order
to validate the model, we need to examine a wide assortment of points in the possible workload
space. To deal with these conflicting goals, we develop a synthetic program (called synth) for which
detailed analysis is simplified. The synthetic program is parameterized so that it can, in effect,
exhibit a wide assortment of different reference patterns.
The five most important input parameters to synth are the number of pages in each of our five
workload model page classes (i.e., PR, RO, RM, SS, and SP). The synth program allocates the
input number of shared pages of each class, and enters a series of loops with references to those
pages. The loops are constructed so that the key workload model parameters (e.g., the interprocess
reference granularities) are obeyed. Thus, if the mean interprocess reference granularity for the SS
page class is r SS , then a synth process will make r SS references to such a page before any other process
references that page. Through a detailed hand-level analysis of the synth compiler-generated
assembler code, expressions for the numbers of memory references and instructions executed have
been developed. Thus, we can quickly determine the total number of memory references made to
each type of page given the correct values for the five synth input parameters.
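The guarantee on reference granularity can be pictured with a schematic loop like the one below (purely illustrative, not the actual synth source; the real program arranges the hand-off between processes without explicit barriers):

def touch(page):
    page[0] = (page[0] + 1) % 256     # stands for one memory reference to the page

def synth_phase(pages_by_class, granularity, barrier):
    """pages_by_class: {class_name: list of bytearray pages};
    granularity[c] plays the role of r_c (or w_c); barrier is shared by the
    synth processes (e.g., a threading.Barrier in this sketch)."""
    for c, pages in pages_by_class.items():
        for page in pages:
            for _ in range(granularity[c]):
                touch(page)           # r_c consecutive references by this process
            barrier.wait()            # hand the page over to the next sharing process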
In this section, we consider nine points in the synth parameter space. We refer to these points
as instances of synth, numbered 1 through 9. The instances are defined in Table 6.
Instance  P_PR  P_RO  P_RM  P_SS  P_SP
Table 6: Synth Program Instances
The first step in our analysis of a particular synth instance is to run it in a one-node cluster of our
GP1000 under a static page placement policy. Each experiment is conducted ten times, so that the
statistical significance of variation from analytic results can be checked. Using the reference count
data obtained through our source analysis expressions and the mean measured completion time and
page fault data, we compute τ and the page fault probabilities for this instance of synth. This is
possible in the one-node case because we know exactly what fraction of the memory references made
are local (exactly 1), the total number of references made (from our source analysis expressions), and
the numbers of page faults of different types encountered and their mean costs (from experimental
data).
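In code, the one-node calibration amounts to solving the single-processor version of the model for the mean compute time between references (a sketch with our own variable names; τ is written tau):

def calibrate_one_node(completion_time, total_refs, t_l, fault_counts, fault_costs):
    """completion_time: measured 1-node run time; total_refs: from the source
    analysis expressions; fault_counts/fault_costs: per fault type."""
    fault_time = sum(fault_counts[k] * fault_costs[k] for k in fault_counts)
    # with one node every reference is local (p_l = 1) and there is no queueing,
    # so completion_time ~ total_refs * (tau + t_l) + fault_time
    tau = (completion_time - fault_time) / total_refs - t_l
    fault_probs = {k: n / total_refs for k, n in fault_counts.items()}
    return tau, fault_probs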
In Figure 2, we plot the measured (from DUnX) and predicted (from our system model) completion
times (in seconds) for the nine instances running in a one-node cluster with static page
placement. For the experimental results, two points are plotted for each experiment. These points
bound a 99% confidence interval calculated using the Student-t distribution with a sample size of ten.
Figure 2: Comparison of 1-Node Static Results (completion time in seconds)
The results match quite well. This, however, is expected since we used the experimental data
to set the input parameters to the model. If the results didn't match, it would indicate a problem
in our methodology.
The next step in the analysis is to use the values obtained in the first step to make predictions
about performance in an n-node cluster. These data are then compared to experimental data
obtained by running synth under DUnX on our GP1000. For example, the 8-node results under a
static page placement policy are shown in Figure 3. Again, the two DUnX points for each instance
bound a 99% confidence interval established with ten sample points and the Student-t distribution.
Note that each of the ten sample points is itself the mean completion time of the eight processes
that comprise the computation.
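For completeness, the error bars can be reproduced from the ten measured completion times with a standard Student-t calculation (shown only to make the interval construction explicit; this is not code from the original study):

import math
from statistics import mean, stdev
from scipy.stats import t

def ci99(samples):
    """Two-sided 99% confidence interval for the mean of the measured times."""
    n = len(samples)                               # here n = 10
    m, s = mean(samples), stdev(samples)           # sample mean and std. deviation
    half = t.ppf(0.995, df=n - 1) * s / math.sqrt(n)
    return m - half, m + half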
Figure 3: Comparison of 8-Node Static Results (completion time in seconds)
Figure 3 shows that our analytic predictions are quite close to the experimental results, differing
from the mean experimental time (the mean is not plotted, but it lies halfway between the plotted
bounds) by less than 5% in all cases. These data clearly show that our memory and network
contention estimates are reasonably accurate, as are our workload model's p_l and p_r estimates.
The analytic predictions are slightly optimistic, however, probably due to some combination of the
following factors:
1. Clustering in time of references to pages of a particular class - It is likely that each of
the synth processes references pages of a certain class at roughly the same time. This, of
course, means that memory and network contention encountered making those references
may be higher than predicted, since the model assumes that these references are distributed
uniformly in time.
2. Clustering of page faults - Though only a few page faults occur, they all must occur at
the start of the application. Thus, contention for operating system data structures may be
higher, resulting in total page fault costs slightly greater than those predicted by the model.
3. Other factors - The experimental data are obtained from a real system supporting a real
user community. Certain operating systems functions, such as the process scheduler, may
play a role in slowing the application, as may unexpected interference (in terms of memory,
network, and OS data structure contention) from other users of the system. 1 It is simply
not possible for the system model to account for all of the factors that may affect the overall
performance of a real application.
Of course, despite quantitative errors of up to nearly 5%, the model predictions are qualitatively
accurate.
In Figure 4, we compare analytic predictions of our nine synth instances running under the
cache policy with experimental results obtained when running under (essentially) the same policy.
Figure 4: Comparison of 8-Node Cache Results (completion time in seconds)
The results for Instances 5, 7, 8, and 9 (those instances for which P_SP > 0) do not appear in
the figure. For each of these cases, the model predicts excessively high completion times (e.g., for
Instance 5, the predicted completion time runs to hours). We did not allow the experimental runs for these
instances to complete (for obvious reasons), but we did allow each to run long enough to conclude
that excessive amounts of dynamic page placement activity (migrations, replications, and coherency
faults) were certain to result in very long application run times.
1 Other users may affect memory and network contention despite the fact that our computation runs in its own
cluster of processor nodes, since the operating system's disk buffer cache is distributed across the memories of all the
nodes in the system.
The predictions for synth Instances 1 and 2 are well within 5% of the mean measured completion
times. The factors contributing to these quantitative differences are likely the same as those
proposed in our discussion of the 8-node static policy results.
For synth Instances 4 and 6, the analytic predictions differ from the mean experimental completion
time by a more significant amount (12.7% for Instance 4 and 21.4% for Instance 6). In the
Instance 4 case, the analytic prediction lies within the 99% confidence interval, but the analytic
prediction for Instance 6 is nearly 10% lower than even the lower bound on the 99% confidence
interval.
Only the lower bound of the 99% confidence interval for Instance 3 appears in Figure 4. The
upper bound on that interval is over 800 seconds, indicating a wide variance in the measured
completion times for that synth instance.
The following factors play a role in the noted differences for synth Instances 3, 4, and 6:
1. "Fuzzy" phase transitions - When deriving the input parameters for our workload model, we
make a simplifying assumption that phase transitions occur at distinct points in time. This
assumption maximizes the r SS and wRM parameter estimates (minimizing the number of
migrations and replications), since no overlapping of phases occurs. Unfortunately, when we
actually run the application under DUnX, some overlap may occur, counter to our assumption.
Since we overestimate r SS and wRM , the workload model predicts fewer migrations of SS
pages and fewer coherency faults and replications of RM pages. Better estimates of r SS
and wRM would likely improve the success of the model, but there is no clear-cut way to
determine exactly how much the phase transitions may overlap (barring the introduction of
barrier synchronization between phases).
2. A subtle DUnX race - As it turns out, a subtle race condition in DUnX is also partially
to blame. A processor may experience a page fault that results in a coherence operation or
a page migration, yet be unable to complete the memory reference that triggered the fault
before some other processor is able to replicate (disabling write access to the page) or migrate
that same page. This causes the first processor to fault on the same memory reference once
more, repeating the process. 2 As with item 1 above, it is difficult to imagine how
one would include the effects of such a race in the analytic model.
Note that both of these effects can introduce large amounts of variance in experimental measure-
ments, since both the amount of phase overlap and the number of iterations through the DUnX
coherency/write race are highly unpredictable.
To test these hypotheses, we introduced several barrier synchronization points to the synth
application so that the noted problems would be avoided. Results of these experiments are also
shown in Figure 4 (as before, two data points bound a 99% confidence interval). We see that
performance of the modified synth is much closer to that predicted by our model, despite the fact
that the model does not account for the additional memory references and contention associated
with the barrier synchronization points. Additionally, we note that the confidence interval size for
the modified synth instances is also greatly reduced (the two Instance 4 points lie on top of one
another). These data give strong support for our explanation of the noted differences in the model
predictions and the experimental measurements.
It is more difficult to test the accuracy of our analytic model of more realistic policies, since
there is no clear mapping from modeled policies to actual DUnX policies. In Figure 5, we compare
the performance of the analytic ideal policy to the best performance we were able to obtain with
the parameterized DUnX policy (note that the policy tuning differs for each of the nine instances).
As before, the experimental results are given by the lower and upper bounds of a 99% confidence
interval around the mean measured completion time. We see that the predicted ideal performance
is reasonably close to the best performance we were able to achieve under DUnX for most cases.
2 Note that it is not clear how one would "correct" this race condition in DUnX, nor is it clear that it is necessary
to do so. The problem occurs only when run under the cache policy, since the migration and replication control
mechanisms present in other policies prevent it from occurring. The appropriate action is to change policies when
faced with such behavior.
Figure 5: Comparison of 8-Node Ideal Policy Results with the Best Possible Results under DUnX
(completion time in seconds)
Reasons for the differences include those discussed in relation to the static policy results as well
as the first item in our list of factors affecting performance of the cache policy. Other factors also
come into play:
1. Lack of an ideal DUnX policy - We cannot implement an ideal policy since such a policy
requires knowledge of future reference patterns.
2. Page scanner overhead - The real DUnX policies must suffer overhead effects associated
with running the page scanner daemons. This overhead comes in the form of lost CPU cycles
used by the daemons as well as additional "quick" faults not accounted for by our workload
model.
As with our other results, even though our predictions may not be extremely accurate (though
within 10% in all cases is certainly not terrible), they are qualitatively accurate. This becomes
especially obvious when we compare the results of Figure 3 with those of Figure 5, for there it
is evident that the model "responds" correctly to differing reference patterns. For example, it
predicts that significant performance improvements can be made for synth Instances 2 and 6,
whereas it doesn't predict such drastic improvements for Instances 7 and 8. Also note the relative
performances for Instances 6, 7, and 8. The model successfully predicts which instances it can most
improve.
The Instance 5 results in Figure 5 merit special attention, since the DUnX results are faster
than the predicted ideal completion time. There is a relatively simple explanation for this, however.
Instance 5 is comprised entirely of SP pages. Under the ideal policy, SP pages are never replicated
and not migrated when r SP is less than k mig as in this case. Thus, the predicted performance of
the ideal policy for Instance 5 is virtually identical to that predicted for the static policy. The r SP
and w SP values are just means, however. In reality, migration and replication can improve the
performance of Instance 5, as is evidenced by the experimental data in Figure 5. This is the same
type of error that motivates the development of our error policy, though in the opposite sense (the
error results in pessimistic rather than optimistic model predictions).
To summarize the results of this section: we have found that while the differences between
analytic predictions and experimental results are usually small, there are cases where the difference
is substantial. These differences are due to several factors not accounted for in the analytic model.
Especially important are the factors related to program behavior, such as the clustering of memory
accesses and page faults, and the fuzziness of phase transitions. These factors appear in the
execution of a synthetic program designed specifically to conform to our workload model. These
factors are likely to play an even more important role in real applications. Consequently, though
we can use the model to make predictions about the behavior of the policies defined above in
the context of our workload model, predictions about the performance of specific real applications
running under real policies are of questionable value.
5 Experiments and Results
We take the approach of using both the analytic model and experiments on DUnX to answer specific
questions about dynamic page placement behavior and the effects different workload features have
on performance. The two methods complement each other. Each technique has its strengths and
weaknesses, but the limitations of one approach tend to be covered by the capabilities of the other.
Measurements of an implemented system running real applications can capture true program
behavior, complex interactions that may be hard to anticipate, and the impact of actual system
overhead, but only for the specific workload suite tested and the hardware/software implementation
available. Thus, while the experimental results can be considered accurate, they may not be easily
generalizable to other architectures, the possibility of better implementations, or a different set of
programs. On the other hand, the model clearly does not include all the subtle effects of real system
performance, but it does allow exploration of a wider range of hardware/software parameters once
some confidence in the model has been established. For example, in Section 5.4, we ask a question
about the effect of poor policy decisions when the cost of the resulting migrations differs from that
of our DUnX implementation.
The complexity of actual behavior that is missing from the model can also make experimental
data difficult to interpret, whereas the model explicitly formulates relationships among the apparent
contributing factors and can be used to test hypotheses to explain results. For example, in
Section 5.3, the question of whether replication is the "right" mechanism to serve as the basis for
dynamic placement is addressed by constructing workloads specially tailored to favor replication
(an emphasis on read-only shared data) or migration (emphasizing sequentially shared read-write
data) to see the impact of using the intuitively less appropriate mechanism.
Finally, it is useful to establish a performance goal, even though this implies policy decisions that
cannot actually be implemented. Thus, we use the modeled ideal policy to ask whether there
is significant potential for improved performance by pursuing sophisticated dynamic placement
strategies (Section 5.2).
5.1 Methodology
In applying the analytic model, we use simple workloads and vary one or two key parameter values
in order to study, in isolation, some particular aspect of memory reference behavior. The workload
settings for these experiments are given in Table 7. The use of the variable x in the table signifies that
the parameter is varied. Hardware parameters are set to the values measured on our GP1000 (see
Table 2). The software parameters are derived by the workload model.
Class  P_c  R_c      W_c     r_c      w_c      M_c
Workload #1 Characterization
PR     50   500,000  50,000  550,000  550,000  1
RO
Workload #2 Characterization
PR     50   500,000  50,000  550,000  550,000  1
SS
Workload #3 Characterization
SS
Workload #4 Characterization
Table 7: Workload Characterizations for the Simple Workloads
For the experimental measurements in the DUnX implementation, the architecture and workload
characteristics are fixed by the actual hardware and applications available. The basic costs for
page placement operations have been measured in the GP1000 DUnX implementation to be 4.5 ms
for migration, 4.6 ms for replication, and 2.1 ms for servicing a coherency fault. The experiments
focus on varying policy parameters to achieve a range of responses.
Program  Description
msort    merge sort of an integer array
gauss    simulates gaussian elimination with integer arithmetic
hh3d     simulates electrical conduction in cardiac tissue
hough    computes hough transforms
psolu    solves using block chaotic relaxation
wave     solves the wave equation on a square grid with periodic boundary
fish     simulates sharks and fishes in a two-dimensional sea
mandel   performs mandelbrot set calculation
Table 8: Experimental Workload Collection
The workload used for our experimentation was developed independently from our project, in
an effort to prevent unconscious attempts at making design decisions that might affect our results.
For most of the applications that comprise our workload collection (listed in Table 8), there are
versions written in both UMA and NUMA styles. The exception is msort, for which we have no
NUMA version. The NUMA version is a highly-tuned implementation of the program written to
optimize memory reference locality assuming a static policy and using programming techniques
such as manually placing shared data pages or making explicit copies of read-only data structures.
The UMA version does no such NUMA-specific memory management. As one would expect, the
NUMA version of an application is typically more complicated, less portable, and much more
difficult to write. For each application in our workload collection, we began our study by "tuning"
the parameter settings to achieve the best possible performance for that application on the GP1000
within the limits of the somewhat ad hoc tuning process. Once we arrived at those parameter
settings for an application, we designated them as the default settings for that application. In the
experiments, the default settings for all but the parameter being varied were used. These settings
are indicated in the figure captions.
In the plots of DUnX performance, there are generally two heavy lines that mark the levels of
performance obtained by the UMA and NUMA versions of the application program using static
page placement (the upper line is the UMA result, and the lower the NUMA result). Each diamond
in a plot is an experimental data point obtained with the UMA version of the program run under
our dynamic policy with the corresponding parameter settings (multiple trials were done to check
validity of data). The thin solid line plots the mean values. In all of the DUnX plots, time on the
y-axis is measured in elapsed seconds.
5.2 The Importance of Dynamic Page Placement
Perhaps the most important question to answer is whether dynamic page placement is worth
pursuing. The model predictions for the instances of synth used in the validation study provide
evidence that dynamic page placement can improve the performance of some applications. For
example, the results of Figures 3, 4, and 5 show that for several synth instances, the cache and/or
ideal policies perform better than the static policy. With the appropriate k mig and k rep values
(the minimal reference counts necessary to justify a page migration or replication), the ideal policy
will never perform worse than the static policy, though in many cases, it will perform better. Since
the ideal policy performs better than the cache policy in many cases, the investigation of more
sophisticated dynamic policies is worthwhile. The validation results obtained by running synth on
DUnX also confirm that dynamic page placement is worth investigating.
To answer this question in the context of actual programs and practical policies in DUnX, we
consider the performance of three application/policy combinations. The UMA version of an application
run under the static page placement policy serves as a base case measure, since it is the case
in which the NUMA problem has not been addressed by either the application programmer or the
operating system. If we assume that the NUMA versions are well written, in the sense that they
represent successful attempts at addressing the NUMA problem, then we can consider the difference
in performance between the UMA version of the application run with static page placement
(a combination denoted by UMA/Static) and the NUMA version run under static page placement
(NUMA/Static) as the cost of the NUMA problem. The performance of the NUMA/Static combination
serves as a performance goal, in the sense that if we achieve that level of performance through
some other method, we can consider that method successful. In our case, the other method is the
dynamic multiple-copy page placement policy implemented in our DUnX kernel, individually tuned
for each application in our workload. Thus, the third combination of interest in our experiments is
UMA/Dynamic.
In Table 9 we give the raw completion time data for the three application/policy combinations.
The table labels U/S, N/S, and U/D correspond to the UMA/Static, NUMA/Static, and
UMA/Dynamic combinations, respectively.
Program  U/S       N/S       U/D
msort    336.7s    -         80.2s
gauss    1833.1s   174.7s    238.1s
hh3d     1102.2s   628.5s    722.9s
hough    180.4s    154.0s    96.8s
psolu    3796.0s   530.7s    577.6s
wave     465.0s    124.4s    251.0s
fish     86.3s     85.5s     105.4s
mandel   1160.6s   1155.9s   1168.4s
Table 9: Absolute Performance
In most cases, the UMA/Dynamic combination performs significantly better than the UMA/Static
combination, thus showing that the operating system can indeed prove effective at addressing the
NUMA problem. In many instances, the performance of the UMA/Dynamic combination approaches
that of the NUMA/Static combination, and in fact, for the hough application the UMA/Dynamic
combination outperforms the NUMA/Static combination. This last result indicates that the NUMA
version of the hough application must not be optimal, since whatever the operating system is able
to do to further improve performance, the
applications programmer could also have done (most likely more efficiently). The data also show
that for the fish and mandel applications, dynamic multiple-copy page placement serves only to
degrade performance. Since the hand-tuned NUMA/Static versions of fish and mandel fail to
perform significantly better than their UMA/Static counterparts, it is not surprising that the costs
of dynamic placement outweigh the limited potential for benefits.
Results presented throughout the remainder of this section support the value of dynamic page
placement, in addition to answering other questions.
5.3 The Importance of Page Replication
Intuitively, page replication should be desirable, and favored over migration or single-copy static
placement, for applications which have a significant amount of read-only sharing. However, single-copy
policies would be simpler to implement. The model with Workload 1 can be used to investigate
the impact of multiple-copy page placement by comparing performance of the static, MOR, and
cache policies (see Figure 6). Since varying the inter-process reference granularity, r RO , is the only
way to change the number of migrations and/or replications without changing the total number
of references to the RO pages, it is the parameter varied in the experiment. The MOR curve
continues to rise as r_RO decreases, until at r_RO = 10 MOR has an R (mean time between virtual
memory requests) of over 350 μs. This is due to page bouncing. The cache policy performs
significantly better than the static policy, indicating that multiple-copy page placement policies
can improve performance of some applications. The figure also shows that at sufficiently high r RO
values, MOR achieves performance as good as the cache policy, but never better. This result makes
clear the importance of multiple-copy page placement policies for at least one class of applications
(i.e., applications with a significant amount of RO pages). Other experiments not presented here
support similar conclusions with respect to RM pages.
Figure 6: Workload #1 Model Experiments (R is in microseconds)
Given the predetermined reference patterns generated by real programs, the approach to experimentally
investigating the choice between replication and migration is to vary the policy. The
recent-mod parameter of the DUnX parameterized policy controls the migration versus replication
decision. The results of varying recent-mod for the gauss application on the GP1000 are given in
Figure 7.
Figure 7 indicates that the higher the preference for replication over migration, the better the
measured performance. Since the primary mode of sharing in gauss involves reading pivot rows
of the matrix, which are never modified once they become pivot rows, the value of replication
is not surprising. Figure 7 shows that when replication is always chosen over migration on read
faults, performance of the UMA version of gauss is nearly as good as that of the
NUMA/Static version of the program. Intermediate recent-mod values result in performance better
than without replication (the recent-mod < 0 case), but fail to take advantage of some potential
page replications that further improve performance in the recent-mod = ∞ case. Results of
recent-mod experiments with the psolu, hough, and wave applications on the GP1000 resemble
those obtained for the gauss program. Based on our analytic results, the improved performance of
these applications under a policy favoring page replication suggests that they share a substantial
amount of data in a read-only fashion (i.e., pages of type RO and RM ).
Figure 7: Effects of recent-mod on gauss/GP1000 Measured Performance
5.4 The Effects of Coherency Faults on Performance
Workload 2 is used to investigate the performance of a policy that implements page migration (for
workloads in which that is the appropriate operation) through replication/coherency fault pairs
(RC migrations). For this experiment (see Figure 8), we vary r_SS and w_SS together. With this
workload consisting only of PR and SS pages, the workload model predicts that, with the ideal
policy, no page replications or coherency faults will occur. For sufficiently high r SS , however, the
ideal policy predicts a positive q m value. Values of r SS and w SS below 1000 are not considered,
since for such values the ideal policy is the same as the static policy. Even in the worst case of
r SS and w SS values between 1000 and 10,000, the penalty for using RC migrations is not excessive.
This is fortunate, since in a real system, the memory management software has no way of knowing,
at fault time, whether it is best to migrate or replicate a particular page. Thus, RC migrations are
likely to be fairly common in real systems.
The msort application is an example of an application that does not benefit when replication is
preferred over migration on the GP1000. The data are shown in Figure 9. This behavior is explained
by the fact that there is no read-only sharing in msort which can benefit from page replication. As
a result, when we see a very slight performance degradation. This degradation
is due to using replication/coherency fault pairs to migrate pages since a replication/coherency
fault pair is more costly than simply migrating the page. The weakness of the negative impact of
coherency faults is consistent with the analytic predictions of Figure 8, and is because the cost of
incorrectly choosing replication over migration (a coherency fault) is only 50% more expensive in
our DUnX implementation on the GP1000. Our analytic model predicts that avoiding coherency
faults may be more important for architectures in which processing of a coherency fault is very
R
r SS and w SS
Migration vs. Replication/Coherency Fault Pairs
Migration
Replication/Coherency 222 2 22
Figure
8: Workload #2 Model Experiments (R is in microseconds)100200300-1
Time
secs
recent-mod
Effect of recent-mod on msort Performance
Dynamic Policy Points 3
Average Dynamic Policy
UMA Static
Figure
9: Effects of recent-mod on msort/GP1000 Measured Performance
e SS
Effects of SS Page Errors
\Theta \Theta \Theta \Theta \Theta
\Theta
\Theta
Figure
10: Workload #3 Model Experiments (R is in microseconds)
expensive. Experimental results with a DUnX implementation on the BBN TC2000, reported
in [20], also suggest this to be the case.
5.5 The Effects of Policy Errors on Performance
The success of the ideal policy promotes the development of sophisticated policies that attempt to
approximate the ideal policy by selectively limiting page movement activity in some way. However,
such policies are bound to make mistakes either by being too aggressive about moving pages and
encountering page bouncing or by being too conservative and passing up desirable opportunities.
In the analytic model, errors in determining the level of dynamic placement activity are represented
in the error policy. Correctly handling PR and RO pages should not prove difficult for
any reasonable policy implementation, so we assume that no such errors will be made (i.e., we let
e_PR = e_RO = 0). For RM pages with w_RM > k_rep and for SS pages, an incorrect policy choice is
to use a remote reference, so clearly the worst-case performance degradation is only to the level of the
static policy. The more interesting case is when the incorrect policy choice is to replicate (for RM pages
with w_RM < k_rep) or migrate (for SP pages), since this can result in performance worse than that
of the base-case static policy.
We use Workload 3 to consider SS page errors. In Figure 10, we plot R versus e_SS for five
different r_SS and w_SS values. For each of the r_SS and w_SS values, the performance of the error
policy with e_SS = 0.001 is approximately the same as the ideal policy performance. As expected,
for the larger r_SS values, performance eventually degrades to that of the static policy
(R = 9.2 μs). The r_SS = 1000 case differs in that the ideal performance is nearly identical
to that of the static policy, and performance of the error policy degrades as e_SS increases until
it is identical to that of the cache policy (which is worse than that
of the static policy). This is because the default error policy k_mig parameter is greater than
the r_SS value of 1000, so policy errors result in undesirable page migrations rather than missed
opportunities. In effect, SS pages with low r_SS values behave like SP pages. In fact, the natural
division between the two page classes is at r_c = k_mig.
Figure 11: Workload #4 Model Experiments (R is in microseconds)
The next case (Workload 4) considers SP page errors. As shown in Figure 11 we plot R versus
e SP for five r SP and w SP values. Performance at higher e SP values is considerably worse than
we encountered with Workload 3. This poor performance is due to page bouncing behavior. To
better see how quickly page bouncing becomes a concern, the data of Figure 11 are plotted again in
Figure 12 for only smaller R values (the key is not shown in this figure due to space limitations, but
it is the same as in Figure 11). We see that for smaller r SP and w SP values, performance quickly
degrades for the error policy as e_SP increases from 0.001. The most significant result, however, is
that for any e_SP value, performance is never better than with the static policy (R = 9.2 μs).
The questions naturally arise of whether the migration costs derived from our implementation
are simply too high and how the behavior might change if page migration and fault handling could
be made significantly faster. Repeating the Workload 4 tests, with r_SP and w_SP held fixed, for five
different parameter settings for migration costs (expressed as percentages of the default values used
previously) exploits the ability of the model to explore different architectural parameters.
Figure 12: Close-Up of Workload #4 Model Experiments (R is in microseconds)
Figure 13 shows that the trends in the curves as e_SP increases remain the same as in Figures 12 and 11,
although the magnitude of the impact of page bouncing changes with different migration costs.
The model predictions indicate that it is better to err on the side of the more conservative
approaches (i.e., it is better to miss a migration opportunity than to suffer unwanted migra-
tions). Limiting page movement is controlled by the freezing/defrost mechanism in DUnX. The
freeze-window parameter essentially controls the imposition of freezing to limit the amount of
dynamic page placement activity. When freeze-window is set to zero, there is no limit on the
frequency of page migrations and coherency faults, and for most of our applications on both the
GP1000 and TC2000, the page bouncing problem sets in, resulting in incredibly poor performance.
In order to prevent such situations, higher freeze-window values must be used. However, if
freeze-window is set too high, the prevention or delay (until defrost) of desirable migrations and
replications becomes a real possibility.
The results of our freeze-window experiments with psolu, shown in Figure 14, are typical. The
plot shows that performance with lower freeze-window values suffers relative to higher values. If
we let freeze-window go to zero, performance degrades to the point that we have never been able
to let the computation complete. An important characteristic of the psolu freeze-window results
is that once the freeze-window setting is "high enough," further increases have little effect on
performance. This is true of the hough, gauss, hh3d, and msort results as well. This suggests that
the potential problem of delaying desirable operations is not a concern for these applications on
these architectures, either because the effects of such delays are negligible (e.g., delays are short
since defrost comes soon), or because there are few such operations. We suspect that a combination
of the two factors is the reason.
Figure 13: Workload #4 Model Experiments with Different Migration Costs (R is in microseconds)
Figure 14: Effects of freeze-window on psolu/GP1000 Measured Performance
6 Summary
Dynamic multiple-copy page placement is an important operating system technique for dealing
with NUMA memory management. We have investigated issues related to dynamic multiple-copy
page placement through a mixture of measurement of an operating system implementation running
a real workload and of applying an MVA-based model.
We checked the validity of our model by comparing its performance predictions with experimental
data obtained from running a synthetic program on a BBN GP1000 multiprocessor with
the DUnX operating system kernel. In most of the cases, our predictions were quite accurate. Substantial
errors, however, did occur in some cases due to "fuzzy" phase transitions in the synthetic
program and a certain race condition in the DUnX kernel. Introducing barriers into the synthetic
program caused the differences to become insignificant. The need to introduce barriers in some
cases highlights an essential distinction. We can only say with confidence that the conclusions that
we draw from our model are valid to the extent that real programs conform to our workload model.
We identified several important questions and constructed experiments with the implementation
and the model to provide answers. Experiments with DUnX compared the performance of programs
in our collected workload suite, each with an individualized policy tuning, to hand-tuned NUMA
versions of the same programs (representing a performance target). Policy parameters were varied
to effect a range of dynamic placement activity in response to a given program. Using the model,
we compared the relative performance of an approximate ideal policy with several other policies
(not all implementable) for which analytic modeling is possible. The ideal policy always makes
the correct choice among remote reference, migration, and replication. The alternatives considered
(static, MOR, cache, RC, and error) sometimes make incorrect policy decisions. We varied workload
parameters to observe a range of behaviors.
The results of these experiments support the following conclusions:
1. We confirmed the effectiveness of dynamic placement policies. Experimentally, the measured
performance of the UMA versions of the workload programs running with appropriate tunings
often approaches the performance of the hand-tuned NUMA versions. The modeling results
show that the ideal and cache policies can improve the performance of some workloads over
the performance with the static policy.
2. Replication is an important feature. The model identifies workload characteristics for which
multiple-copy policies perform significantly better than single-copy policies. The poor performance
of the single-copy policies for such workloads is found to be the result of page
bouncing, a phenomenon similar to the page thrashing sometimes encountered in traditional
virtual memory paging systems. Varying the recent-mod parameter in DUnX, which changes
the preference for replication over migration, shows the importance of providing replication
for real workloads.
3. Replication/coherency fault pairs can be used to migrate pages effectively. The extra overhead
associated with this type of page migration, called an RC migration, was found to be fairly
reasonable. This is a fortunate result, since intuition says that RC migrations are likely to
be fairly common in real systems supporting multiple-copy page placement.
4. Given the choice between too much dynamic placement activity (with its potential to lead to
page bouncing) and too little (missing opportunities for local access), it appears better for
a policy to be conservative. This is captured in the model by the difference between errors
made on SS pages and errors made on SP pages in the error policy. Errors handling SS pages
with r_SS > k_mig degrade performance relative to the ideal policy, but not below that of the
static policy. Errors handling SP pages, on the other hand, can result in performance far
worse than that of the static policy. Varying the freeze-window parameter in DUnX adjusts
the aggressiveness of the policy to move pages. Our experimental results indicate that it is
more important to limit bouncing behavior than to take advantage of every possible desirable
migration or replication. This is consistent with the predictions of our analytic model.
--R
Weak ordering - a new definition
Comparison of hardware and software cache coherence schemes.
Scheduling and Resource Management Techniques for Multiprocessors.
Competitive management of distributed shared memory.
Competitive algorithms for replication and migration problems.
Simple but effective techniques for NUMA memory management.
NUMA policies and their relationship to memory architecture.
Experience with mean value analysis models for evaluating shared bus throughput-oriented multiprocessors
The implementation of a coherent memory abstraction on a NUMA multiprocessor: Experiences with Platinum.
Memory access dependencies in shared-memory multiprocessors
Performance evaluation of memory consistency models for shared-memory multiprocessors
Memory consistency and event ordering in scalable shared-memory multiprocessors
Page table management in local/remote architectures.
Reference history
Memory coherence in shared virtual memory systems.
A hypercube shared virtual memory system.
Critical factors in NUMA memory management.
Dynamic page migration in multiprocessors with distributed global memory.
Analysis of critical architectural and program parameters in a hierarchical shared-memory multiprocessor
An accurate and efficient performance analysis technique for multiprocessor snooping cache-consistency protocols
Performance analysis of hierarchical cache-consistent multiprocessors
--TR
Memory coherence in shared virtual memory systems
An accurate and efficient performance analysis technique for multiprocessor snooping cache-consistency protocols
Page table management in local/remote architectures
A mean-value performance analysis of a new multiprocessor architecture
Reference history, page size, and migration daemons in local/remote architectures
Simple but effective techniques for NUMA memory management
The implementation of a coherent memory abstraction on a NUMA multiprocessor: experiences with platinum
Memory Access Dependencies in Shared-Memory Multiprocessors
Performance analysis of hierarchical cache-consistent multiprocessors
Analysis of critical architectural and programming parameters in a hierarchical
NUMA policies and their relation to memory architecture
Performance evaluation of memory consistency models for shared-memory multiprocessors
Experience with mean value analysis model for evaluating shared bus, throughput-oriented multiprocessors
Exploiting operating system support for dynamic page placement on a NUMA shared memory multiprocessor
Comparison of hardware and software cache coherence schemes
Experimental comparison of memory management policies for NUMA multiprocessors
The robustness of NUMA memory management
Page placement for non-uniform memory access time (NUMA) shared memory multiprocessors
An analysis of dynamic page placement on a NUMA multiprocessor
Weak ordering - a new definition
Memory consistency and event ordering in scalable shared-memory multiprocessors
Scheduling and resource management techniques for multiprocessors | local/remote NUMA architecture;experimental DUnX operating system kernel;memorysystem performance;storage allocation;remotely referenced;Index TermsNUMA memory management;experimental data;parallel programs;BBNGP1000;approximate mean-value analysistechniques;dynamic multiple-copy pageplacement;model predictions;highly parameterized dynamic page placement policy;storage management;shared memory systems;nonuniform memory access time;replication/coherency fault errors;parallel programming;shared-memory architecture;analytic model |
629154 | An Efficient Heuristic for Permutation Packet Routing on Meshes with Low Buffer Requirements. | Even though exact algorithms exist for permutation routing of n^2 messages on an n*n mesh of processors which require constant size queues, the constants are very large and the algorithms very complicated to implement. A novel, simple heuristic for the above problem is presented. It uses constant and very small size queues (size = 2). For all the simulations run on randomly generated data, the number of routing steps required by the algorithm is almost equal to the maximum distance a packet has to travel. A pathological case is demonstrated where the routing takes more than the optimal number of steps, and it is proved that the upper bound on the number of required steps is O(n^2). Furthermore, it is shown that the heuristic routes inversion, transposition, and rotations in optimal time, three special routing problems that appear very often in the design of parallel algorithms. | Introduction
An important task in the design of parallel computers is the development of efficient parallel
data transfer algorithms [1] [9]. The two fundamental performance measurements here are the
number of parallel steps required to complete the message (packet) transfer and the additional
buffer (queue) size needed for queueing in each processor. We study the packet routing problem,
i.e., the problem of routing the right data (packets) to the right processors fast, given an
interconnection network of processors. For this routing to be efficient, the processors must be
able to communicate very fast with each other and the size of queues (buffers) created at any
processor must be constant and very small. The requirement of small and constant buffer area
for each processor is very important because it makes a parallel architecture scalable.
In this paper, we present a novel technique for packet routing on a mesh (n×n array) of
processors. Our technique builds on the well-known odd-even transposition method [2]. The
main contribution of our technique is that it uses very small queues (buffer area for exactly
2 packets is needed). The algorithm is very simple and does not require complex operations
for the maintenance of the buffers. So, it is a good candidate for hardware implementation.
Furthermore, our experimental results show that the number of steps required to complete the
routing is almost equal to the maximum distance a packet has to travel. Since the data for
our experiments were obtained by random number generators, they suggest that the algorithm
performs optimally with high probability.
We have chosen the mesh parallel architecture because its simple and regular interconnection
pattern makes it especially suited for VLSI implementation. In addition, since the Euclidean
distance between neighboring processing elements is constant on the mesh, the time needed for
communication between any pair of connected elements is also constant. Furthermore, although
the mesh has a large diameter (2n \Gamma 2 for an n \Theta n mesh), its topology is well matched with
many problems.
Previous work on packet routing includes deterministic [4] [5] [6] and probabilistic [3] [9]
approaches. The trivial greedy algorithm routes the packets to the correct column and then to
the correct row in 2n \Gamma 2 steps. The size of the queues, however, can be as bad as O(n). The
nontrivial solutions given to this problem are based on parallel sorting algorithms [7] [8]. Kunde
[4] [5] was the first to use parallel sorting to obtain a deterministic algorithm that completes
the routing in 2n + O(n/f(n)) steps with queues of size O(f(n)). Later on, Leighton, Makedon
and Tollis [6] derived a deterministic algorithm that completes the routing in 2n \Gamma 2 steps using
constant size queues.
The first probabilistic algorithm, derived by Valiant and Brebner [9], completed the routing
in 3n steps using O(log n) size queues. Later on, Krizanc, Rajasekaran, and Tsantilas [3]
derived a probabilistic algorithm that routes the packets in 2n + O(log n) steps using constant
size queues. All probabilistic algorithms route the packets correctly with high probability. It
is important to note here that, although two of the already known algorithms use constant-size
queues, the size of the constant is too large and, as the authors admit, "the constant bound in
the resulting queuesize is not practical, even for moderate values of n ( say n - 100 )". Thus,
efficient heuristics are needed, such as the one we propose, for practical applications in the design
of parallel computers.
2 Preliminaries
An n \Theta n mesh of processors is defined to be a graph G = (V, E), where V = {P i,j : 0 <= i, j <= n - 1}
and an edge (P i,j , P k,l ) belongs to E if |i - k| + |j - l| = 1. The n \Theta n mesh is illustrated
in Figure 1. At any one step, each processor can communicate with all of its neighbors by
the use of bidirectional links (called channels). We define the distance between two processors
P i1,j1 and P i2,j2 to be |i1 - i2| + |j1 - j2|; two processors P 1 and P 2 are neighbors if their distance
is 1. It is convenient for the description of our algorithms to talk about the
east, the west, the north and the south neighbors of a given processor. Formally, for processor
P i,j the east and west neighbors are the processors P i,j+1 and P i,j-1 in the same row, and
the north and south neighbors are the processors P i-1,j and P i+1,j in the same column.
Figure 1: The 2-dimensional mesh.
A processor P i,j is said to be on the
boundary of the mesh if i = 0, i = n - 1, j = 0, or j = n - 1. Processors on the boundary of
the mesh do not have all of their four neighbors defined. All processors are assumed to work in
a synchronous MIMD model.
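The mesh conventions above are easy to pin down in code. The following is a minimal sketch (not code from the paper): processors are identified by their (row, column) coordinates, the distance is the Manhattan distance, and boundary processors simply have fewer than four neighbors.

```python
def neighbors(i, j, n):
    """Return the (row, col) coordinates of the existing neighbors of processor P[i][j]
    on an n x n mesh; boundary processors have fewer than four neighbors."""
    candidates = [(i, j + 1), (i, j - 1), (i - 1, j), (i + 1, j)]  # east, west, north, south
    return [(r, c) for r, c in candidates if 0 <= r < n and 0 <= c < n]

def distance(p1, p2):
    """Manhattan distance between two processors; neighbors are at distance 1."""
    return abs(p1[0] - p2[0]) + abs(p1[1] - p2[1])

# Example: corner processor (0, 0) of a 4 x 4 mesh has only two neighbors.
assert len(neighbors(0, 0, 4)) == 2
assert distance((0, 0), (3, 3)) == 6   # 2n - 2 for n = 4, the mesh diameter
```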
In a permutation problem, each processor has one packet to transmit to another processor.
At the end, each processor receives exactly one packet (1-1 routing). The problem here is to
route all packets to their destinations fast and without the use of large additional storage area
(buffers) in each processor.
Originally, the odd-even transposition method was used for sorting the elements of a linear
array. This method works as follows: At odd time instances, the processors that are at odd
array positions compare their contents with the contents of their east neighbors. A decision is
made on whether an exchange of their contents will occur. At even time instances, a comparison
and a possible exchange occurs between the processors at even positions of the array and their
east neighbors. Figure 2 shows the comparisons that are performed at time instances t = 0 and t = 1 for
an array of 8 elements.
Figure 2: Comparisons performed by the odd-even transposition method at time instances t=0 and t=1 for an array of 8 elements.
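For concreteness, here is a small Python sketch of the odd-even transposition method just described, written as a sequential simulation of the parallel steps; it is illustrative only.

```python
def odd_even_transposition_sort(a):
    """Simulate odd-even transposition on a linear array: at even steps the processors
    at even positions compare (and possibly exchange) with their east neighbors, at odd
    steps the processors at odd positions do."""
    a = list(a)
    n = len(a)
    for t in range(n):                      # n steps suffice to sort n elements
        for i in range(t % 2, n - 1, 2):    # pairs compared "in parallel" at step t
            if a[i] > a[i + 1]:
                a[i], a[i + 1] = a[i + 1], a[i]
    return a

print(odd_even_transposition_sort([5, 1, 7, 3, 0, 6, 2, 4]))  # [0, 1, 2, ..., 7]
```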
3 The Routing Algorithm
Our routing algorithm always tries to reduce the summation of the distances that all packets
have to travel. Let D(t) and D(t + 1) respectively be the total distance that all packets have
to travel at time instances t and t + 1. Then we will have that D(t + 1) <= D(t), and
it is trivially true that, if the equality holds, then at time instance t + 2 we have that
D(t + 2) <= D(t + 1) - 1 (see also Section 4.4). This guarantees that eventually all packets will reach their destinations.
At time t, let processors P i and P i+1 contain packets p i and p i+1 and let the distances
these packets have to travel be d i (t) and d i+1 (t) respectively. We say that the distribution of the
packets in processors P i and P i+1 at time t is normalized if the exchange of the contents of these
processors with each other does not reduce the total distance D(t) and also max(d i (t), d i+1 (t)) <= max(d' i (t), d' i+1 (t)),
where d' i (t) and d' i+1 (t) are the distances the packets have to
travel after the exchange. An example of such a normalization of distances is shown in Figure 3.
We call the packets whose contents are compared, compared packets.
Figure 3: Examples of Normalization.
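The normalization rule can be stated as a small predicate. The sketch below is my reading of the definition above (not code from the paper): given the distances of two compared packets before and after a hypothetical exchange, it decides whether the current placement is already normalized; the algorithm exchanges the packets exactly when it is not.

```python
def is_normalized(d_left, d_right, d_left_after, d_right_after):
    """True if the current placement of two compared packets is normalized:
    exchanging them would not reduce their total distance and would not
    reduce the larger of the two distances."""
    return (d_left_after + d_right_after >= d_left + d_right and
            max(d_left, d_right) <= max(d_left_after, d_right_after))

# Example: equal total distance, but the exchange reduces the maximum -> not normalized,
# so the algorithm would exchange the two packets.
assert not is_normalized(d_left=6, d_right=2, d_left_after=3, d_right_after=5)
```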
Algorithm Route() /* High level description. */
- Assume that processors in the same row of the mesh form a linear array in which odd-even
transposition takes place.
- Perform odd-even transpositions on the rows of the mesh, as described in Section 2, such
that the distances the compared packets have to travel are always normalized (packets are
moving only horizontally).
- If a packet reaches the processor that lies at its column destination, and no other packet
at that processor wants to move vertically in the same direction (south or north), it starts
its vertical movement (Figure 4).
Otherwise, the packet that has to go further in the column moves vertically, while the
other packet participates in the odd-even transposition that is taking place on the row
(Figure 5).
Figure 4: Uninterrupted vertical movement of packets.
Figure 5: Interrupted vertical movement of packets.
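To make the horizontal phase concrete, the following sketch simulates odd-even transpositions with the normalization rule on a single row, ignoring the vertical phase. It is an illustration under that simplification (only the horizontal component of each distance is used), not the authors' implementation.

```python
import random

def route_one_row(dest):
    """Simulate the horizontal phase of Route() on one row: dest[i] is the destination
    column of the packet currently at column i (a permutation of 0..n-1)."""
    def normalizing_exchange(i, a, b):
        # Exchange the packets with destinations a (at column i) and b (at column i + 1)
        # if that reduces their total distance, or keeps the total equal while reducing
        # the larger of the two distances (the normalization rule).
        before = (abs(a - i), abs(b - (i + 1)))
        after = (abs(b - i), abs(a - (i + 1)))
        return sum(after) < sum(before) or (sum(after) == sum(before)
                                            and max(after) < max(before))
    d, n, steps = list(dest), len(dest), 0
    while any(d[i] != i for i in range(n)):
        for i in range(steps % 2, n - 1, 2):          # pairs compared at this step
            if normalizing_exchange(i, d[i], d[i + 1]):
                d[i], d[i + 1] = d[i + 1], d[i]
        steps += 1
    return steps

perm = random.sample(range(16), 16)
print(route_one_row(perm), "steps for one row of 16 packets")
```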
Theorem 1 Algorithm Route() needs a buffering area of exactly 2 packets.
Proof Consider the situation where a queue can be created. Two packets enter a given node,
the one is moving vertically, and, the other is moving horizontally but must change the direction
of its movement because it has reached its column destination. If both want to move vertically
in the same direction one must wait. Thus, a queue is created. In our routing algorithm this
will not happen. Our algorithm has the property that, any two packets moving horizontally but
in opposite directions cannot enter the same processor at the same time. This property comes
from the application of the odd-even transposition. From the two packets that want to travel
in the same vertical direction, the one that has to go further is chosen. The other, instead of
being placed in a queue, participates in the odd-even transposition. So far, we have used buffer
area of only one packet (the area for the packet that initially was in the processor). It remains
to show how the communication between neighboring processors is implemented. Remember
that, at any step, a processor communicates in the horizontal direction only with the neighbor
on its east or on its west, but not with both. Consider any two neighboring processors that will
communicate during the current step of the algorithm. Each one will transmit its contents to
the other and, at the same time, it will keep a copy of its original packet. After the packets have
been received, each processor computes the distance the two packets still have to travel before
and after the exchange. If the distances are normalized after the exchange, then the previous
contents of the packets are discarded. If the distances after the exchange are not normalized,
then the exchange is ignored. Observe that both of the neighboring processors are doing exactly
the same computations on the same data. Thus, they will reach compatible results. Also observe
that by using the above described scheme, buffer area for only one extra packet is needed. This
completes the proof.
4 Performance
In this section, we present simulation results of the algorithm based on random routing problems
on a 100 \Theta 100 mesh connected array. We present a case where our algorithm takes more than
2n steps, and, we prove that the algorithm completes the routing within O(n^2) steps.
Table 1: Experimental results for random routing problems.
4.1 Experimental Results
We simulated our algorithm using random routing input data on a 100 \Theta 100, a 50 \Theta 50 and a
20 \Theta 20 mesh connected array. The data were generated with the help of the random number
generator function drand48() on a SUN workstation. Each experiment was set to start from
a different point in the random sequence generated by this function. We run hundreds of
experiments and we got near optimal performance on all of them (off by at most 1 routing step).
Table 1 shows results from some of our simulations on a 100 \Theta 100 mesh connected array. The
second column of this table contains the maximum distance that any packet has to travel in the
specific experiment ( This distance is always less than or equal to 198 since we are working on
a 100 \Theta 100 mesh). The fact that some entries in this column are the same does not mean that
the data in the corresponding experiments are identical. The destinations of the 10000 packets
in each experiment are totally different. The third column of Table 1 contains the number of
the routing steps that are needed for the completion of the routing. Note that, this number is
almost the same (off by at most 1) as the maximum distance in the corresponding experiment.
Figure 6: A permutation of the packets that results in non-optimal routing time.
This implies that the performance of the heuristic is near optimal.
4.2 A Low Performance Packet Routing Problem
In spite of the good experimental behavior of algorithm Route(), there exist permutations of the
packets that result into routing time that is not optimal. In this section, we present such an
initial setting of the packets. Assume that the destinations of the packets on the n \Theta n mesh is
as shown in Figure 6.
- All packets in the first row and in columns 0 through the column immediately to the west of a
distinguished packet P are destined for the
south-east √n × √n corner of the mesh.
- Packet P, initially in the first row,
is destined for a processor in a column to its east.
Observe that, all packets that are initially located to the west of packet P have to travel
further than P . So, during the odd-even transposition in the first row, packet P moves west
since we always try to normalize the distances the packets have to travel. When it reaches the
processor at position (0,0) it starts moving to the east. When it reaches its destination column, it
moves vertically down until it arrives at its destination. The total movement of packet P takes
about 3n routing steps, which is more than the optimal 2n - 2.
Table 2: Experimental results on the number of delayed packets for random routing problems
where non-optimal solutions are forced to occur.
We have to mention that although there exists a case for which the required routing time
is not optimal, no such case was produced by the random number generator. As is indicated
in Table 2, the required number of steps for a problem for which the above "bad" situation is
forced to occur, depends only on the distance the "bad" packet P has to travel. Furthermore,
as it is indicated in Table 3, only a very small portion of the packets is not received in optimal
time ("delayed" packets). More specifically, in all experiments only 38-40 packets out of an
initial load of 10000 packets are "delayed" packets. When we traced these packets back to their
origins, we found out that all of them originated in the first row of the mesh. The obvious
conclusion from our experiments is that, the performance of the routing algorithm when such a
"bad" situation occurs seems to depend only on the packets that constitute the "bad" situation.
Again, we have to remind the reader that Tables 2 and 3 contain entries for a small portion of
our simulations.
Table 3: Experimental results for random routing problems where non-optimal solutions are
forced to occur.
4.3 An Upper Bound
Given the low performance routing problem described in the previous section, it is natural to
ask for an upper bound on the number of routing steps required by algorithm Route() in order
to solve any permutation routing problem. It is trivial to see that algorithm Route() cannot
take more than O(n^3) routing steps to route any permutation routing problem (at any routing
step at least one packet will approach its destination, so the total distance that all packets have
to travel is reduced by at least 1; the maximum total distance that all packets have to travel
is O(n^3), which implies that after O(n^3) routing steps all packets have reached their destinations).
However, we will prove that algorithm Route() completes the routing within O(n^2) steps.
Before we proceed with the proof we need the following definitions and lemmata.
Definition: A packet that at a given time instant participates in the odd-even transposition is said to be of
type-H. A row of the mesh is said to be empty at time t if at that time instant no
packet of type-H is on the row.
Lemma 1 Assume a row of the mesh with at most n packets of type-H. Also assume that no
other packet will cross the row in order to reach its destination. Then, after 2n steps the row
will be empty.
Proof Observe that if no packet wants to cross the row from the north or the south, then the
only packets that will participate in the odd-even transposition are the ones originally on that
row. Another obvious observation is that it is impossible for two packets that were switched
at a previous time instant to be compared and switched again. This implies that a packet can
move at most n steps away from its destination. Then, after at most n steps, every packet will
reach its column destination and will leave the row.
Lemma 2 Assume a row of the mesh with at most n packets of type-H. Also assume that at
most k packets want to cross the row in order to reach their destination. Then, after 2n + O(k)
steps the row will be empty.
Proof Similar to that of Lemma 1
Theorem 2 Algorithm Route() will complete the routing of any permutation problem in O(n^2)
routing steps.
Proof We will prove the theorem by showing that every row on the mesh will be empty
after O(n^2) steps. Then, after n steps all packets will reach their destinations since they are
moving vertically and no collisions will happen. Lemma 1 implies that rows 0 and n - 1 will
be empty after 2n steps. We will use Lemma 2 to obtain a more general statement. Consider
any row k, 1 <= k <= n - 2. At most O(n^2) packets want to cross row k
in order to reach their destinations. From Lemma 2 we know that row k will be empty after
2n + O(n^2) routing steps. The above expression gets its maximum value O(n^2).
This proves the theorem.
The above theorem provides an upper bound on the number of routing steps required by
Algorithm Route(). However, we were not able to construct a routing problem that takes O(n^2)
steps for its completion by Algorithm Route(). We conjecture that Algorithm Route() can solve
any permutation routing problem within 4n routing steps. If this is true, then, the analysis
will be tight, since we have constructed a very complicated routing problem that needs about
4n steps for its completion. Also, we will have the first routing algorithm for the mesh that
requires O(n) routing steps and is not based on sorting of its submeshes. The existence of such
an algorithm is an open problem.
4.4 An Interesting Property
In this section we prove a property of Algorithm Route() concerning the way the total distance
that all packets have to travel is reduced during the course of the algorithm. Exploring similar
properties might be the key tool in the effort to prove that a non-sorting based algorithm
terminates after O(n) steps.
Theorem 3 Let S i,t denote the set of packets that are in row i at time t, and D(S i,t , l) denote
the total distance that the packets in set S i,t still have to travel at time l, l >= 0. Then, for every
row i, either D(S i,t , t + 2) <= D(S i,t , t) - 1, or a neighboring row k satisfies D(S k,t , t + 2) <= D(S k,t , t) - 2.
Proof Consider all packets that are in row i at time t. They constitute set S i;t . These packets
can be divided into two categories. Packets that will participate in the odd-even transposition,
already defined as type-H, and packets that will move vertically (to the north or to the south),
we will call type-V . We consider two cases:
Case 1. S i;t contains no packet of type V .
First assume that there are two compared packets that want to travel in opposite
directions. Then, they will be exchanged and we will have that D(S i,t , t + 1) <= D(S i,t , t) - 2.
At the next step these packets may continue their horizontal movement, start moving
vertically, or not move at all. In any case, the total distance is not increased. Thus,
D(S i,t , t + 2) <= D(S i,t , t) - 2.
Now assume that all compared packets want to move in the same direction. Even if an
exchange takes place, the total distance that the packets have to travel remains the same.
There are two possibilities. All the pairs of compared packets
have the same direction, or there are two pairs that want to travel to opposite directions.
If all the pairs have to travel to the same direction, then there must be less than n packets
in the row. If there are n packets and, say, they are all moving to the left, the leftmost one
must be in its correct column, so it must be of type V . But we assumed that no such packet
exists. So, there are less than n packets in the row. This implies that there exist a "gap",
and, a packet must be adjacent to it. During the next step, the packet that is adjacent
to the "gap" will occupy it. Thus, we get that D(S 1. The second
possible case is that there are two pairs of packets that want to move to opposite directions.
Then, during the next step, their adjacent packets will be compared and an exchange will
occur. Thus, we will have that D(S 2. If there are no such adjacent
there must be a "gap". In this case we will have that D(S
Case 2. There is at least one packet of type V .
Let P be such a packet. Then P moves vertically and at time t + 1 we have D(S i,t , t + 1) <=
D(S i,t , t) - 1. Now, we have to consider what will happen in the next step. If packet P
stays at its correct column during the next step, then we have that D(S i,t , t + 2) <= D(S i,t , t) - 1.
Assume now that, in the next step, it is forced to move away from its destination.
Then, we might have that D(S i,t , t + 2) = D(S i,t , t); packet P is now at row k, where k is
i - 1 or i + 1. We will show that D(S k,t , t + 2) <= D(S k,t , t) - 2. The fact that packet P is
forced to move away from its destination implies two things. First of all, there is another
packet at row k which is destined for the same column that packet P is, and, in addition,
it has to go further. This packet forces P to participate in the odd-even transposition.
Secondly, there is another packet that forces P to move out of its column destination.
Thus, there are two packets in set S k,t that each reduce their distance by one step. So, we
have that D(S k,t , t + 2) <= D(S k,t , t) - 2.
5 Routing Problems that Can Be Solved Optimally
In this section, we consider some special forms of permutation routing problems for which algorithm
Route() performs optimally. We prove that when the permutation problem is a rotation,
an inversion, or a transposition, then at most 2n steps are required. Before we proceed to examine
these problems, we prove two useful lemmas that concern permutation routing on a chain
of processors.
5.1 Permutation Routing on a Chain of Processors
Lemma 3 A permutation routing problem on a chain of n processors can be solved in n steps
using the odd-even transposition method.
Proof A way to solve the permutation routing problem is by sorting all packets according to
their destination addresses. Since the odd-even transposition method is used for sorting in linear
arrays, the time required for sorting is also enough for routing. But, we know that in order to
n elements on a linear array of size n, we need exactly n steps if the odd-even transposition
method is used [2].
Lemma 4 A permutation routing problem on a chain of processors can be solved in r routing
steps, where r is the maximum distance a packet has to travel.
Proof We can achieve the number of steps that is stated at the lemma if all packets start
their motion at the same time, and, always travel towards their destinations. Then, nothing can
delay a packet. Since at each step every packet approaches its destination, the packet that has
to travel distance r will do so in r routing steps.
5.2 Rotations
A permutation packet routing problem on an n \Theta n mesh is called an (l, m)-rotation
if the packet initially at position (i, j) is destined for position ((i + l) mod n, (j + m) mod n). If l = 0
we have a horizontal rotation, if m = 0 a vertical rotation, and if both l and m are nonzero a diagonal
rotation. The next theorem shows that for all kinds of rotations algorithm Route() is optimal.
Theorem 4 Given an n \Theta n mesh of processors, algorithm Route() needs at most n + max(l, n - l)
steps in order to complete the routing of any permutation problem that is an (l, m)-rotation, 0 <= l, m <= n - 1.
Proof For the moment, assume that algorithm Route() routes all packets in two consecutive
and time disjoint phases. During the first phase, it routes all packets horizontally until they
reach their column destination. The second routing phase starts when all the packets reach
their column destination. All packets are routed to their final destination along the columns
of the array. Observe now, that when the second phase starts, each processor at any column
has exactly one packet to route vertically. This is so, because in the beginning of the routing
all the packets of the same row are destined for different columns. Thus, in each phase of the
algorithm, we have a permutation problem to solve. Since we use the odd-even transposition
method in order to route the packets during the first phase, we know from Lemma 3 that n
steps are enough. During the second phase the routing is done as described in Lemma 4. So,
we need at most max(l, n - l) steps in order to complete the second phase. Thus, a total of n + max(l, n - l)
steps is required.
In the above analysis, we assumed that the packets are routed in two time disjoint phases,
an assumption that is not true. However, the analysis is still valid. To see why, observe that, all
packets that are destined for the same column will reach that column at the same step. Thus,
all packets that are destined for the same column start their column routing at the same time.
This synchronization permits us to treat the movement of the packets as two separate phases, a
horizontal, and a vertical one. This completes the proof.
5.3 Inversion
A permutation packet routing problem on an n \Theta n mesh is called an inversion if the
packet initially at position (i, j) is destined for position (n - 1 - i, n - 1 - j).
Theorem 5 shows that algorithm Route() performs optimally if the routing problem is an inversion
Theorem 5 Given an n \Theta n mesh of processors, algorithm Route() needs exactly 2n - 1 routing steps
in order to complete the routing of an inversion packet routing problem.
Proof All packets that are destined for column n - 1 - j, 0 <= j <= n - 1, are initially located
at column j. Since the movement of the packets in all columns is similar, all the packets that
are destined for the same column reach that column at the same step. So, as in the proof of
Theorem 4, we can assume in our analysis that the routing is done at two time disjoint phases.
From Lemma 3, we know that the first phase needs exactly n routing steps. The maximum
distance a packet has to travel during the second phase is n \Gamma 1. This is because, all packets
that initially are located in the first row of the mesh have destinations in the last row of the
mesh. By applying Lemma 4, we conclude that the second phase takes exactly n - 1 steps.
Thus, algorithm Route() solves the inversion packet routing problem in exactly 2n \Gamma 1 steps.
5.4 Transposition
A permutation packet routing problem on an n \Theta n mesh is called a transposition if
the packet that initially is at position (i, j) is destined for position (j, i), where 0 <= i, j <= n - 1.
Theorem 6 Given an n \Theta n mesh of processors, algorithm Route() needs exactly 2n - 2 routing
steps in order to solve the transposition routing problem.
Proof Without loss of generality consider packet P that initially is located at position (i, j),
i < j. The case where i > j is treated similarly. Packet P is destined for location (j, i).
Observe that, all the packets to its left are also destined for column i, but, they have to travel
greater distance. More specifically, the packet that is k positions to its left has to travel 2k step
more than P . So, P is "trapped" by the packets to its left. As a result, it moves to the left
until it hits the left boundary of the mesh. Then it moves uninterrupted to the right until it
reaches its column destination. Finally, it moves vertically to its destination. Figure 7 illustrates
the motion of packet P . Note that no interaction of packets occurs during the vertical routing
because the packets that are to the left of column i want to move upwards, while those to the
right want to move downwards. The travel of packet P, initially at position (i, j), takes a total
of j + i + (j - i) = 2j routing steps. This implies that all packets reach their destinations after 2n - 2
steps, since the maximum value that j can take is n - 1.
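The three special permutations of this section are easy to generate, and the maximum Manhattan distance over all packets gives the trivial lower bound on the routing time that the theorems above are measured against. The sketch below is illustrative only and uses the definitions as reconstructed above.

```python
def rotation(n, l, m):
    """(l, m)-rotation: packet at (i, j) goes to ((i + l) mod n, (j + m) mod n)."""
    return {(i, j): ((i + l) % n, (j + m) % n) for i in range(n) for j in range(n)}

def inversion(n):
    """Inversion: packet at (i, j) goes to (n - 1 - i, n - 1 - j)."""
    return {(i, j): (n - 1 - i, n - 1 - j) for i in range(n) for j in range(n)}

def transposition(n):
    """Transposition: packet at (i, j) goes to (j, i)."""
    return {(i, j): (j, i) for i in range(n) for j in range(n)}

def max_distance(perm):
    """Maximum Manhattan distance any packet must travel (a lower bound on routing time)."""
    return max(abs(i - di) + abs(j - dj) for (i, j), (di, dj) in perm.items())

n = 8
print(max_distance(rotation(n, 3, 2)), max_distance(inversion(n)), max_distance(transposition(n)))
```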
6 Conclusion and Further Work
We have presented an efficient heuristic for routing messages (packets) on mesh connected parallel
computers. The main advantage of our algorithm is that it uses buffer area of exactly 2
packets per processor. Furthermore, the algorithm is simple and its experimental behavior indicates
that it performs optimally for random routing problems. It is the first time, according to
our knowledge, that such a heuristic is reported, and its surprisingly good experimental results
make it more important. An interesting problem would be to prove that this algorithm works
optimally with high probability. However, the fact that the movement of any packet depends
on the destinations of its neighboring packets at any time instance makes this problem a very
difficult and challenging one. Another interesting problem is to provide a tighter analysis of
Algorithm Route(). There is a gap between the upper bound of O(n^2) routing steps and the 4n
performance of the worst case routing problem we were able to construct.
Figure 7: The motion of packet P during the routing of a transposition.
--R
"The Torus Routing Chip"
The Art of Computer Programming
"Optimal Routing Algorithms for Mesh-Connected Processor Arrays"
"Routing and Sorting on Mesh-Connected Arrays"
"Packet Routing on Grids of Processors"
"A Routing in an n \Theta n Array With Constant Size Queues"
"An Optimal Sorting Algorithm for Mesh Connected Computers"
"Sorting on a Mesh-Connected Parallel Computer"
"A Scheme for Fast Parallel Communication"
--TR
An optimal sorting algorithm for mesh connected computers
Optimal routing algorithms for mesh-connected processor arrays (extended abstract)
Routing and sorting on mesh-connected arrays (extended abstract)
A 2<i>n</i>-2 step algorithm for routing in an <i>nxn</i> array with constant size queues
The art of computer programming, volume 3
Sorting on a mesh-connected parallel computer
--CTR
Antonios Symvonis , Jonathon Tidswell, An Empirical Study of Off-Line Permutation Packet Routing on Two-Dimensional Meshes Based on the Multistage Routing Method, IEEE Transactions on Computers, v.45 n.5, p.619-625, May 1996 | parallel architectures;parallel algorithms;upper bound;index termsheuristic;transposition;permutation packet routing;low buffer requirements;exact algorithms;meshes;packet switching;optimal time inversion |
629166 | On the Efficiency of Parallel Backtracking. | Analytical models and experimental results concerning the average case behavior of parallel backtracking are presented. Two types of backtrack search algorithms are considered: simple backtracking, which does not use heuristics to order and prune search, and heuristic backtracking, which does. Analytical models are used to compare the average number of nodes visited in sequential and parallel search for each case. For simple backtracking, it is shown that the average speedup obtained is linear when the distribution of solutions is uniform and superlinear when the distribution of solutions is nonuniform. For heuristic backtracking, the average speedup obtained is at least linear, and the speedup obtained on a subset of instances is superlinear. Experimental results for many synthetic and practical problems run on various parallel machines that validate the theoretical analysis are presented. | Introduction
Consider the problem of finding a solution in a state-space tree containing one or more
solutions[10, 28, 26, 6]. Backtracking, also called Depth-first Search, is a widely used technique
for solving such problems because of its storage efficiency [13, 28]. Throughout the
paper, we use the two names interchangeably. We use the acronym DFS to denote backtracking
or depth-first search on state-space trees. There are many variants of DFS algorithms,
each of which is tuned to certain types of problems. In this paper we deal with two important
simple backtracking (which does not use any heuristic information); (ii) heuristic
backtracking (which uses ordering and/or pruning heuristics to reduce search complexity).
A number of parallel formulations of DFS have been developed by various researchers[12,
7, 22, 2, 25, 23]. In one such formulation[23], N processors concurrently perform backtracking
in disjoint parts of a state-space tree. The parts of the state-space searched by different
processors are roughly of equal sizes. But the actual parts of the search space searched
by different processors and the sequence in which nodes of these subspaces are visited are
determined dynamically; and these can be different for different executions. As a result, for
some execution sequences, the parallel version may find a solution by visiting fewer nodes
than the sequential version thus giving superlinear speedup. (The speedup is defined as the
ratio of the times taken by sequential and parallel DFS.) And for other execution sequences,
it may find a solution only after visiting more nodes, thus giving sublinear speedup. This
type of behavior is common for a variety of parallel search algorithms, and is referred to
as 'speedup anomaly' [18, 19]. The superlinear speedup in isolated executions of parallel
DFS has been reported by many researchers [12, 25, 22, 7, 33, 20]. It may appear that on
the average the speedup would be either linear or sublinear; otherwise, even parallel DFS
executed on a sequential processor via time-slicing would perform better than sequential DFS.
This paper considers the average case speedup anomalies in parallel DFS algorithms
that are based on the techniques developed in [23, 17]. Though simple backtracking and
heuristic backtracking algorithms we analyze here use a DFS strategy, their behavior is
very different, and they are analyzed separately. We develop abstract models for the search
spaces that are traversed by these two types of DFS algorithms. We analyze and compare
the average number of nodes visited by sequential search and parallel search in each case.
For simple backtracking, we show that the average speedup obtained is (i) linear when the
distribution of solutions is uniform and (ii) superlinear when the distribution of solutions is
non-uniform. For heuristic backtracking, the average speedup obtained is at least linear (i.e.,
either linear or superlinear), and the speedup obtained on a subset of instances (that are
"difficult" instances) is superlinear. The theoretical analysis is validated by experimental
analysis on example problems such as the problem of generating test-patterns for digital
circuits [3], N-queens, 15-puzzle [26], and the hacker's problem [31].
The result that "parallel backtrack search gives at least a linear speedup on the average"
is important since DFS is currently the best known and practically useful algorithm to solve a
number of important problems. The occurrence of consistent superlinear speedup on certain
problems implies that the sequential DFS algorithm is suboptimal for these problems and
that parallel DFS time-sliced on one processor dominates sequential DFS. This is highly
significant because no other known search technique dominates sequential DFS for some of
these problems. We have restricted our attention in this paper to state-space search on trees,
as DFS algorithms are most effective for searching trees.
The overall speedup obtained in parallel DFS depends upon two factors: search overhead
(defined as the ratio of nodes expanded by parallel and sequential search), and communication
overhead (amount of time wasted by different processors in communication, synchro-
nization, etc. They are orthogonal in the sense that their causes are completely different.
Search overhead is caused because sequential and parallel DFS search the nodes in a different
order. Communication overhead is dependent upon the target architecture and the
load balancing technique. The communication overhead in parallel DFS was analyzed in our
previously published papers[17, 16, 4, 14], and was experimentally validated on a variety of
problems and architectures. 1 In this paper, we only analyze search overhead. However, in
the experiments, which were run only on real multiprocessors, both overheads are incurred.
Hence the overall speedup observed in experiments may be less than linear (i.e., less than N
on N processors) even if the model predicts that parallel search expands fewer nodes than
sequential search. In parallel DFS, the effect of communication overhead is less significant
for larger instances (i.e., for instances that take longer time to execute). Hence, the larger
instances for each problem obey the analyses more accurately than the smaller instances.
The reader should keep this in mind when interpreting the experimental results presented
in this paper.
In Section 2, we briefly describe the two different kinds of DFS algorithms that are
being analyzed in this paper. In Section 3, we review parallel DFS. Simple backtrack search
algorithms are analyzed in Section 4 and ordered backtrack search algorithms are analyzed
in Section 5. Section 6 contains related research and Section 7 contains concluding remarks.
In these experiments, parallel DFS was modified to find all optimal solutions, so that the number of
nodes searched by sequential DFS and parallel DFS become equal, making search overhead to be 1.
2 Types of DFS algorithms
Consider problems that can be formulated in terms of finding a solution path in an implicit
directed state-space tree from an initial node to a goal node. The tree is generated on the
fly with the aid of a successor-generator function; given a node of the tree, this function
generates its successors. Backtracking (i.e., DFS) can be used to solve these problems as
follows. The search begins by expanding the initial node; i.e., by generating its successors.
At each later step, one of the most recently generated nodes is expanded. (In some problems,
heuristic information is used to order the successors of an expanded node. This determines
the order in which these successors will be visited by the DFS method. Heuristic information
is also used to prune some unpromising parts of a search tree. Pruned nodes are discarded
from further searching.) If this most recently generated node does not have any successors
or if it can be determined that the node will not lead to any solutions, then backtracking is
done, and a most recently generated node from the remaining (as yet unexpanded) nodes is
selected for expansion. A major advantage of DFS is that its storage requirement is linear
in the depth of the search space being searched. The following are two search methods that
use a backtrack search strategy.
1. Simple Backtracking is a depth-first search method that is used to find any one
solution and that uses no heuristics for ordering the successors of an expanded node.
Heuristics may be used to prune nodes of the search space so that search can be avoided
under these nodes.
2. Ordered Backtracking is a depth-first search method that is used to find any one
solution. It may use heuristics for ordering the successors of an expanded node. It may
also use heuristics to prune nodes of the search space so that search can be avoided
under these nodes. This method is also referred to as ordered DFS[13].
3 Parallel DFS
There are many different parallel formulations of DFS[7, 15, 19, 34, 2, 8, 23] that are suitable
for execution on asynchronous MIMD multiprocessors. The formulation discussed here is
used quite commonly [22, 23, 2, 4, 14]. In this formulation, each processor searches a disjoint
part of the search space. Whenever a processor completely searches its assigned part, it
requests a busy processor for work. The busy processor splits its remaining search space
into two pieces and gives one piece to the requesting processor. When a solution is found
by any processor, it notifies all the other processors. If the search space is finite and has no
solutions, then eventually all the processors would run out of work, and the search (sequential
or parallel) will terminate without finding any solution. In backtrack search algorithms, the
search terminates after the whole search space is exhausted (i.e., either searched or pruned).
The parallel DFS algorithms we analyze theoretically differ slightly from the above description
as follows. For simplicity, the analyses of the models assumes that an initial static
partitioning of the search space is sufficient for good load balancing. But in the parallel DFS
used in all of our experimental results, the work is partitioned dynamically. The reader will
see that there is a close agreement between our experiments and analyses.
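The work-splitting step ("splits its remaining search space into two pieces") can be realized in several ways. The sketch below shows one common strategy (donating alternate untried alternatives at each level of the DFS stack); it is an illustration only, not necessarily the exact scheme of [23].

```python
def split_work(stack):
    """stack[k] is the list of untried alternatives at depth k of the donor's DFS.
    Donate roughly half of the untried alternatives (every other one at each level)
    and return the recipient's new stack; the donor keeps the rest."""
    donated = []
    for level in stack:
        keep, give = level[::2], level[1::2]
        level[:] = keep                 # donor keeps alternate untried nodes in place
        donated.append(give)
    return donated

# Example: a donor with untried alternatives at three depths.
donor = [[1, 2, 3], [4], [5, 6]]
recipient = split_work(donor)
print(donor, recipient)   # [[1, 3], [4], [5]] and [[2], [], [6]]
```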
4 Analysis of Speedup in Simple Backtracking with No Heuristic
Information
4.1 Assumptions and Definitions
The state-space tree has M leaf nodes, with solutions occurring only among the leaf nodes.
The amount of computation needed to visit each leaf node is the same. The execution time
of a search is proportional to the number of leaf nodes visited. This is not an unreasonable
assumption, as on search trees with branching factor greater than one, the number of nodes
visited by DFS is roughly proportional to the number of leaves visited. Also, in this section
we don't model the effect of the pruning heuristic explicitly. We assume that M is the number of
leaf nodes in the state-space tree that has already been pruned using the pruning function.
Both sequential and parallel DFS stop after finding one solution. In parallel DFS, the
state-space tree is equally partitioned among N processors, thus each processor gets a subtree
with M/N leaf
nodes. There is at least one solution in the entire tree (otherwise both parallel
search and sequential search would visit the entire tree without finding a solution, resulting
in linear speedup). There is no information to order the search of the state-space tree; hence
the density of solutions across the search frontier is independent of the order of the search.
Solution density ρ of a leaf node is the probability of the leaf node being a solution. We
assume a Bernoulli distribution of solutions; i.e., the event of a leaf node being a solution is
independent of any other leaf node being a solution. We also assume that ρ << 1.
WN denotes the average of the total number of nodes visited by N processors before
one of the processors finds a solution. W 1 is the average number of leaf nodes visited by
sequential DFS before a solution is found. Clearly, both W 1 and WN are less than or equal
to M .
Since the execution time of a search (in the sequential as well as parallel case) is proportional
to the number of nodes expanded, the speedup is
Speedup = W1 / (WN / N) = (W1 × N) / WN.
Efficiency E is the speedup divided by N. E denotes the effective utilization of computing
resources:
E = Speedup / N = W1 / WN.
4.2 Efficiency Analysis
Consider the search frontier of M leaf nodes being statically divided into N regions, each
with M/N
leaves. Let the density of solutions among the leaves in the i-th region be ρi.
In the parallel case, processor i searches region i, independently until one of the processors
finds a solution. In the sequential case, the regions are arranged in a random sequence and
searched in that order.
Theorem 4.1 If ρ (> 0) is the density in a region and the number of leaves K in the region is
large, then the mean number of leaves visited by a single processor searching the region is
approximately 1/ρ.
Proof: Since we have a Bernoulli distribution,
Mean number of trials = (1 - (1 - ρ)^K) / ρ = 1/ρ - (1 - ρ)^K / ρ.
For large enough K, the second term in the above becomes less than 1; hence,
Mean number of trials ≈ 1/ρ.
Sequential DFS selects any one of the N regions with probability 1/N, and searches it
to find a solution. Hence the average number of leaf nodes expanded by sequential DFS is (see footnote 2)
W1 = (1/N) (1/ρ1 + 1/ρ2 + ... + 1/ρN).
2 The given expression assumes that a solution is always found in the selected region, and thus only one
region has to be searched. But with probability does not have any solution, and another
region would need to be searched. Taking this into account would make the expression for W 1 more precise,
and increase the average value of W 1 somewhat. But the reader can verify that the overall results of
the analysis will not change.
In each step of parallel DFS, one node from each of the N regions is explored simultane-
ously. Hence the probability of success in a step of the parallel algorithm is
1 - (1 - ρ1)(1 - ρ2)...(1 - ρN). This is approximately ρ1 + ρ2 + ... + ρN (neglecting the second order terms, since the ρi's are
assumed to be small). Hence
WN ≈ N / (ρ1 + ρ2 + ... + ρN).
Inspecting the above equations, we see that W1 = 1/HM and
WN = 1/AM, where HM is
the harmonic mean of the ρi's and AM is their arithmetic mean. Since we know that the arithmetic
mean (AM) and harmonic mean (HM) satisfy the relation AM >= HM, we have W1 >= WN.
In particular,
- when the ρi's are equal, AM = HM, therefore W1 = WN. When solutions are uniformly
distributed, the average speedup for parallel DFS is linear.
- when they are different, AM > HM, therefore W1 > WN. When solution densities in
different regions are non-uniform, the average speedup for parallel DFS is superlinear.
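These closed forms are easy to check numerically. The sketch below (an illustration, not the authors' code) computes W1 = 1/HM and WN = 1/AM for a vector of region densities and the predicted speedup N × W1 / WN.

```python
def predicted_speedup(densities):
    """Average-case model: W1 = 1/HM(densities), WN = 1/AM(densities),
    speedup = W1 / (WN / N) = N * W1 / WN."""
    N = len(densities)
    W1 = sum(1.0 / r for r in densities) / N      # 1 / harmonic mean
    WN = N / sum(densities)                       # 1 / arithmetic mean
    return N * W1 / WN

print(predicted_speedup([1e-4] * 8))              # uniform densities -> 8.0 (linear)
print(predicted_speedup([8e-4] + [1e-5] * 7))     # skewed densities  -> much larger than 8
```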
The assumption that "the event of each node being a solution is independent of the
event of other nodes being a solution" is unlikely to be true for practical problems. Still, the
above analysis suggests that parallel DFS can obtain higher efficiency than sequential DFS
provided that solutions are not distributed uniformly in the search space, and no information
about densities in different regions is available. This characteristic happens to be true for a
variety of problem spaces searched by simple backtracking.
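A small Monte Carlo experiment over the Bernoulli model makes the same point without the closed forms. The sketch below is my own rendering of the model (not the experimental setup of Section 4.3): per trial it draws, for each region, the index of the first solution leaf, charges sequential DFS the cost of one randomly chosen region, and charges parallel DFS N leaves per step until the earliest first-solution index. The region size K and trial count are arbitrary choices; with the example densities every region almost surely contains a solution.

```python
import math
import random

def first_solution_index(rho, K, rng):
    """1-based index of the first solution among K Bernoulli(rho) leaves, capped at K."""
    u = 1.0 - rng.random()                              # u in (0, 1]
    t = int(math.log(u) / math.log(1.0 - rho)) + 1      # geometric sample
    return min(t, K)

def average_work(densities, K=10**6, trials=5000, seed=0):
    """Estimate W1 (sequential) and WN (total parallel work) under the model."""
    rng = random.Random(seed)
    N = len(densities)
    w1 = wN = 0.0
    for _ in range(trials):
        firsts = [first_solution_index(r, K, rng) for r in densities]
        w1 += firsts[rng.randrange(N)]     # sequential DFS searches one random region
        wN += N * min(firsts)              # parallel DFS: N leaves per step until first success
    return w1 / trials, wN / trials

print(average_work([1e-4] * 8))            # W1 close to WN: linear speedup
print(average_work([8e-4] + [1e-5] * 7))   # W1 much larger than WN: superlinear speedup
```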
4.3 Experimental Results
We present experimental results of the performance of parallel DFS on three problems: (i)
the hacker's problem [31], (ii) the 15-puzzle problem [26], and (iii) the N-queens problem [6].
In all the experiments discussed in this section, both sequential and parallel DFS visit the
newly generated successors of a node in a random order. This is different from "conventional"
DFS in which successors are visited in some statically defined "left to right" order or in
"ordered" DFS in which successors are visited in some heuristic order. We visit the successors
in a random order (rather than left-to-right or heuristic order) because we are trying to
validate our model which assumes that no heuristic information is available to order the
nodes - hence any random order is as good as any other. 3 To get the average run time, each
experiment is repeated many times. Note that, besides the random ordering of successors,
there is another source of variability in execution time of parallel DFS. This is because the
parts of the state-space tree searched by different processors are determined dynamically, and
are highly dependent on run-time events beyond the programmer's control. Hence, for parallel
DFS, each experiment is repeated even more frequently than it is for sequential DFS.
3 As an aside, the reader should note that the ordering information does not always improve the effectiveness
of DFS. For example, experience with the IDA* algorithm on the 15-puzzle problem [11] indicates that
The hacker's problem involves searching a complete binary tree in which some of the leaf
nodes are solution nodes. The path to a solution node represents a correct password among
the various binary sequences of a fixed length. There may be more than one solution, due to
wild card notation. We implemented a program on Sequent Balance 21000 multiprocessor
and experimented with different cases for up to 16 processors. Experiments were done with
two different kinds of trees. In one case, one or more solutions are distributed uniformly in
the whole search space. This corresponds to the case in which the branching points due to
wild cards are up in the tree (more than one solution) or there are no wild cards (exactly one
solution). In this case, as predicted by our analysis, sequential search and parallel search do
approximately the same amount of work, hence the speedup in parallel DFS is linear. This
case corresponds to the curve labeled as 1 in Figure 1. The efficiency starts decreasing beyond
8 processors because communication overhead is higher for larger number of processors.
In the second case, four solutions were distributed uniformly in a small subspace of the
total space and this subspace was randomly located in the whole space. This corresponds
to the case in which branching point due to characters denoted as wild cards are low in
the tree. In this case, as expected, the efficiency of parallel DFS is greater than 1, as the
regions searched by different processors tend to have different solution densities. The results
are shown in Figure 1. The fractions indicated next to each curve denotes the size of the
subspace in which solutions are located. For example, 1/r means that the curve is for the case
in which solutions are located (randomly) only in a 1/r fraction of the space. In Figure 1, the reader
would notice that there is a peak in efficiency at r processors for the case in which solutions
are distributed in a
1/r fraction of the search space (see the curves labeled 1/8 and 1/16). This
happens because it is possible for one of the r processors to receive the region containing
all or most of the solutions, thus giving it a substantially higher density region compared
with other processors. Of course, since the search space in the experiments is distributed
dynamically, this above best case happens only some of the times, and the probability of its
occurrence becomes smaller as the number of processors increases. This is why the heights
of the peaks in the successive curves for decreasing values of 1=r are also decreasing.
the use of the Manhattan distance heuristic for ordering (which is the best known admissible heuristic for
15-puzzle) does not make DFS any better. On the other hand, in problems such as N-queens, the ordering
information improves the performance of DFS substantially[9, 32].
The experiments for 15-puzzle were performed on the BBN Butterfly parallel processor
for up to 9 processors. The experiments involved instances of the 15-puzzle with uniform
distribution and non-uniform distribution of solutions. Depth bounded DFS was used to limit
the search space for each instance. Sequential DFS and parallel DFS were both executed
with a depth-bound equal to the depth of the shallowest solution nodes. The average timings
for sequential and parallel search were obtained by running each experiment 100 times (for
every 15-puzzle instance). Figure 2 shows the average speedups obtained. The instances
with uniform distribution of solutions show a near linear speedup. The maximum deviation
of speedup is indicated by the banded region. The width of the banded region is expected
to reduce if a lot more repetitions (say a 1000 for every instance) were tried. The instances
with non-uniform distribution of solutions give superlinear speedups.
Figure 3 shows the efficiency versus the number of processors for the N-queens problem.
The problem is naturally known to exhibit non-uniformity in solution density [8]. Each data
point shown was obtained by averaging over 100 trials. As we can see, parallel DFS exhibits
better efficiency than sequential DFS. As the number of processors is increased for any fixed
problem size, the efficiency goes down because the overhead for parallel execution masks the
gains due to parallel execution. We expect that on larger instances of such problems, parallel
DFS will exhibit superlinear speedup even for larger number of processors.
All of these experiments confirm the predictions of the model. Superlinear speedup occurs
in parallel DFS if the density of solutions in the regions searched by different processors are
different. Linear speedup occurs when solutions are uniformly distributed in the whole
search space, which means that solution density in regions searched by different processors
in parallel is the same.
Figure 1: Efficiency curves for the hacker's problem (efficiency vs. number of processors N). An efficiency greater than 1 indicates superlinear speedup.
Figure 2: Speedup curves for the 15-puzzle problem (speedup S vs. number of processors N; uniform distribution, non-uniform distribution, and single-solution instances).
Figure 3: Efficiency curves for the N-Queens problem (efficiency vs. number of processors N; 13-, 16-, and 22-queens; linear speedup shown for reference).
5 Speedup in Parallel Ordered Depth-First Search
5.1 Assumptions and Definitions
We are given a balanced binary tree of depth d. The tree contains 2^{d+1} - 1 nodes, of which 2^d
are leaf nodes. Some of the 2^d leaf nodes are solution nodes. We find one of these solutions
by traversing the tree using (sequential or parallel) DFS. A bounding heuristic is available
that makes it unnecessary to search below a non-leaf node in either of the following two
cases:
1. It identifies a solution that can be reached from that node or even identifies the node
to be a solution.
2. It identifies that no solution exists in the subtree rooted at that node, and thus makes
it unnecessary to search below the node.
When a bounding heuristic succeeds in pruning a non-leaf node, there is no need to search
further from that node. (If a node is a non-leaf node and if the bounding heuristic does not
succeed, then the search proceeds as usual under the node.) We characterize a bounding
heuristic by its success rate (1 - ε); i.e., (1 - ε) is the probability with which the procedure succeeds
in pruning a node. For the purposes of our discussion we shall assume 0.5 < ε < 1.
This ensures that the effective branching factor is greater than 1. For ε <= 0.5, the search
complexity becomes insignificant.
Consider a balanced binary tree of depth k that has been pruned using the
bounding heuristic. Let F(k) be the number of leaf nodes in this tree. Clearly, F(k) would
be no more than 2^k.
If our given tree has no solutions, then DFS will visit F (d) leaf nodes. If the tree has one
or more solutions, then DFS will find a solution by visiting fewer leaf nodes than F (d). The
actual number of leaf nodes visited will depend upon the location of the left most solution in
the tree, which in turn will depend upon the order in which successors of nodes are visited.
In an extreme case, if the "correct" successor of each node is visited first by DFS, then a
solution will be found by visiting exactly one leaf node (as the left most node of the search
tree will be a solution). In practice, an ordering heuristic is available that aids us in visiting
the more promising node first and postponing the visit to the inferior one (if necessary) later.
We characterize an ordering heuristic by a parameter γ. The heuristic makes the correct
choice in ordering with a probability γ; i.e., γ-fraction of the time the subtree containing a
solution is visited first, and the remaining (1 - γ)-fraction of the time the subtree containing no
solution is visited first. Obviously, it only makes sense to consider 0.5 <= γ <= 1, since γ < 0.5 means
that the heuristic provides worse information than a random coin toss. When γ = 1.0, the
ordering is perfect and the solution is found after visiting only 1 leaf node.
We shall refer to these trees as OB-trees (ordered-bounded trees), as both ordering and
bounding information is available to reduce the search. To summarize, OB-trees model search
problems where bounding and/or ordering heuristics are available to guide the search and
their error probability is constant. The reader should be cautioned that for some problems,
this may not necessarily be true.
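The OB-tree parameters lend themselves to a quick numerical check. The recurrences below are my own reading of the model in this section (the paper's derivations are in Appendix A, which is not reproduced here): F iterates the expected number of leaves of the pruned tree, and S the expected number of leaves visited by sequential ordered DFS.

```python
def ob_tree_model(d, eps, gamma):
    """Numerically iterate F(k) = (1 - eps) + 2*eps*F(k-1) and
    S(k) = (1 - eps) + eps*(S(k-1) + (1 - gamma)*F(k-1)), with F(0) = S(0) = 1.
    One reading of the OB-tree model; not taken from Appendix A."""
    F = S = 1.0
    for _ in range(d):
        # simultaneous assignment: both right-hand sides use the previous F and S
        F, S = (1 - eps) + 2 * eps * F, (1 - eps) + eps * (S + (1 - gamma) * F)
    return F, S

F, S = ob_tree_model(d=30, eps=0.7, gamma=0.9)
print(F, S, S / F, 1 - 0.9)    # S/F approaches (1 - gamma), cf. Theorem 5.2
```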
5.2 Efficiency Analysis
We now analyze the average number of leaf nodes visited by the sequential and parallel DFS
algorithms on OB-trees. Let S(d) be the average number of leaf nodes (pruned nodes or
terminal nodes) visited by sequential search and P (d) be the sum of the average number of
leaf nodes visited by each processor in parallel DFS.
Theorem 5.1 F(k) ≈ (2ε)^k.
Proof: See the proof of Theorem A.1 in Appendix A.
Theorem 5.2 S(d) <= (1 - γ) F(d).
Proof: See the proof of Theorem A.2 in Appendix A.
Thus the use of ordering and bounding heuristics in sequential DFS cuts down the effective
size of the original search tree by a large factor. The bounding heuristic reduces the effective
branching factor from 2 to approximately 2ε. The ordering heuristic reduces the overall
search effort by a factor of (1 - γ). Even though OB-trees are complete binary trees, a
backtrack algorithm that uses the pruning heuristic reduces the branching factor to less
than 2.
Now let's consider the number of nodes visited by parallel DFS. Clearly, the bounding
heuristic can be used by parallel DFS just as effectively (as it is used in sequential DFS).
However, it might appear that parallel DFS cannot make a good use of the ordering heuristic,
as only one of the processors will work on the most promising part of the space whereas
other processors will work on less promising parts. But the following theorem says that this
intuition is wrong.
Theorem 5.3 For OB-trees, parallel DFS expands no more nodes on the average than sequential
DFS.
Proof: We first consider a two processor parallel DFS and later generalize the result. In
two-processor parallel DFS, the tree is statically partitioned at the root and the processors
search two trees of depth d - 1 independently until at least one of them succeeds. Each
of them individually performs the sequential DFS with the help of bounding and pruning
heuristics. Note that though the second processor violates the advice of the ordering heuristic
at the root node, it follows its advice everywhere else. Consider the case in which the root
node is not pruned by the bounding heuristic. Now there are two possible cases:
Case 1: Solution exists in the left subtree. This case happens γ-fraction of the time.
In this case, sequential DFS visits S(d - 1) leaf nodes on the average, whereas parallel DFS
visits at most 2S(d - 1) leaf nodes. If only the left subtree has a solution, then parallel
DFS visits exactly 2S(d - 1) leaf nodes on the average. Otherwise (if both subtrees have a
solution), then the average work done in parallel DFS will be smaller.
Case 2: Solution does not exist in the left subtree (i.e., it exists in the right subtree). This
case happens (1 - γ)-fraction of the time. In this case, sequential DFS visits F(d - 1) + S(d - 1)
leaf nodes on the average, whereas parallel DFS visits exactly 2S(d - 1) leaf nodes.
Thus γ-fraction of the time parallel DFS visits at most S(d - 1) extra nodes, and (1 - γ)-
fraction of the time it visits F(d - 1) - S(d - 1) fewer nodes than sequential DFS.
Hence, on the average (ignoring the case in which a solution is found at the root itself 4 ),
P(d) - S(d) <= γ S(d - 1) - (1 - γ) (F(d - 1) - S(d - 1)) = S(d - 1) - (1 - γ) F(d - 1) <= 0 by Theorem A.2.
This result is extended to the case where we have 2^a processors performing the parallel
search, in the following theorem.
Theorem 5.4 If P_a is the number of nodes expanded in parallel search with 2^a processors,
where a > 0, then P_a <= P_{a-1}.
Proof: This theorem compares the search efficiency when 2^{a-1} processors are being used to
that when 2^a processors are being used. In the first case, the entire search tree is being split
into 2^{a-1} equal parts near the root and each such part is searched by one processor. In the
second case, each of these is again being split into two equal parts and two processors share
the work that one processor used to do. Let us compare the number of nodes expanded by
one processor in the first case with the corresponding pair of processors in the second case.
We know that the subtree we are dealing with is an OB-tree. Therefore, Theorem 5.3 shows
that the pair of processors do at most as much work as the single processor in the first case.
By summing over all 2^{a-1} parts in the whole tree, the theorem follows. An induction on a
with this theorem shows that P(d) <= S(d) holds for the case of 2^a processors performing the
parallel search.
4 When the root is pruned we can ignore the difference between P(d) and S(d), as the tree has only one
node.
5.2.1 Superlinear Speedup on Hard-to-Solve Instances
Theorem 5.3 has the following important consequence. If we partition a randomly selected
set of problem instances into two subsets such that on one subset the average speedup is
sublinear, then the average speedup on the other one will be superlinear. One such partition
is according to the correctness of ordering near the root. Let us call those instances on
which the ordering heuristic makes correct decision near the root, easy-to-solve instances
and the others, hard-to-solve instances. For sequential DFS, easy-to-solve instances take
smaller time to solve than hard-to-solve instances.
For the 2-processor case, easy-to-solve instances are the γ-fraction of the total instances
in which the ordering heuristic makes the correct decision at the root. On these, the parallel version
obtains an average speedup of 1 (i.e., no speedup). On the remaining instances, the average
speedup is roughly (2 - γ)/(1 - γ), which can be arbitrarily high depending upon how close γ is to 1.
On 2^a processors, the easiest-to-solve instances are the γ^a-fraction of the total instances
on which sequential search makes the correct decision on the first a branches starting at the
root. The maximum superlinearity is available on the hardest-to-solve instances, which are a
(1 - γ)^a-fraction of the total instances.
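The following small simulation is our own illustration of these claims, not part of the original analysis. It assumes the simplest version of the model: no bounding or pruning (ρ = 1), a complete binary OB-tree of depth 20 with a single solution leaf, the ordering heuristic correct at each node independently with probability γ, and two processors expanding one leaf per step in lockstep after a static split at the root. It reports the ratio of average sequential to average parallel (wall-clock) search lengths separately for easy instances (ordering correct at the root) and hard instances (ordering wrong at the root).

#include <stdio.h>
#include <stdlib.h>

#define DEPTH  20
#define TRIALS 200000
#define GAMMA  0.9      /* probability the ordering heuristic is correct at a node */

/* Rank of the single solution leaf in the heuristic (recommended-child-first)
 * order: bit i of the rank is 1 exactly when the heuristic errs at depth i. */
static unsigned long random_solution_rank(void) {
    unsigned long rank = 0;
    for (int i = 0; i < DEPTH; i++)
        rank = (rank << 1) | (unsigned long)(((double)rand() / RAND_MAX) >= GAMMA);
    return rank;
}

int main(void) {
    unsigned long half = 1UL << (DEPTH - 1);
    double seq_easy = 0, par_easy = 0, seq_hard = 0, par_hard = 0;
    srand(1);
    for (long t = 0; t < TRIALS; t++) {
        unsigned long r = random_solution_rank();
        double seq = (double)(r + 1);        /* leaves visited by sequential ordered DFS */
        if (r < half) {                      /* easy: solution in the recommended half   */
            seq_easy += seq;
            par_easy += (double)(r + 1);     /* parallel wall-clock steps */
        } else {                             /* hard: solution in the other half         */
            seq_hard += seq;
            par_hard += (double)(r - half + 1);
        }
    }
    printf("easy instances: speedup = %.2f\n", seq_easy / par_easy);
    printf("hard instances: speedup = %.2f  (prediction (2-gamma)/(1-gamma) = %.2f)\n",
           seq_hard / par_hard, (2.0 - GAMMA) / (1.0 - GAMMA));
    return 0;
}

With γ = 0.9 the easy-instance ratio is exactly 1 and the hard-instance ratio comes out close to the predicted value of 11.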
5.3 Experimental Results
The problem we chose to experiment with is the test generation problem that arises in
computer-aided design (CAD) for VLSI. The problem of Automatic Test Pattern Generation
(ATPG) is to obtain a set of logical assignments to the inputs of an integrated circuit that
will distinguish between a faulty and fault-free circuit in the presence of a set of faults. An
input pattern is said to be a test for a given fault if, in the presence of the fault, it produces
an output that is different for the faulty and fault-free circuits. We studied sequential and
parallel implementations of an algorithm called PODEM (Path-Oriented Decision Making
[3]) used for combinational circuits (and for sequential circuits based on the level-sensitive
scan design approach). This is one of the most successful algorithms for the problem and
it is widely used. The number of faults possible in a circuit is proportional to the number
of signal lines in it. It is known that the sequential algorithm is able to generate tests for
more than 90% of the faults in reasonable time but spends an enormous amount of time
(much more than 90% of execution time) trying to generate tests for the remaining faults.
As a result, the execution of the algorithm is terminated when it fails to generate a test
after a predefined number of node expansions or backtracks. Those faults that cannot be
solved in reasonable time by the serial algorithm are called hard-to-detect (HTD) faults[27].
In practice, it is very important to generate tests for as many faults as possible. Higher
fault coverage results in more reliable chips. The ATPG problem fits the model we have
analyzed very well for the following reasons: (i) the search tree generated is binary; (ii) For
a non-redundant fault, the problem typically has one or a small number of solutions; (iii)
a good but imperfect ordering heuristic is available; (iv) a bounding heuristic is available
which prunes the search below a node when either the pruned node itself is a solution or it
can no longer lead to solutions.
Our experiments with the ATPG problem support our analysis that on hard-to-solve
instances, the parallel algorithm shows a superlinear speedup. We implemented sequential and
parallel versions of PODEM on a 128-processor Symult 2010 multiprocessor. We performed
an experiment using the ISCAS-85 benchmark files as test data. More details on our implementation
and experimental results can be found in [1]. Our experiments were conducted as
follows. The HTD faults were first filtered out by picking those faults from the seven files
whose test patterns could not be found within 25 backtracks using the sequential algorithm.
The serial and parallel PODEM algorithms were both used to find test patterns for these
HTD faults. Since some of these HTD faults may not be solvable (by the sequential and/or
parallel PODEM algorithm) in a reasonable time, an upper limit was imposed on the total
number of backtracks that a sequential or parallel algorithm could make. If the sequential or
parallel algorithm exceeded this limit (the sum of backtracks made by all processors being
counted in the parallel case), then the algorithm was aborted, and the fault was classified as
undetectable (for that backtrack limit). Time taken by purely sequential PODEM and the
parallel PODEM were used to compute the speedup. These results are shown in Figure 4
for each circuit in the ISCAS-85 benchmark. In these experiments, 25600 was used as the
upper limit for backtracks.
To test the variation of superlinearity with hardness of faults, we selected two sets of
faults. The first set consisted of those faults that the serial algorithm was able to solve after
executing a total number of backtracks (node expansions) in the range 1600-6400. Similarly,
Figure 4: Speedup curves for the ATPG problem (speedup vs. number of processors, one curve per ISCAS-85 circuit).
the second set of faults was solved by the serial algorithm in the backtrack range 6400-25600.
The faults in the second set were thus harder to solve for the serial algorithm. Two of the
seven files, namely c499 and c1355, did not yield any faults for either of the two sets. We
executed the parallel algorithm for 16 to 128 processors and averaged the speedups obtained
for a given number of processors separately for the two sets of faults. The run-time for each
fault was itself the average obtained over 10 runs. These results are shown in Figure 5.
From all these results, it is clear that superlinearity increases with the increasing hardness of
instances. The degree of superlinearity decreases with the increasing number of processors
because the efficiency of parallel DFS decreases if the problem size is fixed and the number
of processors is increased[16].
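For reference, the standard definitions behind this statement (these are textbook definitions, not taken from [16]): if T_1 is the sequential execution time and T_p the execution time on p processors for the same instance, then

\[
\text{speedup}(p) \;=\; \frac{T_1}{T_p}, \qquad \text{efficiency}(p) \;=\; \frac{\text{speedup}(p)}{p}.
\]

Superlinear speedup means speedup(p) > p, i.e., efficiency(p) > 1. With the problem size held fixed, per-processor overheads grow as p grows, so efficiency, and with it the margin by which it can exceed 1, shrinks as processors are added.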
Note that the above experimental results validate only the discussion in Section 5.2.1.
To validate Theorem 5.3, it would be necessary to find the number of nodes expanded by
parallel DFS even for easy to detect faults. For such faults, experimental run time will not
be roughly proportional to the number of nodes searched by parallel DFS because for small
trees communication overhead becomes significant. Superlinearity for hard-to-detect faults
was experimentally observed for other ATPG heuristics by Patil and Banerjee in [27].
6 Related Research
The occurrence of speedup anomalies in simple backtracking was studied in [22] and [8].
Monien et al. [22] studied a parallel formulation of DFS for solving the satisfiability problem.
In this formulation, each processor tries to prove the satisfiability of a different subformula of
the input formula. Due to the nature of the satisfiability problem, each of these subformulas
leads to a search space with a different average density of solutions. These differing solution
densities are responsible for the average superlinear speedup. In the context of a model,
Monien et al. showed that it is possible to obtain (average) superlinear speedup for the SAT
problem. Our analysis of simple backtracking (in Section 4) is done for a similar model,
but our results are general and stronger. We had also analyzed the average case behavior
of parallel (simple) backtracking in [24]. The theoretical results we present here are much
stronger than those in [24]. In [24], we showed that if the regions searched by a few of the
processors had all the solutions uniformly distributed and the regions searched by all the
rest of the processors had no solutions at all, the average speedup in parallel backtracking
would be superlinear. Our analysis in Section 4 shows that any non-uniformity in solution
densities among the regions searched by different processors leads to a superlinear speedup
on the average. The other two types of heuristic DFS algorithms we discuss are outside the
scope of both [22] and [24].
Figure 5: Speedup curves for hard-to-solve instances of the ATPG problem (speedup vs. number of processors, for the 1600-6400 and 6400-25600 backtrack ranges).
If the search space is searched in a random fashion (i.e., if newly generated successors
of a node are ordered randomly), then the number of nodes expanded before a solution
is found is a random variable (let's call it T(1)). One very simple parallel formulation of
DFS presented in [21, 8] is to let the same search space be searched by many processors
in an independent random order until one of the processors finds a solution. The total
number of nodes expanded by a processor in this formulation is again a random variable
(let's call it T(N)). Clearly, T(N) = min{V_1, V_2, ..., V_N}, where each V_i is a random variable
with the same distribution as T(1). If the average value of T(N) is less than 1/N times the
average value of T(1), then also we can expect superlinear speedup [21, 8]. For certain
distributions of T(1), this happens to be the case. For example, if the probability of finding
a solution at any level of the state-space tree is the same, then T(1) has this property [21].
Note that our parallel formulation of DFS dominates the one in [21, 8]
in terms of efficiency, as in our parallel formulation there is no duplication of work. Hence,
our parallel formulation will exhibit superlinear speedup on any search space for which the
formulation in [21, 8] exhibits superlinear speedup, but the converse is not true.
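A standard illustration of the property cited above (ours, not a restatement of the argument in [21]): if T(1) were exponentially distributed with mean m, the minimum of N independent copies would again be exponential, with

\[
E[T(N)] \;=\; E\bigl[\min\{V_1,\dots,V_N\}\bigr] \;=\; \frac{m}{N} \;=\; \frac{E[T(1)]}{N},
\]

so N independent random searches would, in total, do exactly the work of one. For a heavier-tailed distribution the minimum shrinks faster than 1/N; for example, if the survival function is Pr[T(1) > t] = (1 + t)^{-a} with a > 1, then E[T(1)] = 1/(a - 1) while E[T(N)] = 1/(Na - 1) < E[T(1)]/N for N > 1, which is the situation in which the randomized formulation of [21, 8] yields superlinear expected speedup.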
For certain problems, probabilistic algorithms[29] 5 can perform substantially better than
simple backtracking. For example, this happens for problems in which the state-space tree
is a balanced binary tree (like in the Hacker's Problem discussed in Section 4.3), and if
the overall density of solutions among the leaf nodes is relatively high but solutions are
distributed nonuniformly. Probabilistic search performs better than simple backtracking for
these problems because it can make the density of solutions at the leaf nodes look virtually
uniform. The reader can infer from the analysis in Section 4 that for these kinds of search
spaces, parallel DFS also obtains similar homogenization of solution density, even though
each processor still performs enumeration of (a part of) the search space. The domain of
applicability of the two techniques (probabilistic algorithms vs sequential or parallel DFS)
however is different. When the depth of leaf nodes in a tree varies, a probabilistic search
algorithm visits shallow nodes much more frequently than deep nodes. In the state-space
tree of many problems (such as the N-queens problem), shallow nodes correspond to failure
nodes, and solution nodes are located deep in the tree. For such problems, probabilistic
algorithms do not perform as well as simple backtracking, as they visit failure nodes more
frequently. (Note that the simple backtracking will visit a failure node exactly once.) When
the density of solutions among leaf nodes is low, the expected running time of a probabilistic
algorithm can also be very high. (In the extreme case, when there is no solution, the
probabilistic search will never terminate, whereas simple backtracking and our parallel DFS
will.) For these cases also, an enumerative search algorithm such as simple backtracking
5. A probabilistic algorithm for state-space search can be obtained by generating random walks from the root
node to leaf nodes until a solution is found.
is superior to a probabilistic algorithm, and the parallel variant retains the advantages of
homogenization.
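The contrast can be made concrete with a small sketch. The code below is our illustration only: the implicit complete binary tree, the single hypothetical solution leaf, and the is_solution predicate are assumptions, not part of the discussion above. It enumerates leaves once by simple backtracking and, separately, probes random root-to-leaf walks as in the probabilistic algorithm of footnote 5; the walk-based search may revisit leaves and would never terminate if no solution existed, whereas backtracking visits each failure leaf exactly once and always terminates.

#include <stdio.h>
#include <stdlib.h>

#define D 20                                     /* depth of the implicit binary tree */
static unsigned long solution = 123456;          /* hypothetical single solution leaf */
static unsigned long visits;                     /* leaves examined so far            */

static int is_solution(unsigned long leaf) { visits++; return leaf == solution; }

/* Simple backtracking: enumerate the leaves of the subtree reached by `prefix`
 * with `depth` levels still to go; each leaf is examined at most once. */
static int backtrack(unsigned long prefix, int depth) {
    if (depth == 0)
        return is_solution(prefix);
    return backtrack(prefix << 1, depth - 1) || backtrack((prefix << 1) | 1, depth - 1);
}

/* Probabilistic search (footnote 5): repeatedly walk from the root to a random
 * leaf; leaves may be examined many times. */
static void random_walks(void) {
    for (;;) {
        unsigned long leaf = 0;
        for (int i = 0; i < D; i++)
            leaf = (leaf << 1) | (unsigned long)(rand() & 1);
        if (is_solution(leaf))
            return;
    }
}

int main(void) {
    visits = 0;
    (void)backtrack(0, D);
    printf("backtracking examined %lu leaves\n", visits);
    visits = 0;
    srand(1);
    random_walks();
    printf("random walks examined %lu leaves\n", visits);
    return 0;
}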
For ordered DFS, randomized parallel DFS algorithms such as those given in [21, 8]
will perform poorly, as they are not able to benefit from the ordering heuristic. Probabilistic
algorithms have the same weakness. For decision problems, our analysis in Section 5 shows
that the utilization of ordering heuristic cuts the search down by a large factor for both the
sequential DFS and our parallel DFS. In the case of optimization problems (i.e., when we
are interested in finding a least-cost solution), randomized parallel DFS algorithms in [21, 8]
as well as probabilistic algorithms are not useful. One cannot guarantee the optimality of a
solution unless an exhaustive search is performed.
Saletore and Kale [30] present a parallel formulation of DFS which is quite different from
the ones in [22, 23, 2]. Their formulation explicitly ensures that the number of nodes searched
by sequential and parallel formulations are nearly equal. The results of our paper do not
apply to their parallel DFS.
In [5], a general model for explaining the occurrence of superlinear speedups in a variety
of search problems is presented. It is shown that if the parallel algorithm performs less work
than the corresponding sequential algorithm, superlinear speedup is possible. In this paper,
we identified and analyzed problems for which this is indeed the case.
7 Conclusions
We presented analytical models and theoretical results characterizing the average case behavior
of parallel backtrack search (DFS) algorithms. We showed that on average, parallel
DFS does not show deceleration anomalies for two types of problems. We also presented
experimental results validating our claims on multiprocessors. Further, we identified certain
problem characteristics which lead to superlinear speedups in parallel DFS. For problems
with these characteristics, the parallel DFS algorithm is better than the sequential DFS algorithm
even when it is time-sliced on one processor. While isolated occurrences of speedup
anomalies in parallel DFS had been reported earlier by various researchers, no experimental
or analytical results showing the possibility of superlinear speedup on the average (with the
exception of results in [22, 24]) were available for parallel DFS.
A number of questions need to be addressed by further research. On problems for which
sequential DFS is dominated by parallel DFS but not by any other search technique, what is
the best possible sequential search algorithm? Is it the one derived by running parallel DFS on
one processor in a time-slicing mode? If yes, then what is the optimum number of processors
to emulate in this mode? In the case of ordered backtrack search, we showed that parallel
search is more efficient on hard-to-solve instances while sequential search is more efficient
on easy-to-solve instances. In practice, one should therefore use a combination of sequential
and parallel search. What is the optimal combination? This paper analyzed the efficiency
of parallel DFS for certain models. It would be interesting to perform similar analysis for
other models and also for other parallel formulations of backtrack search such as those given
in [7] and in [30].
Acknowledgements
We would like to thank Sunil Arvindam and Hang He Ng for
helping us with some of the experiments. We would also like to thank Dr. James C. Browne
and Dr. Vineet Singh for many helpful discussions.
APPENDICES
A Details of Analysis for Ordered DFS Algorithms.
Theorem A.1 gives a closed-form expression for F(k), the average number of leaves visited in an
unsuccessful search of a subtree of depth k.
Proof: It is clear that F(0) = 1. Now consider the case k ≥ 1. With probability 1 - ρ the root node
is pruned, and thus F(k) = 1. With the remaining probability ρ, the root node is not pruned and has
2 successors; in this case F(k) = 2F(k - 1). Hence, we have the recurrence for F(k) and, by solving
it, the closed form of the theorem.
The derivation of S(d) is similar: if the bounding heuristic succeeds at the root, DFS visits only one
leaf; otherwise it visits only the left subtree at the root a γ fraction of the time, or it visits the left
subtree unsuccessfully and then visits the right subtree a (1 - γ) fraction of the time. Hence, we have
a recurrence for S(d). For d ≫ 1 (a moderate d suffices), using Theorem A.1, this can be simplified:
the ρ^(d+1) term can be ignored under the assumption d ≫ 1, and a further term
becomes negligible when ρ is larger than 0.5, say for ρ ≥ 0.55.
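Written out explicitly, one plausible reading of the recurrences sketched above is the following (this is our reconstruction, under the assumptions that F(k) and S(k) count the leaves visited in an unsuccessful and a successful search, respectively, of a depth-k subtree, and that ρ is the probability that a node is not pruned):

\[
F(0) = 1, \qquad F(k) \;=\; (1-\rho)\cdot 1 \;+\; \rho\cdot 2F(k-1) \quad (k \ge 1),
\]
\[
S(d) \;=\; (1-\rho)\cdot 1 \;+\; \rho\Bigl[\gamma\,S(d-1) \;+\; (1-\gamma)\bigl(F(d-1)+S(d-1)\bigr)\Bigr].
\]

Solving the first recurrence gives F(k) = (ρ(2ρ)^k - (1 - ρ))/(2ρ - 1), which for ρ > 0.5 and large k behaves like ρ(2ρ)^k/(2ρ - 1); substituting this into the second recurrence and dropping the negligible terms is presumably the simplification referred to in the text.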
Theorem A.3 bounds P(d), the average number of nodes expanded by two-processor parallel DFS; the
error ignored is small.
Proof: In two-processor parallel depth-first search, the tree is statically partitioned at the root
and the processors search two trees of depth d - 1 independently until at least one of them
succeeds. Each of them individually performs the sequential DFS consulting the three wise
oracles (the ordering, bounding, and pruning heuristics). Note that though the second processor
violates the advice of the ordering heuristic at the root node, it follows its advice everywhere
else. Hence, a (1 - ρ) fraction of the time the root is pruned and, as a liberal convention, we count
two nodes as expanded in the parallel search. Otherwise, two OB-trees of depth d - 1 are searched,
two nodes being expanded in each step, until one of the processors succeeds.
The inequality arises because the average of the minimum of two trials is not more than
the minimum of the two averages. From the formula for S(d), using Theorem A.1, the resulting
expression can be simplified; the error ignored here is less than 1. From Theorem A.1, the
inequality becomes an equality if we restrict the entire search tree to contain only one solution.
Theorem A.4 On the average, parallel search visits the same number of nodes as sequential
search on a large, randomly selected set of problem instances with single solutions.
Proof: See the above arguments.
--R
Automatic test pattern generation on multiprocessors.
An implicit enumeration algorithm to generate tests for combinatorial logic circuits.
Experimental evaluation of load balancing techniques for the hypercube.
Modeling speedup (n) greater than n.
Fundamentals of Computer Algorithms.
A parallel searching scheme for multiprocessor systems and its application to combinatorial problems.
Randomized parallel algorithms for prolog programs and backtracking applications.
A perfect heuristic for the n non-attacking queens problem
Search in Artificial Intelligence.
Personal Communication.
The use of parallelism to implement a heuristic search.
Scalable load balancing techniques for parallel computers.
Parallel branch-and-bound formulations for and/or tree search
Scalable parallel formulations of depth-first search
Parallel depth-first search
Anomalies in parallel branch and bound algorithms.
Wah Computational efficiency of parallel approximate branch-and-bound algorithms
A shared virtual memory system for parallel computing.
Superlinear speedup through randomized algorithms.
Superlinear speedup for parallel back- tracking
Parallel depth-first search
Superlinear speedup in state-space search
A parallel implementation of iterative-deepening-a*
Principles of Artificial Intelligence.
A parallel branch-and-bound algorithm for test generation
Probabilistic algorithms.
Consistent linear speedup to a first solution in parallel state-space search
The average complexity of depth-first search with backtracking and cutoff
Efficient search techniques - an empirical study of the n-queens problem
Performance and pragmatics of an OR-parallel logic programming system
--TR
Heuristics: intelligent search strategies for computer problem solving
The average complexity of depth-first search with backtracking and cutoff
DIB: a distributed implementation of backtracking
Principles of artificial intelligence
Performance of an OR-parallel logic programming system
Parallel depth first search. Part I. implementation
Parallel depth first search. Part II. analysis
Search in Artificial Intelligence
An almost perfect heuristic for the <italic>N</italic> nonattacking queens problem
Scalable parallel formulations of depth-first search
Anomalies in parallel branch-and-bound algorithms
Fundamentals of Computer Algorithms
Automatic Test Pattern Generation on Multiprocessors
Superlinear Speedup in Parallel State-Space Search
--CTR
Jung , M. S. Krishnamoorthy , George Nagy , Andrew Shapira, N-Tuple Features for OCR Revisited, IEEE Transactions on Pattern Analysis and Machine Intelligence, v.18 n.7, p.734-745, July 1996
Finkelstein , Shaul Markovitch , Ehud Rivlin, Optimal schedules for parallelizing anytime algorithms: the case of independent processes, Eighteenth national conference on Artificial intelligence, p.719-724, July 28-August 01, 2002, Edmonton, Alberta, Canada
Fumiaki Okushi, Parallel cooperative propositional theorem proving, Annals of Mathematics and Artificial Intelligence, v.26 n.1-4, p.59-85, 1999
James Cheetham , Frank Dehne , Andrew Rau-Chaplin , Ulrike Stege , Peter J. Taillon, Solving large FPT problems on coarse-grained parallel machines, Journal of Computer and System Sciences, v.67 n.4, p.691-706, December
Ariel Felner , Sarit Kraus , Richard E. Korf, KBFS: K-Best-First Search, Annals of Mathematics and Artificial Intelligence, v.39 n.1-2, p.19-39, September
Daniel J. Challou , Maria Gini , Vipin Kumar , George Karypis, Predicting the Performance of Randomized Parallel An Application to Robot Motion Planning, Journal of Intelligent and Robotic Systems, v.38 n.1, p.31-53, September
G. Karypis , V. Kumar, Unstructured tree search on SIMD parallel computers: a summary of results, Proceedings of the 1992 ACM/IEEE conference on Supercomputing, p.453-462, November 16-20, 1992, Minneapolis, Minnesota, United States
Wei-Ming Lin , Wei Xie , Bo Yang, Performance analysis for parallel solutions to generic search problems, Proceedings of the 1997 ACM symposium on Applied computing, p.422-430, April 1997, San Jose, California, United States
Andrea Di Blas , Arun Jagota , Richard Hughey, Optimizing neural networks on SIMD parallel computers, Parallel Computing, v.31 n.1, p.97-115, January 2005
G. Karypis , V. Kumar, Unstructured Tree Search on SIMD Parallel Computers, IEEE Transactions on Parallel and Distributed Systems, v.5 n.10, p.1057-1072, October 1994
Ananth Grama , Vipin Kumar, State of the Art in Parallel Search Techniques for Discrete Optimization Problems, IEEE Transactions on Knowledge and Data Engineering, v.11 n.1, p.28-35, January 1999
Peter A. Krauss , Andreas Ganz , Kurt J. Antreich, Distributed Test Pattern Generation for Stuck-At Faults in Sequential Circuits, Journal of Electronic Testing: Theory and Applications, v.11 n.3, p.227-245, Dec. 1997
Lucas Bordeaux , Youssef Hamadi , Lintao Zhang, Propositional Satisfiability and Constraint Programming: A comparative survey, ACM Computing Surveys (CSUR), v.38 n.4, p.12-es, 2006 | parallel algorithms;parallel backtracking;speedup;heuristic backtracking;search problems;backtrack search algorithms;simple backtracking |
629185 | A Unified Formalization of Four Shared-Memory Models | The authors present a data-race-free-1, shared-memory model that unifies four earlier models: weak ordering, release consistency (with sequentially consistent special operations), the VAX memory model, and data-race-free-0. Data-race-free-1 unifies the models of weak ordering, release consistency, the VAX, and data-race-free-0 by formalizing the intuition that if programs synchronize explicitly and correctly, then sequential consistency can be guaranteed with high performance in a manner that retains the advantages of each of the four models. Data-race-free-1 expresses the programmer's interface more explicitly and formally than weak ordering and the VAX, and allows an implementation not allowed by weak ordering, release consistency, or data-race-free-0. The implementation proposal for data-race-free-1 differs from earlier implementations by permitting the execution of all synchronization operations of a processor even while previous data operations of the processor are in progress. To ensure sequential consistency, two synchronizing processors exchange information to delay later operations of the second processor that conflict with an incomplete data operation of the first processor. | Introduction
A memory model for a shared-memory multiprocessor system is a formal specification of how memory
operations in a program will appear to execute to the programmer. In particular, a memory model specifies the
values that may be returned by read operations executed on a shared-memory system. This paper presents a new
memory model, data-race-free-1, that unifies four earlier models. 1 Although the four models are very similar, each
model has different advantages and disadvantages for programmers and system designers.
Data-race-free-1 unifies the four models by retaining all the advantages of the four models.
Most uniprocessors provide a simple memory model that ensures that memory operations will appear to
execute one at a time, in the order specified by the program (program order). Thus, a read returns the value from
the last write (in program order) to the same location. To improve performance, however, uniprocessors often
allow memory operations to overlap other memory operations and to be issued and executed out of program order.
Uniprocessors use interlock logic to maintain the programmer's model of memory (that memory operations
appear to execute one at a time, in program order). This model of uniprocessor memory, therefore, has the
advantage of simplicity and yet allows for high performance optimizations.
The most commonly (and often implicitly) assumed memory model for shared-memory multiprocessor systems
is sequential consistency, formalized by Lamport [21] as follows.
Definition 1.1: [A multiprocessor system is sequentially consistent if and only if] the result of any
execution is the same as if the operations of all the processors were executed in some sequential
order, and the operations of each individual processor appear in this sequence in the order specified
by its program.
In other words, a sequentially consistent multiprocessor appears like a multiprogrammed uniprocessor [24].
Although sequential consistency retains the simplicity of the uniprocessor memory model, it limits performance
by preventing the use of several optimizations. Figure 1 shows that in multiprocessor systems, both with
and without caches, common uniprocessor hardware optimizations, such as write buffers, overlapped memory
operations, out-of-order memory operations, and lockup-free caches [20], can violate sequential consistency.
These optimizations significantly improve performance and will become increasingly important in the future as
processor cycle times decrease and memory latencies increase [13]. Gharachorloo et al. have described
1. An earlier version of this work appears in the Proceedings of the 17th Annual International Symposium on Computer
Architecture, June 1990 [1]. The data-race-free-1 memory model developed in this paper extends the data-race-free-0
model of [1] by distinguishing unpaired synchronization operations from paired release and acquire synchronization
operations. The definition of data-race-free-1 in Section 2 uses the notions of how different operations are distinguished, when
the distinction is correct, the synchronization-order-1 and happens-before-1 relations, and data races. These notions are extensions
of similar concepts developed for data-race-free-0. Also, in parallel with our work on this paper, we published a
technique for detecting data races on a data-race-free-1 system [2]. Consequently, [2] reviews data races and the
data-race-free-1 memory model, and contains the definitions (in slightly different form) of Section 2. This material is used in Section
2 with the permission of the ACM.
mechanisms that allow these optimizations to be used with the sequential consistency model, but the mechanisms
require hardware support for prefetching and rollback [12].
Initially X = Y = 0
P 1 : X = 1; r1 = Y.      P 2 : Y = 1; r2 = X.
Result: r1 == 0 and r2 == 0.
Figure 1. A violation of sequential consistency. 2
X and Y are shared variables and r1 and r2 are local registers. The execution depicted above violates sequential
consistency since no total order of memory operations consistent with program order lets both P 1 and P 2 return 0
for their reads on Y and X. Note that neither processor has data dependencies among its instructions; therefore,
simple interlock logic will not preclude either processor from issuing its second instruction before the first.
Shared-bus systems without caches - The execution is possible if processors issue memory operations out of order
or allow reads to pass writes in write buffers.
Systems with general interconnection networks without caches - The execution is possible even if processors
issue memory operations in program order, if the operations reach memory modules in a different order [21].
Shared-bus systems with caches - Even with a cache coherence protocol [6], the execution is possible if processors
issue memory operations out-of-order or allow reads to pass writes in write buffers.
Systems with general interconnection networks and caches - The execution is possible even if memory operations
are issued and reach memory modules in program order, if they do not complete in program order. Such a situation
can arise if both processors initially have X and Y in their caches, and a processor issues its read before its
write propagates to the cache of the other processor.
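The example of Figure 1 can also be written down as an ordinary program. The version below is our illustration: it uses C11 relaxed atomics precisely so that the compiler and hardware remain free to apply the kinds of reorderings just discussed. Under sequential consistency at least one of r1 and r2 must be 1; nothing in this program, however, forbids the outcome r1 == r2 == 0, although whether a particular machine exhibits it in a given run is another matter.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

static atomic_int X, Y;          /* shared variables, initially 0 */
static int r1, r2;               /* stand-ins for the local registers */

static void *p1(void *arg) {
    (void)arg;
    atomic_store_explicit(&X, 1, memory_order_relaxed);
    r1 = atomic_load_explicit(&Y, memory_order_relaxed);
    return NULL;
}

static void *p2(void *arg) {
    (void)arg;
    atomic_store_explicit(&Y, 1, memory_order_relaxed);
    r2 = atomic_load_explicit(&X, memory_order_relaxed);
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    atomic_init(&X, 0);
    atomic_init(&Y, 0);
    pthread_create(&t1, NULL, p1, NULL);
    pthread_create(&t2, NULL, p2, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    printf("r1 = %d, r2 = %d\n", r1, r2);   /* r1 == r2 == 0 would violate sequential consistency */
    return 0;
}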
Alternate memory models have been proposed to improve the performance of shared-memory systems. To
be useful, the new models should satisfy the following properties: (1) the model should be simple for programmers
to use, and (2) the model should allow high performance. The central assumption of this work is that most
programmers prefer to reason with the sequential consistency model since it is a natural extension of the well-understood
uniprocessor model. Therefore, one way in which a memory model can satisfy the first property is to
appear sequentially consistent to most programs and to formally characterize this group of programs. A memory
model can satisfy the second property by allowing all high performance optimizations that guarantee sequential
consistency for this group of programs.
One group of programs for which it is possible to guarantee sequential consistency and still use many
optimizations is programs that explicitly distinguish between synchronization memory operations (operations used
to order other operations) and data memory operations (operations used to read and write data). This dichotomy
2. Figure 1 is a modified version of Figure 1 in [1] and is presented with the permission of the IEEE.
between memory operations is the motivation behind the four models of weak ordering [9], release consistency
with sequentially consistent special operations (henceforth called release consistency) [11], the VAX [8], and
data-race-free-0 (originally called ordering with respect to data-race-free-0) [1].
Although the four memory models are very similar, small differences in their formalization lead to differences
in the way they satisfy the above two properties. Weak ordering [9] and release consistency [11] restrict
hardware to actually execute specific memory operations in program order. For programmers, the authors of weak
ordering later stated that mutual exclusion should be ensured for each access to a shared variable by using
constructs such as critical sections, which are implemented with hardware-recognizable synchronization operations
[10, 26]. The authors of release consistency formalize a group of programs called properly labeled programs,
for which release consistency ensures sequential consistency. A properly labeled program distinguishes its
memory operations depending on their use. For example, it distinguishes synchronization operations from ordinary
data operations. The VAX and data-race-free-0 models differ from weak ordering and release consistency
by avoiding explicit restrictions on the actual order of execution of specific memory operations. In the VAX
architecture handbook [8], the data sharing and synchronization section states the following. "Accesses to explicitly
shared data that may be written must be synchronized. Before accessing shared writable data, the programmer
must acquire control of the data structure. Seven instructions are provided to permit interlocked access to a
control variable." Data-race-free-0 [1] states that sequential consistency will be provided to data-race-free pro-
grams. A data-race-free program (discussed formally in Sections 2 and distinguishes between synchronization
operations and data operations and ensures that conflicting data operations do not race (i.e., cannot execute con-
currently). For programs that contain data races, data-race-free-0 does not guarantee the behavior of the
hardware.
The different formalizations of the four models result in some models satisfying the simplicity or the high-performance
property better than other models; however, no model satisfies both properties better than all other
models. For example, the VAX imposes the least restrictions on hardware, but its specification is less explicit and
formal than the other models. Consider the statement, "before accessing shared writable data, the programmer
must acquire control of the data structure." Does this allow concurrent readers? Further, how will hardware
behave if programs satisfy the specified conditions? Although it may be possible to answer these questions from
the VAX handbook, a more explicit and formal interface would allow a straightforward and unambiguous resolution
of such questions. Release consistency, on the other hand, provides a formal and explicit interface. However,
as Section 4 will show, the hardware requirements of release consistency are more restrictive than necessary.
This paper defines a new model, data-race-free-1, which unifies the weak ordering, release consistency,
VAX, and data-race-free-0 models in a manner that retains the advantages of each of the models for both the
programmer and the hardware designer. The following summarizes how data-race-free-1 unifies the four models
and how it overcomes specific disadvantages of specific models.
For a programmer, data-race-free-1 unifies the four models by explicitly addressing two questions: (a)
when is a program correctly synchronized? and (b) how does hardware behave for correctly synchronized
programs? Data-race-free-1 answers these questions formally, but the intuition behind the answers is simple: (a) a
program is correctly synchronized if none of its sequentially consistent executions have a data race (i.e.,
conflicting data operations do not execute concurrently), and (b) for programs that are correctly synchronized, the
hardware behaves as if it were sequentially consistent. This viewpoint is practically the same as that provided by
release consistency and data-race-free-0. However, it is more explicit and formal than weak ordering and the
VAX (e.g., it allows concurrent readers because they do not form a data race).
For a hardware designer, data-race-free-1 unifies the four models because (as will be shown in Section 4.1)
implementing any of the models is sufficient to implement data-race-free-1. Furthermore, data-race-free-1 is less
restrictive than either weak ordering, release consistency, or data-race-free-0 for hardware designers since there
exists an implementation of data-race-free-1 that is not allowed by weak ordering, release consistency, or data-
race-free-0. The new implementation (described in Section 4) differs from implementations of weak ordering and
release consistency by allowing synchronization operations to execute even while previous data operations of the
synchronizing processors are incomplete. To achieve sequential consistency, processors exchange information at
the time of synchronization that ensures that a later operation that may conflict with an incomplete data operation
is delayed until the data operation completes. The new implementation differs from implementations of data-
race-free-0 by distinguishing between different types of synchronization operations.
The rest of the paper is organized as follows. Section 2 defines data-race-free-1. Sections 3 and 4 compare
data-race-free-1 with the weak ordering, release consistency, VAX, and data-race-free-0 models from the
viewpoint of a programmer and hardware designer respectively. Section 5 relates data-race-free-1 to other
models. Section 6 concludes the paper.
2. The Data-Race-Free-1 Memory Model
Section 2.1 first clarifies common terminology that will be used throughout the paper and then informally
motivates the data-race-free-1 memory model. Section 2.2 gives the formal definition of data-race-free-1. Data-
race-free-1 is an extension of our earlier model data-race-free-0 [1].
2.1. Terminology and Motivation for Data-Race-Free-1
The rest of the paper assumes the following terminology. The terms system, program, and operations (as in
definition 1.1 of sequential consistency) can be used at several levels. This paper discusses memory models at the
lowest level, where the system is the machine hardware, a program is a set of machine-level instructions, and an
operation is a memory operation that either reads a memory location (a read operation) or modifies a memory
location (a write operation) as part of the machine instructions of the program. The program order for an execution
is a partial order on the memory operations of the execution imposed by the program text [27]. The result of
an execution refers to the values returned by the read operations in the execution. A sequentially consistent execution
is an execution that could occur on sequentially consistent hardware. Two memory operations conflict if at
least one of them is a write and they access the same location [27].
The motivation for data-race-free-1, which is similar to that for weak ordering, release consistency, the
VAX model and data-race-free-0, is based on the following observations made in [1]. 3 Assuming processors
maintain uniprocessor data and control dependencies, sequential consistency can be violated only when two or
more processors interact through memory operations on common locations. These interactions can be classified
as data memory operations and synchronization memory operations. Data operations are usually more frequent
and involve reading and writing of data. Synchronization operations are usually less frequent and are used to
order conflicting data operations from different processors. For example, in the implementation of a critical section
using semaphores, the test of the semaphore and the unset or clear of the semaphore are synchronization
operations, while the reads and writes in the critical section are data operations.
Additionally, synchronization operations can be characterized as paired acquire and release synchronization
operations or as unpaired synchronization operations as follows. (The characterization is similar to that used for
properly labeled programs for release consistency [11]; Section 3 discusses the differences.) In an execution, consider
a write and a read synchronization operation to the same location, where the read returns the value of the
write, and the value is used by the reading processor to conclude the completion of all memory operations of the
writing processor that were before the write in the program. In such an interaction, the write synchronization
operation is called a release, the read synchronization operation is called an acquire, and the release and acquire
are said to be paired with each other. A synchronization operation is unpaired if it is not paired with any other
synchronization operation in the execution. For example, consider an implementation of a critical section using
semaphores, where the semaphore is tested with a test&set instruction and is cleared with an unset instruction.
The write due to an unset is paired with the test that returns the unset value; the unset write is a release operation
3. The observations are paraphrased from [1] with the permission of the IEEE.
and the test read is an acquire operation because the unset value returned by the test is used to conclude the completion
of the memory operations of the previous invocation of the critical section. The write due to a set of a
test&set and a read due to the test of a test&set that returns the set value are unpaired operations; such a read is
not an acquire and the write is not a release because the set value does not communicate the completion of any
previous memory operations.
As will be illustrated by Section 4, it is possible to ensure sequential consistency by placing most hardware
restrictions only on the synchronization operations. Further, of the synchronization operations, the paired operations
require more restrictions. Thus, if hardware could distinguish the type of an operation, it could complete
data operations faster than all the other operations, and unpaired synchronization operations faster than the paired
synchronization operations, without violating sequential consistency. A data-race-free-1 system gives programmers
the option of distinguishing the above types of operations to enable higher performance.
2.2. Definition of Data-Race-Free-1
Section 2.1 informally characterized memory operations based on the function they perform, and indicated
that by distinguishing memory operations based on this characterization, higher performance can be obtained
without violating sequential consistency. This section first discusses how the memory operations can be distinguished
based on their characterization on a data-race-free-1 system, and then gives the formal criterion for
when the operations are distinguished correctly for data-race-free-1. The section concludes with the definition of
the data-race-free-1 memory model.
Data-race-free-1 does not impose any restrictions on how different memory operations may be
distinguished. One option for distinguishing data operations from synchronization operations is for hardware to provide
different instructions that may be used for each type of operation. For example, only special instructions
such as Test&Set and Unset may be used to generate synchronization operations. Alternatively, only operations
to certain memory-mapped locations may be distinguished as synchronization operations. One way of distinguishing
between paired and unpaired synchronization operations is for hardware to provide special instructions
for synchronization operations and a static pairable relation on those instructions; a write and a read in an execution
are distinguished by the hardware as paired release and acquire if they are generated by instructions related
by the pairable relation, and if the read returns the value of the write. Figure 2 gives examples of different instructions
and the pairable relation, and illustrates their use.
The following discusses when a programmer distinguishes operations correctly for data-race-free-1. If the
operations are distinguished exactly according to their function outlined in Section 2.1, then the distinction is
indeed correct. However, data-race-free-1 does not require a programmer to distinguish operations to match their
Figure 2. Synchronization instructions and the pairable relation for different systems. (The body of the
figure, which showed the two pairable-relation tables, the Test&Set/Unset critical-section code, the
Fetch&Inc/SyncWrite/SyncRead barrier code, and one sample execution of each, is described in detail in the
text that follows.)
Figures 2a and 2b represent two systems with different sets of instructions that can be used for synchronization
operations. For each system, the figure shows the different synchronization operations and the pairable relation,
along with programs and executions that use these operations. The table in each figure lists the read synchronization
operations (potential acquires) horizontally, and the write synchronization operations (potential releases) verti-
cally. A '-' indicates that the synchronization operations of the corresponding row and column are pairable; they
will be paired in an execution if the read returns the value written by the write in that execution. The executions
occur on sequentially consistent hardware and their operations execute in the order shown. op,x denotes an operation
op on location x. DataRead and DataWrite denote data operations. The Test&Set and Fetch&Inc [17] instructions
are defined to be atomic instructions. Their read and write operations are represented together as Test&Set,x
or Fetch&Inc,x. Paired operations are connected with arrows.
Figure 2a shows a system with the Test&Set and Unset instructions, which are useful to implement a critical
section. A Test&Set atomically reads a memory location and updates it to the value 1. An Unset updates a memory
location to the value 0. A write due to an Unset and a read due to a Test&Set are pairable. The figure shows code
for a critical section and its execution involving two processors.
Figure 2b shows a system with the Fetch&Inc [17], SyncWrite and SyncRead instructions, which are useful to implement
a barrier. Fetch&Inc atomically reads and increments a memory location, SyncWrite is a synchronization
write that updates a memory location to the specified value, and SyncRead is a synchronization read of a memory
location. A write due to a Fetch&Inc is pairable with a read due to another Fetch&Inc and a write due to a
SyncWrite is pairable with a read due to a SyncRead. Also shown is code where N processors synchronize on a
barrier [23], and its execution for 2. The variable local_flag is implemented in a local register of the processor
and operations on it are not shown in the execution.
function exactly. In the absence of precise knowledge regarding the function of an operation, a programmer can
conservatively distinguish an operation as a synchronization operation even if the operation actually performs the
function of a data operation. Sequential consistency will still be guaranteed although the full performance potential
of the system may not be exploited. Henceforth, the characterization of an operation will be the one distinguished
by the programmer (which may be different from that based on the actual function the operation performs).
For example, an operation that is actually a data operation, but for which the programmer uses a synchronization
instruction, will be referred to as a synchronization operation.
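A compilable rendition of the two idioms described in the Figure 2 caption may be useful. The sketch below is ours: it uses C11 atomics and POSIX threads in place of the figure's machine instructions, and the mapping of Test&Set, Unset, Fetch&Inc, SyncWrite, and SyncRead onto atomic library calls is an assumption rather than anything prescribed by the paper. The Unset and SyncWrite stores play the role of releases; the successful Test&Set and the SyncRead loads play the role of acquires.

#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#define N 4                          /* number of processors (threads) */

static atomic_int s = 0;             /* semaphore guarding the critical section */
static atomic_int count = 0;         /* barrier counter                         */
static atomic_int flag = 0;          /* barrier flag (release/acquire pair)     */
static int shared_data = 0;          /* a data location protected by s          */

static void critical_section(void) {
    while (atomic_exchange(&s, 1))   /* Test&Set,s : acquire when it returns 0 */
        ;                            /* spin */
    shared_data++;                   /* data ops in critical section */
    atomic_store(&s, 0);             /* Unset,s : release */
}

static void barrier(void) {
    int local_flag = !atomic_load(&flag);          /* sense reversal via SyncRead,flag */
    if (atomic_fetch_add(&count, 1) + 1 == N) {    /* Fetch&Inc,count */
        atomic_store(&count, 0);
        atomic_store(&flag, local_flag);           /* SyncWrite,flag : release */
    } else {
        while (atomic_load(&flag) != local_flag)   /* SyncRead,flag : acquire */
            ;                                      /* spin */
    }
}

static void *worker(void *arg) {
    (void)arg;
    critical_section();              /* data ops before the barrier */
    barrier();
    return NULL;                     /* data ops after the barrier would follow here */
}

int main(void) {
    pthread_t t[N];
    for (int i = 0; i < N; i++) pthread_create(&t[i], NULL, worker, NULL);
    for (int i = 0; i < N; i++) pthread_join(t[i], NULL);
    printf("shared_data = %d (expected %d)\n", shared_data, N);
    return 0;
}

Because C11 atomics default to sequentially consistent ordering, this rendition treats the synchronization operations as sequentially consistent special operations, in the spirit of the models discussed in this paper.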
Intuitively, operations are distinguished correctly for data-race-free-1 if sufficient synchronization operations
are distinguished as releases and acquires. The criteria for sufficiency is that if an operation is distinguished
as data, then it should not be involved in a race; i.e, the program should be data-race-free. The notion of a data
race is formalized by defining a happens-before-1 relation for every execution of a program as follows.
The happens-before-1 relation for an execution is a partial order on the memory operations of the execution.
Informally, happens-before-1 orders two operations initiated by different processors only if paired release and
acquire operations execute between them. Definition 2.2 formalizes this intuition by using the program order and
the synchronization-order-1 relations (Definition 2.1).
Definition 2.1: In an execution, memory operation S 1 is ordered before memory operation S 2 by the
synchronization-order-1 relation if and only if S 1 is a release operation, S 2 is an acquire operation,
and S 1 and S 2 are paired with each other.
Definition 2.2: The happens-before-1 relation for an execution is the irreflexive transitive closure of
the program order and synchronization-order-1 relations for the execution.
The definitions of a data race, a data-race-free program and the data-race-free-1 model follow.
Definition 2.3: A data race in an execution is a pair of conflicting operations, at least one of which is
data, that is not ordered by the happens-before-1 relation defined for the execution. An execution is
data-race-free if and only if it does not have any data races. A program is data-race-free if and only
if all its sequentially consistent executions are data-race-free.
Definition 2.4: Hardware obeys the data-race-free-1 memory model if and only if the result of
every execution of a data-race-free program on the hardware can be obtained by an execution of the
program on sequentially consistent hardware.
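In symbols, and purely as a restatement of Definitions 2.1-2.4 (no new conditions are introduced): writing po for program order and so1 for synchronization-order-1 in a given execution,

\[
\text{so1} \;=\; \bigl\{(S_1, S_2) \;:\; S_1 \text{ is a release, } S_2 \text{ is an acquire, and } S_1, S_2 \text{ are paired}\bigr\},
\qquad
\text{hb1} \;=\; (\text{po} \cup \text{so1})^{+}.
\]

A pair of conflicting operations, at least one of which is data, is a data race exactly when neither operation is ordered before the other by hb1; a program is data-race-free when no sequentially consistent execution contains such a pair; and hardware is data-race-free-1 when every execution of a data-race-free program yields a result obtainable on sequentially consistent hardware.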
Figures 3a and 3b illustrate executions that respectively exhibit and do not exhibit data races. The execution
in Figure 3a is an implementation of the critical section code in Figure 2a, except that the programmer used a data
operation instead of the Unset synchronization operation for P 0 's write on s. Therefore, happens-before-1
does not order P 0 's write on x and P 1 's read on x. Since the write and read on x conflict and are both data opera-
tions, they form a data race. For similar reasons, P 0 's data write on s forms a data race with P 1 's test, set and
data write on s. Figure 3b shows an execution of the barrier code of Figure 2b. The execution is data-race-free
because happens-before-1 orders all conflicting pairs of operations, where at least one of the pair is data.
Note that the execution of Figure 3b does not use critical sections and therefore data-race-free-1 does not
require that all sharing be done through critical sections. Also note that in programs based on asynchronous algorithms
some operations access data, but are not ordered by synchronization. For such programs to be data-
race-free, these operations also need to be distinguished as synchronization operations.
Figure 3. Executions that (a) exhibit and (b) do not exhibit data races. (Execution (a) runs the critical-section
code of Figure 2a with P 0 's Unset replaced by a data write; execution (b) runs the barrier code of Figure 2b for
N = 2. The program-order and synchronization-order-1 arcs of the two executions are as described in the
surrounding text.)
As discussed in Section 2.1, the definition of data-race-free-1 assumes a program that uses machine instructions
and hardware-defined synchronization primitives. However, programmers using high-level parallel programming
languages can use data-race-free-1 by extending the definition of data-race-free to high-level programs
(as discussed for data-race-free-0 in [1]). The extension is straightforward, but requires high-level parallel
languages to provide special constructs for synchronization, e.g., semaphores, monitors, fork-joins, and task
rendezvous. Data-race-free-1 does not place any restrictions on the high-level synchronization mechanisms. It is the
responsibility of the compiler to ensure that a program that is data-race-free at the high-level compiles into one
that is data-race-free at the machine-level, ensuring sequential consistency to the programmer.
3. Data-Race-Free-1 vs. Weak Ordering, Release Consistency, the VAX Model, and Data-Race-Free-0 for
Programmers
This section compares the data-race-free-1 memory model to weak ordering, release consistency, the VAX,
and data-race-free-0 from a programmer's viewpoint. As stated earlier, the central assumption of this work is that
most programmers prefer to reason with sequential consistency. For such programmers, data-race-free-1 provides
a simple model: if the program is data-race-free, then hardware will appear sequentially consistent.
Both weak ordering and the VAX memory model state that programs have to obey certain conditions for
hardware to be well-behaved. However, sometimes further interpretation may be needed to deduce whether a program
obeys the required conditions (as in the concurrent readers case of Section 1), and how the hardware will
behave for programs that obey the required conditions. Data-race-free-1 expresses both these aspects more explicitly
and formally than weak ordering and the VAX: data-race-free-1 states that a program should be data-race-
free, and hardware appears sequentially consistent to programs that are data-race-free.
Data-race-free-0 and release consistency provide a formal interface for programmers. Data-race-free-1 provides
a similar interface with a few minor differences. The programs for which data-race-free-0 ensures sequential
consistency are also called data-race-free programs [1]. The difference is that data-race-free-0 does not distinguish
between different synchronization operations; it effectively pairs all conflicting synchronization operations
depending on the order in which they execute. This distinction does not significantly affect programmers, but can
be exploited by hardware designers.
The programs for which release consistency ensures sequential consistency are called properly labeled programs
[11]. All data-race-free programs are properly labeled, but there are some properly labeled programs that
are not data-race-free (as defined by Definition 2.4) [15]. The difference is minor and arises because properly
labeled programs have a less explicit notion of pairing. They allow conflicting data operations to be ordered by
operations (nsyncs) that correspond to the nonpairable synchronization operations of data-race-free-1. Although a
memory model that allows all hardware that guarantees sequential consistency to properly labeled programs has
not been formally described, such a model would be similar to data-race-free-1 because of the similarity between
data-race-free and properly labeled programs.
A potential disadvantage of data-race-free-1 relative to weak ordering and release consistency is for programmers
of asynchronous algorithms that do not rely on sequential consistency for correctness [7]. Weak ordering
and release consistency provide such programmers the option of reasoning with their explicit hardware conditions
and writing programs that are not data-race-free, but work correctly and possibly faster. Data-race-free-1 is
based on the assumption that programmers prefer to reason with sequential consistency. Therefore, it does not restrict
the behavior of hardware for a program that is not data-race-free. Nevertheless, for maximum performance,
programmers of asynchronous algorithms could deal directly with specific implementations of data-race-free-1.
This would entail some risk of portability across other data-race-free-1 implementations, but would enable future
faster implementations for the other, more common programs.
To summarize, for programmers, data-race-free-1 is similar to release consistency and data-race-free-0, but
provides a more explicit and formal interface than weak ordering and the VAX model. Previous work discusses
how the requirement of data-race-free programs for all the above models is not very restrictive for programmers
[1, 11], and how data races [2] or violations of sequential consistency due to data races [14] may be dynamically
detected with these models.
4. Data-Race-Free-1 vs. Weak Ordering, Release Consistency, the VAX Model, and Data-Race-Free-0 for
Hardware Designers
This section compares data-race-free-1 to weak ordering, release consistency, the VAX model, and data-
race-free-0 from a hardware designer's viewpoint. It first shows that data-race-free-1 unifies the four models for a
hardware designer because any implementation of weak ordering, release consistency, the VAX model, or data-
race-free-0 obeys data-race-free-1 (Section 4.1). It then shows that data-race-free-1 is less restrictive than weak
release consistency, and data-race-free-0 for a hardware designer because data-race-free-1 allows an
implementation not allowed by weak ordering, release consistency, or data-race-free-0 (Section 4.2).
4.1. Data-Race-Free-1 Unifies Weak Ordering, Release Consistency, the VAX Model, and Data-Race-
Free-0 for Hardware Designers
For a hardware designer, data-race-free-1 unifies release consistency, data-race-free-0, weak ordering, and
the VAX model because any implementation of any of the four models obeys data-race-free-1. Specifically,
# all implementations of release consistency obey data-race-free-1 because, as discussed in Section 3, all
implementations of release consistency ensure sequential consistency to all data-race-free programs;
# all implementations of data-race-free-0 obey data-race-free-1 because, again as discussed in Section 3, all
implementations of data-race-free-0 ensure sequential consistency to all data-race-free programs;
# all implementations of weak ordering obey data-race-free-1 because our earlier work shows that all implementations
of weak ordering obey data-race-free-0 [1], and from the above argument, all implementations of
data-race-free-0 obey data-race-free-1;
# data-race-free-1 formalizes the VAX model; therefore, all implementations of the VAX model obey data-
race-free-1.
4.2. Data-Race-Free-1 is Less Restrictive than Weak Ordering, Release Consistency, or Data-Race-Free-0
for Hardware Designers
Data-race-free-1 is less restrictive for a hardware designer to implement than either weak ordering, release
consistency, or data-race-free-0 because data-race-free-1 allows an implementation that is not allowed by weak
release consistency, or data-race-free-0. Figure 4 motivates such an implementation. The figure shows
part of an execution in which two processors execute the critical section code of Figure 2a. Processors P 0 and P 1
Test&Set s until they succeed, execute data operations (including one on location x), and finally Unset s. The critical
section code is data-race-free; therefore, its executions on a data-race-free-1 implementation should appear
sequentially consistent. In the execution of Figure 4, P 0 's Test&Set succeeds first. Therefore, P 1 's Test&Set
succeeds only when it returns the value written by P 0 's Unset. Thus, to appear sequentially consistent, P 1 's data
read of x should return the value written by P 0 's data write of x. Figure 4 shows how implementations of weak
ordering, release consistency, and data-race-free-1 can achieve this.
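The critical-section code of Figure 2a is not reproduced in this excerpt; the following C fragment is a minimal sketch of the kind of code the execution in Figure 4 assumes, with an atomic flag standing in for the synchronization variable s (the flag operations correspond to Test&Set,s and Unset,s, and the access to x is the data operation).

#include <stdatomic.h>

static atomic_flag s = ATOMIC_FLAG_INIT;   /* synchronization variable s      */
static int x;                              /* shared data location x          */

void critical_section(int v)
{
    while (atomic_flag_test_and_set(&s))   /* Test&Set,s: retry until success */
        ;                                  /* spin                            */
    x = v;                                 /* data operation on x             */
    atomic_flag_clear(&s);                 /* Unset,s                         */
}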
Figure 4. Implementations of memory models. (The figure shows P 0 and P 1 each executing the critical-section
code: Test&Set,s; data operations, including DataWrite,x and DataRead,x; Unset,s. The arcs are program order (po)
and synchronization order (so1); WO = weak ordering, RC = release consistency, DRF1 = data-race-free-1.
Annotations: WO stalls P 0 until DataWrite completes; RC delays Unset until DataWrite completes; DRF1 need
never stall P 0 nor delay its operations; WO and RC stall P 1 for Unset and (therefore for) DataWrite; DRF1 stalls
P 1 only for Unset and delays DataRead until DataWrite completes.)
Both weak ordering and release consistency require P 0 to delay the execution of its Unset until P 0 's data
write completes (i.e., is seen by all processors). However, this delay is not necessary to maintain sequential consistency
(as also observed by Zucker [28]), and it is not imposed by the implementation proposal for data-race-
free-1 described next. Instead, the implementation maintains sequential consistency by requiring that P 0 's data
write on x completes before P 1 executes its data read on x. It achieves this by ensuring that (i) when P 1 executes
its Test&Set, P 0 notifies P 1 about its incomplete write on x, and (ii) P 1 delays its read on x until P 0 's write on x
completes.
With the new optimization, P 0 can execute its Unset earlier and P 1 's Test&Set can succeed earlier than
with weak ordering or release consistency. Thus, P 1 's reads and writes following its Test&Set (by program order)
that do not conflict with previous operations of P 0 will also complete earlier. Operations such as the data read on
x that conflict with previous operations of P 0 may be delayed until P 0 's corresponding operation completes.
Nevertheless, such operations can also complete earlier than with weak ordering and release consistency. For
example, if P 1 's read on x occurs late enough in the program, P 0 's write may already be complete before P 1
executes the read; therefore, the read can proceed without any delay. Recently, an implementation of release
consistency has been proposed that uses a rollback mechanism to let a processor conditionally execute its reads
following its acquire (such as P 1 's Test&Set) before the acquire completes [12]; our optimization will benefit
such implementations also because it allows the writes following the acquire to be issued and completed earlier,
and lets the reads following the acquire to be committed earlier.
The data-race-free-1 implementation differs from data-race-free-0 implementations because data-race-free-1
distinguishes between the Unset and Test&Set synchronization operations and can take different actions for
each; data-race-free-0 does not make such distinctions.
Section 4.2.1 describes a sufficient condition for implementing data-race-free-1 based on the above motiva-
tion. Section 4.2.2 gives a detailed implementation proposal based on these conditions.
4.2.1. Sufficient Conditions for Data-Race-Free-1
Hardware obeys the data-race-free-1 memory model if the result of any execution of a data-race-free program
on the hardware can be obtained by a sequentially consistent execution of the program. The result of an
execution is the set of values its read operations return (Section 2.1). The value returned by a read is the value
from the write (to the same location) that was seen last by the reading processor. Thus, the value returned by a
read depends on the order in which the reading processor sees its read with respect to writes to the same location;
i.e., the order in which a processor sees conflicting operations. Thus, hardware is data-race-free-1 if it obeys the
following conditions.
Conditions: Hardware is data-race-free-1 if for every execution, E, of a data-
race-free program on the hardware, (i) the operations of execution E are the same as those of some
sequentially consistent execution of the program, and (ii) the order in which two conflicting operations
are seen by a processor in execution E is the same as in that sequentially consistent execution.
(A processor sees a write when a read executed by the processor to the same location as the write will return the
value of that or a subsequent write. A processor sees a read when the read returns its value. These notions are
similar to those of "performed with respect to a processor" and "performed" [9].)
The following gives three requirements (data, synchronization, and control) that are together sufficient for
hardware to satisfy the data-race-free-1 conditions, and therefore to obey data-race-free-1.
The data requirement pertains to all pairs of conflicting operations of a data-race-free program, where at
least one of the operations is a data operation. In an execution on sequentially consistent hardware, such a pair of
operations is ordered by the happens-before-1 relation of the execution, and is seen by all processors in that
order. The data requirement for an execution on data-race-free-1 hardware is that all such pairs
of operations continue to be seen by all processors in the happens-before-1 order of the execution. This requirement
ensures that in Figure 4, P 1 sees P 0 's write of x before the read of x. Based on the discussion of Figure 4,
the data requirement conditions below meet the data requirement for a pair of conflicting operations from different
processors. For conflicting operations from the same processor, it is sufficient to maintain intra-processor data
dependencies. The conditions below assume these are maintained.
In the rest of this section, preceding and following refer to the ordering by program order. An operation,
either synchronization or data, completes (or performs [9]) when it is seen (as defined above) by all processors.
Data Requirement Conditions: Let Rel and Acq be release and acquire operations issued by processors
P rel and P acq respectively. Let Rel and Acq be paired with each other.
Pre-Release Condition - When P rel issues Rel, it remembers the operations preceding Rel that are
incomplete.
Release-Acquire Condition - (i) Before Acq completes, P rel transfers to P acq the addresses and identity
of all its remembered operations. (ii) Before Acq completes, Rel completes and all operations
transferred to P rel (on P rel 's acquires preceding Rel) complete.
Post-Acquire Condition - Let Acq precede Y (by program order) and let the operation X be transferred
to P acq on Acq. (i) Before Y is issued, Acq completes. (ii) If X and Y conflict, then before Y is issued,
X completes.
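The bookkeeping these conditions require can be pictured in software. The C fragment below is only a model under simplifying assumptions: operations are identified by integer addresses, "conflict" is approximated as "same address", and the names OpSet, pre_release, release_acquire_transfer, and post_acquire_must_delay are ours, not the paper's.

#include <stdbool.h>

#define MAX_OPS 64

typedef struct { int addr[MAX_OPS]; int n; } OpSet;  /* addresses of operations */

/* Pre-Release Condition: when the release is issued, remember the
 * incomplete operations that precede it.                                     */
void pre_release(const OpSet *incomplete, OpSet *remembered)
{
    *remembered = *incomplete;
}

/* Release-Acquire Condition (part i): before the paired acquire completes,
 * transfer the remembered addresses to the acquiring processor.              */
void release_acquire_transfer(const OpSet *remembered, OpSet *received)
{
    *received = *remembered;
}

/* Post-Acquire Condition (part ii): an operation Y following the acquire
 * must be delayed if it conflicts with a transferred operation X that has
 * not yet completed.                                                          */
bool post_acquire_must_delay(const OpSet *received, int y_addr)
{
    for (int i = 0; i < received->n; i++)
        if (received->addr[i] == y_addr)
            return true;                   /* delay Y until X completes        */
    return false;
}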
The data requirement conditions can be proved correct by showing that they ensure that if X and Y are
conflicting operations from different processors and happens-before-1 orders X before Y, then X completes before
any processor sees Y. This implies that all processors see X before Y, meeting the data requirement. For the execution
in Figure 4, the pre-release condition ensures that when P 0 executes its Unset, it remembers that DataWrite,x
is incomplete. The release-acquire condition ensures that when P 1 executes its successful Test&Set, P 0
transfers the address of x to P 1 . The post-acquire condition ensures that P 1 detects that it has to delay
DataRead,x until DataWrite,x completes and enforces the delay. Thus, DataRead,x returns the value written by
DataWrite,x.
Besides the data requirement, the data-race-free-1 conditions also require that the order in which two
conflicting synchronization operations are seen by a processor is as on sequentially consistent hardware. This is
the synchronization requirement. The data and synchronization requirements would suffice to satisfy the data-
race-free-1 conditions if they also guaranteed that for any execution, E, on hardware that obeyed these
requirements, there is some sequentially consistent execution with the same operations, the same happens-before-
1, and the same order of execution of conflicting synchronization operations as E. In the absence of control flow
operations (such as branches), the above is automatically ensured. In the presence of control flow operations, how-
ever, an extra requirement, called the control requirement, is needed to ensure the above [3].
Weak ordering, release consistency, and all proposed implementations of data-race-free-0 satisfy the synchronization
requirement explicitly and the control requirement implicitly (by requiring "uniprocessor control
dependencies" to be maintained). Since the key difference between implementations of the earlier models and
the new implementation of data-race-free-1 is in the data requirement, the following describes an implementation
proposal only for the data requirement conditions. In [3], we formalize the above three requirements and give
explicit conditions for the synchronization and control requirements. A conservative way to satisfy the synchronization
requirement is for a processor to also stall the issue of a synchronization operation until the completion of
preceding synchronization operations and the write operations whose values are returned by preceding synchronization
read operations. A conservative way to satisfy the control requirement is for a processor to also block on a
read that controls program flow until the read completes.
Note that further optimizations on the data requirement conditions and on the implementation of the following
section are possible [3]. For example, for the release-acquire condition, the acquire can complete even while
operations transferred to the releasing processor are incomplete, as long as the releasing processor transfers the
identity of those incomplete operations to the acquiring processor. For the post-acquire condition, it is not necessary
to delay an operation (Y) following an acquire until a conflicting operation (X) transferred to the acquiring
processor completes. Instead, it is sufficient to delay Y only until X is seen by the acquiring processor, as long as a
mechanism (such as a cache-coherence protocol) ensures that all writes to the same location are seen in the same
order by all processors. Thus, the releasing processor can also transfer the values to be written by its incomplete
writes. Then reads following an acquire can use the transferred values and need not be delayed.
4.2.2. An Implementation Proposal for Data-Race-Free-1 that does not obey Weak Ordering, Release Con-
sistency, or Data-Race-Free-0
This section describes an implementation proposal for the data requirement conditions. The proposal
assumes an arbitrarily large shared-memory system in which every processor has an independent cache and processors
are connected to memory through an arbitrary interconnection network. The proposal also assumes a
directory-based, writeback, invalidation, ownership, hardware cache-coherence protocol, similar in most respects
to those discussed by Agarwal et al. [4]. One significant feature of the protocol is that invalidations sent on a
write to a line in read-only or shared state are acknowledged by the invalidated processors.
The cache-coherence protocol ensures that (a) all operations are eventually seen by all processors, (b) writes
to the same location are seen in the same order by all processors, and (c) a processor can detect when an operation
it issues is complete. For (c), most operations complete when the issuing processor receives the requested line in
its cache. However, a write (data or synchronization) to a line in read-only or shared state completes when all
invalidated processors send their acknowledgements. (Either the writing processor may directly receive the ack-
nowledgements, or the directory may collect them and then forward a single message to the writing processor to
indicate the completion of the write.)
The implementation proposal involves adding the following four features to a uniprocessor-based processor
logic and the base cache-coherence logic mentioned above. (Tables 1 and 2 summarize these features.)
# Addition of three buffers per processor - incomplete, reserve, and special (Table 1),
# Modification of issue logic to delay the issue of or stall on certain operations (Table 2(a)),
# Modification of cache-coherence logic to allow a processor to retain ownership of a line in the processor's
reserve buffer and to specially handle paired acquires to such a line (Table 2(b)),
# Addition of a new processor-to-processor message called "empty special buffer" (Table 2(c)).
The discussion below explains how the above features can be used to implement the pre-release, release-
acquire, and post-acquire parts of the data requirement conditions. (Recall that "preceding" and "following"
refer to the ordering by program order.)
For the pre-release condition, a processor must remember which operations preceding its releases are
incomplete. For this, a processor uses its incomplete buffer to store the address of all its incomplete data opera-
tions. A release is not issued until all preceding synchronization operations complete (to prevent deadlock) and all
preceding data operations are issued. Thus, the incomplete buffer remembers all the operations required by the
pre-release condition. (To distinguish between operations preceding and following a release, entries in the incomplete
buffer may be tagged or multiple incomplete buffers may be used.)
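A minimal software rendering of this incomplete-buffer bookkeeping is sketched below; the fixed buffer size, the integer addresses, and the function names are our assumptions, and a real implementation would additionally tag entries by the release they precede, as noted above.

#define INC_MAX 128

typedef struct {
    int addr[INC_MAX];        /* addresses of incomplete data operations      */
    int n;
} IncompleteBuffer;

/* On a data miss: remember the address until the operation completes.        */
void incomplete_insert(IncompleteBuffer *b, int address)
{
    if (b->n < INC_MAX)
        b->addr[b->n++] = address;
    /* on overflow the processor would simply stall until an entry is freed   */
}

/* When the data operation completes (e.g., all invalidation acknowledgements
 * for a write have arrived): drop the entry.                                 */
void incomplete_delete(IncompleteBuffer *b, int address)
{
    for (int i = 0; i < b->n; i++)
        if (b->addr[i] == address) {
            b->addr[i] = b->addr[--b->n];
            return;
        }
}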
For the release-acquire condition, an acquire cannot complete until the following have occurred regarding
the release paired with the acquire: (a) release is complete, (b) all operations received by the releasing processor
on its acquires preceding the release are complete, and (c) the releasing processor transfers to the new acquiring
processor the addresses of all incomplete operations preceding the release. For this purpose, every processor uses
a reserve buffer to store the processor's releases for which the above conditions do not hold. On a release (which
is a write operation), the releasing processor procures ownership of the released line. The processor does not give
up its ownership while the address of the line is in its reserve buffer. Consequently, the cache-coherence protocol
forwards subsequent requests to the line, including acquires that will be paired with the release, to the releasing
Buffer | Contents | Purpose
Incomplete | Incomplete data operations (of this processor) | Used to remember incomplete operations (of this processor) preceding a release (of this processor).
Reserve | Releases (of this processor) for which there are incomplete operations | Used to remember releases (of this processor) that may cause future paired acquires (of other processors) to need special attention.
Special | Incomplete operations (of another processor) received on an acquire (by this processor) | Used to identify if an operation (of this processor) requires special action due to early completion of an acquire (of this processor).
(a) Contents and purpose of buffers

Buffer | Insertion event | Entry inserted | Deletion event | Entry deleted
Incomplete | Data miss | Address of data operation | Data miss completes | Address of data operation
Reserve | Release issued | Address of release operation | Release completes, operations preceding release complete (i.e., deleted from incomplete buffer), and special buffer empties | Address of release operation
Special | Acquire completes | Addresses received on acquire | "Empty special buffer" message arrives | All entries
(b) Insertion and deletion actions for buffers

Table 1. Key buffers for aggressive implementation of data-race-free-1.
processor. The releasing processor can now stall the acquires paired with the release until conditions (a), (b), and
(c) above are met.
Table
2(b) gives the details of how the base cache-coherence logic can be modified to allow a releasing processor
to retain ownership of the released line in its reserve buffer, and to service acquires paired with the release
only when (a), (b), and (c) above are met. To retain ownership of a released line, the releasing processor stalls
release operations from other processors to the same line and performs a remote service for other external requests
to the same line. The remote service mechanism allows the releasing processor to service the requests of other
Operation | Address in special buffer? | Action
Data or unpaired synchronization | No | Process as usual.
Release | No | Issue after all previous operations are issued and all previous synchronization operations complete.
Acquire | No | Issue after special buffer empties and stall until acquire completes.
Any | Yes | Stall or delay issue of only this operation until special buffer empties.
(a) Modification to issue logic

Request | Address in reserve buffer? | Action
Requests by this processor:
Any | No | Process as usual.
Any read or write | Yes | Process as usual.
Cache line replacement | Yes | Stall processor until address is deleted from reserve buffer.
Requests from other processors forwarded to this processor:
Any | No | Process as usual.
Release | Yes | Stall request until address is deleted from reserve buffer.
Acquire | Yes | Stall request until special buffer empties and paired release (in reserve buffer) completes; send to acquiring processor the released line and entries of incomplete buffer tagged as preceding the release; request acquiring processor to not cache the line; inform directory that this processor is retaining ownership.
Data or unpaired synchronization | Yes | If read request, send line to other processor; if write request, update line in this processor's cache and send acknowledgement to other processor; request other processor to not cache the line; inform directory that this processor is retaining ownership.
(b) Modification to cache-coherence logic at processor

Event | Message
All incomplete buffer entries corresponding to a release deleted | Send "empty special buffer" message to processors that executed acquires paired with the release.
(c) New processor-to-processor message

Table 2. Aggressive implementation of data-race-free-1.
processors without allowing those processors to cache the line. The mechanisms of stalling operations for an
external release and remote service for other external operations are both necessary. This is because stalling data
operations can lead to deadlock and servicing external release operations remotely would not let the new releasing
processors procure ownership of the line as required for the release-acquire condition. Meeting conditions (a),
(b), and (c) above requires the processor to wait for its release to complete and its special buffer to empty, and to
transfer contents of its incomplete buffer to the acquiring processor.
For the post-acquire condition, a processor must (a) stall on an acquire until it completes, and (b) delay a
following operation until the completion of any conflicting operation transferred to it on the acquire. For this pur-
pose, a processor uses a special buffer to save all the information transferred to it on an acquire. If a following
operation conflicts with an operation stored in the special buffer, the processor can either (a) stall or (b) delay only
this operation, until it receives an "empty special buffer" message from the releasing processor. The releasing
processor sends the "empty special buffer" message when it deletes the address of the release paired with the
acquire from its reserve buffer. For simplicity, an acquiring processor can also stall on the acquire until its special
buffer empties, to avoid the complexity of having to delay an operation for incomplete operations of multiple processors.
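The acquiring processor's side of this mechanism can be sketched as follows (a model with assumed names and a fixed-size buffer; Table 2(a) states the authors' actual issue-logic rules). An operation whose address appears in the special buffer is held back until the "empty special buffer" message clears the buffer.

#include <stdbool.h>

#define SPECIAL_MAX 128

typedef struct {
    int addr[SPECIAL_MAX];    /* addresses received on an acquire             */
    int n;                    /* n == 0 means the special buffer is empty     */
} SpecialBuffer;

/* Issue logic: may an operation to 'address' be issued now?                  */
bool may_issue(const SpecialBuffer *sb, int address)
{
    for (int i = 0; i < sb->n; i++)
        if (sb->addr[i] == address)
            return false;     /* stall or delay just this operation           */
    return true;
}

/* On receipt of the "empty special buffer" message from the releasing
 * processor: delete all entries.                                              */
void on_empty_special_buffer(SpecialBuffer *sb)
{
    sb->n = 0;
}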
This completes the implementation proposal for the data requirement conditions, assuming a process runs
uninterrupted on the same processor. To handle context switches correctly, a processor must stall before switching
until the various buffers mentioned above empty. Overflow of the above buffers can also be handled by making
a processor stall until an entry is deleted from the relevant buffer.
The above proposal never leads to deadlock or livelock as long as the underlying cache-coherence protocol
is implemented correctly, and messages are not lost in the network (or a time-out that initiates a system clean-up is
generated on a lost message). Specifically, the above proposal never stalls a memory operation indefinitely since
(i) the proposal never delays the completion of issued data operations, and (ii) the proposal delays an operation
only if certain issued data operations are incomplete. Thus, the above proposal does not lead to deadlock or
livelock.
5. Data-Race-Free-1 vs. Other Models
Previous sections have shown how the data-race-free-1 memory model unifies weak ordering, release con-
sistency, the VAX model, and data-race-free-0. This section first summarizes other memory models proposed in
the literature, and then examines how data-race-free-1 relates to them.
The IBM 370 memory model [19] guarantees that except for a write followed by a read to a different loca-
tion, operations of a single processor will appear to execute in program order, and writes will appear to execute
atomically. The 370 also provides serialization operations. Before executing a serialization operation, a processor
completes all operations that are before the serialization operation according to program order. Before executing
any nonserialization operation, a processor completes all serialization operations that are before that nonserializa-
tion operation according to program order. The processor consistency [11, 16], PRAM [22] and total store ordering
[25] models ensure that writes of a given processor appear to execute in the same order to all other processors.
The models mainly differ in whether a write appears to become visible to all other processors simultaneously or at
different times. The partial store ordering model [25] is similar to total store ordering except that it orders writes
by a processor only if they are separated by a store barrier operation. The model known as release consistency
with processor-consistent special operations [11] is similar to release consistency with sequentially consistent special
operations except that it requires special operations (syncs and nsyncs) to be processor-consistent. The
concurrent-consistency model [26] ensures sequential consistency to all programs except those "which explicitly
test for sequential consistency or take access timings into consideration." The slow memory model [18] requires
that a read return the value of some previous conflicting write. After a value written by (say) processor P i is read,
the values of earlier conflicting writes by P i cannot be returned. The causal memory model [5, 18] ensures that
any write that causally precedes a read is observed by the read. Causal precedence is a transitive relation established
by program order or due to a read that returns the value of a write.
Data-race-free-1 is based on the assumption that most programmers prefer to reason with sequential con-
sistency. Concurrent consistency is the only model above that explicitly states when programmers can expect
sequential consistency; however, the conditions that give sequential consistency seem ambiguous and are difficult
to relate directly to data-race-free-1. The 370 model does not explicitly state when programmers can expect
sequential consistency; however, the previous sections on data-race-free-1 can be used to determine a sufficient
condition as follows. The serialization operations are analogous to the synchronization operations of weak order-
ing; therefore, the 370 appears sequentially consistent to data-race-free programs where serialization operations
that access memory are interpreted as synchronization operations and every write serialization operation is pair-
able with every read serialization operation.
For the remaining models, it is difficult to determine exactly when programmers can expect sequential con-
sistency. If the assumption that programmers prefer to reason with sequential consistency is true, then as stated,
the above models are harder to reason with than data-race-free-1. In the future, we hope to specify the above
models using the approach of data-race-free-1; i.e., specify the models in terms of a formal set of constraints on
programs such that the hardware appears sequentially consistent to all programs that obey those constraints. We
call this approach the sequential consistency normal form. We will investigate if such specifications provide
greater insight and lead to more unifications.
6. Conclusions
Many programmers of shared-memory systems implicitly assume the model of sequential consistency for
the shared memory. Unfortunately, sequential consistency restricts the use of many high performance uniprocessor
optimizations. For higher performance, several alternate memory models have been proposed. Such models
should (1) be simple to reason with and (2) provide high performance. We believe that most programmers prefer
to reason with sequential consistency. Therefore, a way to satisfy the above properties is for a model to appear
sequentially consistent to the most common programs and to give these programs the highest performance possi-
ble. The models of weak ordering, release consistency (with sequentially consistent special operations), the VAX,
and data-race-free-0 are based on the common intuition that if programmers distinguish their data and synchronization
operations, then correct execution can be guaranteed along with high performance. However, each model
formalizes the intuition differently, and has different advantages and disadvantages with respect to the other
models.
This paper proposed a memory model, data-race-free-1, that unifies weak ordering, release consistency, the
VAX model and data-race-free-0, and retains the advantages of each of them. Hardware is data-race-free-1 if it
appears sequentially consistent to all programs that are data-race-free. Data-race-free-1 unifies the four models by
providing a programmer's view that is similar to that of the four models, and by permitting all hardware allowed
by the four models. Compared to weak ordering, data-race-free-1 provides a more formal interface for programmers
since it explicitly states when a program is correctly synchronized (data-race-free) and how hardware
behaves for correctly synchronized programs (sequentially consistent). Also, data-race-free-1 is less restrictive
than weak ordering for hardware designers since it allows an implementation that weak ordering does not allow.
Compared to release consistency, data-race-free-1 is less restrictive for hardware designers since it allows an
implementation that release consistency does not allow. Compared to the VAX model, data-race-free-1 provides
a more formal interface since it explicitly states when a program is correctly synchronized and how hardware
behaves for correctly synchronized programs. Compared to data-race-free-0, data-race-free-1 is less restrictive
for hardware designers since it allows implementations to take different actions on different types of synchronization
operations.
Acknowledgements
We are immensely grateful to Dr. Harold Stone, the editor, for his advice and patience through several revisions
of this paper. We are also grateful to the anonymous referees for many comments and suggestions that have
improved this work considerably. We thank Kourosh Gharachorloo for many insightful discussions on memory
models and comments on earlier drafts of this paper. We also thank Vikram Adve, Brian Bershad, Allan Gottlieb,
Ross Johnson, Alex Klaiber, Jim Larus, David Wood, and Richard Zucker for their valuable comments on earlier
drafts of this paper.
--R
Weak Ordering - A New Definition
Detecting Data Races on Weak Memory Systems
Sufficient Conditions for Implementing the Data-Race-Free-1 Memory Model
An Evaluation of Directory Schemes for Cache Coherence
Implementing and Programming Causal Distributed Shared Memory
Cache Coherence Protocols: Evaluation Using a Multiprocessor Simulation Model
Asynchronous Parallel Successive Overrelaxation for the Symmetric Linear Complementarity Problem
Memory Access Buffering in Multiprocessors
Memory Access Dependencies in Shared-Memory Multiprocessors
Memory Consistency and Event Ordering in Scalable Shared-Memory Multiprocessors
Two Techniques to Enhance the Performance of Memory Consistency Models
Performance Evaluation of Memory Consistency Models for Shared-Memory Multiprocessors
Detecting Violations of Sequential Consistency
Proving Sequential Consistency of High-Performance Shared Memories
Computer Sciences Technical Report
The NYU Ultracomputer - Designing an MIMD Shared Memory Parallel Computer
Weakening Consistency to Enhance Concurrency in Distributed Shared Memories
How to Make a Multiprocessor Computer That Correctly Executes Multiprocess Programs
PRAM: A Scalable Shared Memory
Algorithms for Scalable Synchronization on Shared-Memory Multiprocessors
Seminar at Texas Instruments Research Labs (Dallas
Access Ordering and Coherence in Shared Memory Multiprocessors
Efficient and Correct Execution of Parallel Programs that Share Memory
A Study of Weak Consistency Models
--TR
Cache coherence protocols: evaluation using a multiprocessor simulation model
Memory access buffering in multiprocessors
Efficient and correct execution of parallel programs that share memory
An evaluation of directory schemes for cache coherence
Asynchronous parallel successive overrelaxation for the symmetric linear complementarity problem
Access ordering and coherence in shared memory multiprocessors
Memory Access Dependencies in Shared-Memory Multiprocessors
Algorithms for scalable synchronization on shared-memory multiprocessors
Performance evaluation of memory consistency models for shared-memory multiprocessors
Proving sequential consistency of high-performance shared memories (extended abstract)
Detecting violations of sequential consistency
Detecting data races on weak memory systems
Weak ordering - a new definition
Memory consistency and event ordering in scalable shared-memory multiprocessors
Lockup-free instruction fetch/prefetch cache organization
--CTR
Honghui Lu , Alan L. Cox , Willy Zwaenepoel, Contention elimination by replication of sequential sections in distributed shared memory programs, ACM SIGPLAN Notices, v.36 n.7, p.53-61, July 2001
Alvaro E. Campos , Juan E. Navarro, A page-coherent, causally consistent protocol for distributed shared memory, Journal of Systems and Software, v.72 n.3, p.305-319, August 2004
A. L. Cox , S. Dwarkadas , P. Keleher , H. Lu , R. Rajamony , W. Zwaenepoel, Software versus hardware shared-memory implementation: a case study, ACM SIGARCH Computer Architecture News, v.22 n.2, p.106-117, April 1994
Leonidas I. Kontothanassis , Michael L. Scott , Ricardo Bianchini, Lazy release consistency for hardware-coherent multiprocessors, Proceedings of the 1995 ACM/IEEE conference on Supercomputing (CDROM), p.61-es, December 04-08, 1995, San Diego, California, United States
Neves , Miguel Castro , Paulo Guedes, A checkpoint protocol for an entry consistent shared memory system, Proceedings of the thirteenth annual ACM symposium on Principles of distributed computing, p.121-129, August 14-17, 1994, Los Angeles, California, United States
Chong , Kai Hwang, Performance Analysis of Four Memory Consistency Models for Multithreaded Multiprocessors, IEEE Transactions on Parallel and Distributed Systems, v.6 n.10, p.1085-1099, October 1995
Povl T. Koch , Robert J. Fowler , Eric Jul, Message-driven relaxed consistency in a software distributed shared memory, Proceedings of the 1st USENIX conference on Operating Systems Design and Implementation, p.7-es, November 14-17, 1994, Monterey, California
Jos F. Martnez , Josep Torrellas, Speculative synchronization: applying thread-level speculation to explicitly parallel applications, ACM SIGOPS Operating Systems Review, v.36 n.5, December 2002
James R. Larus , Brad Richards , Guhan Viswanathan, LCM: memory system support for parallel language implementation, ACM SIGPLAN Notices, v.29 n.11, p.208-218, Nov. 1994
Robert Stets , Sandhya Dwarkadas , Nikolaos Hardavellas , Galen Hunt , Leonidas Kontothanassis , Srinivasan Parthasarathy , Michael Scott, Cashmere-2L: software coherent shared memory on a clustered remote-write network, ACM SIGOPS Operating Systems Review, v.31 n.5, p.170-183, Dec. 1997
Ramakrishnan Rajamony , Alan L. Cox, Performance debugging shared memory parallel programs using run-time dependence analysis, ACM SIGMETRICS Performance Evaluation Review, v.25 n.1, p.75-87, June 1997
Alex Gontmakher , Avi Mendelson , Assaf Schuster, Using fine grain multithreading for energy efficient computing, Proceedings of the 12th ACM SIGPLAN symposium on Principles and practice of parallel programming, March 14-17, 2007, San Jose, California, USA
Dejan Perkovic , Peter J. Keleher, A Protocol-Centric Approach to on-the-Fly Race Detection, IEEE Transactions on Parallel and Distributed Systems, v.11 n.10, p.1058-1072, October 2000
Leonidas Kontothanassis , Robert Stets , Galen Hunt , Umit Rencuzogullari , Gautam Altekar , Sandhya Dwarkadas , Michael L. Scott, Shared memory computing on clusters with symmetric multiprocessors and system area networks, ACM Transactions on Computer Systems (TOCS), v.23 n.3, p.301-335, August 2005
Guang R. Gao , Vivek Sarkar, Location Consistency-A New Memory Model and Cache Consistency Protocol, IEEE Transactions on Computers, v.49 n.8, p.798-813, August 2000
Fong Pong , Michel Dubois, Formal Automatic Verification of Cache Coherence in Multiprocessors with Relaxed Memory Models, IEEE Transactions on Parallel and Distributed Systems, v.11 n.9, p.989-1006, September 2000
Robert C. Steinke , Gary J. Nutt, A unified theory of shared memory consistency, Journal of the ACM (JACM), v.51 n.5, p.800-849, September 2004
Vijay S. Pai , Parthasarathy Ranganathan , Sarita V. Adve , Tracy Harton, An evaluation of memory consistency models for shared-memory systems with ILP processors, ACM SIGPLAN Notices, v.31 n.9, p.12-23, Sept. 1996
Matthew J. Zekauskas , Wayne A. Sawdon , Brian N. Bershad, Software write detection for a distributed shared memory, Proceedings of the 1st USENIX conference on Operating Systems Design and Implementation, p.8-es, November 14-17, 1994, Monterey, California
Xiaowei Shen , Arvind , Larry Rudolph, Commit-reconcile & fences (CRF): a new memory model for architects and compiler writers, ACM SIGARCH Computer Architecture News, v.27 n.2, p.150-161, May 1999
Fong Pong , Michel Dubois, Verification techniques for cache coherence protocols, ACM Computing Surveys (CSUR), v.29 n.1, p.82-126, March 1997
Allon Adir , Hagit Attiya , Gil Shurek, Information-Flow Models for Shared Memory with an Application to the PowerPC Architecture, IEEE Transactions on Parallel and Distributed Systems, v.14 n.5, p.502-515, May
Dan Grossman , Jeremy Manson , William Pugh, What do high-level memory models mean for transactions?, Proceedings of the 2006 workshop on Memory system performance and correctness, October 22-22, 2006, San Jose, California
Jeremy Manson , William Pugh , Sarita V. Adve, The Java memory model, ACM SIGPLAN Notices, v.40 n.1, p.378-391, January 2005
Jae Bum Lee , Chu Shik Jhon, Reducing coherence overhead of barrier synchronization in software DSMs, Proceedings of the 1998 ACM/IEEE conference on Supercomputing (CDROM), p.1-18, November 07-13, 1998, San Jose, CA
H. Sarojadevi , S. K. Nandy , S. Balakrishnan, On the correctness of program execution when cache coherence is maintained locally at data-sharing boundaries in distributed shared memory multiprocessors, International Journal of Parallel Programming, v.32 n.5, p.415-446, October 2004
M. Rasit Eskicioglu, A comprehensive bibliography of distributed shared memory, ACM SIGOPS Operating Systems Review, v.30 n.1, p.71-96, Jan. 1996 | hazards and race conditions;data-race-free-1;shared-memory models;weak ordering;shared memory systems;formalization;multiprocessors;data-race-free-0;sequential consistency;release consistency |
629269 | Detection of Weak Unstable Predicates in Distributed Programs. | This paper discusses detection of global predicates in a distributed program. Earlier algorithms for detection of global predicates proposed by Chandy and Lamport (1985) work only for stable predicates. A predicate is stable if it does not turn false once it becomes true. Our algorithms detect even unstable predicates, without excessive overhead. In the past, such predicates have been regarded as too difficult to detect. The predicates are specified by using a logic described formally in this paper. We discuss detection of weak conjunctive predicates that are formed by conjunction of predicates local to processes in the system. Our detection methods will detect whether such a predicate is true for any interleaving of events in the system, regardless of whether the predicate is stable. Also, any predicate that can be reduced to a set of weak conjunctive predicates is detectable. This class of predicates captures many global predicates that are of interest to a programmer. The message complexity of our algorithm is bounded by the number of messages used by the program. The main applications of our results are in debugging and testing of distributed programs. Our algorithms have been incorporated in a distributed debugger that runs on a network of Sun workstations in UNIX. | Introduction
A distributed program is one that runs on multiple processors
connected by a communication network. The state
of such a program is distributed across the network and
no process has access to the global state at any instant.
Detection of a global predicate, i.e. a condition that depends
on the state of multiple processes, is a fundamental
problem in distributed computing. This problem arises in
many contexts such as designing, testing and debugging of
distributed programs.
A global predicate may be either stable or unstable. A
stable predicate is one which never turns false once it becomes
true. Some examples of stable predicates are dead-lock
and termination. Once a system has terminated it
will stay terminated. An unstable predicate is one without
such a property. Its value may alternate between true
and false. Chandy and Lamport [3] have given an elegant
algorithm to detect stable predicates. Their algorithm is
based on taking a consistent global snapshot of the system
and checking if the snapshot satisfies the global predicate.
(This work was supported in part by the NSF Grant CCR 9110605, the
Navy Grant N00039-91-C-0082, a TRW faculty assistantship award, and
IBM Agreement 153. V. K. Garg is with the Electrical and Computer Engineering
Dept., University of Texas at Austin, Austin; [email protected].
B. Waldecker is with the Austin System Center of Schlumberger Well Services, Austin.)
If the snapshot satisfies the stable predicate, then it can
be inferred that the stable predicate is true at the end of
the snapshot algorithm. Similarly, if the predicate is false
for the snapshot, then it was also false at the beginning of
the snapshot algorithm. By taking such snapshots periodically
a stable property can be detected. Bouge [2] , and
Spezialetti and Kearns [22] have extended this method for
repeated snapshots. This approach does not work for unstable
predicates which may be true only between two snapshots
and not at the time when the snapshot is taken. An
entirely different approach is required for such predicates.
In this paper, we present an approach which detects a
large class of unstable predicates. We begin by defining
a logic that is used for specification of global predicates.
Formulas in this logic are interpreted over a single run of a
distributed program. A run of a distributed program generates
a partial order of events, and there are many total
orders consistent with this partial order. We call a formula
strong if it is true for all total orders, and weak if there exists
a total order for which it is true. We consider a special
class of predicates defined in this logic in which a global
state formula is either a disjunction, or a conjunction of local
predicates. Since disjunctive predicates can simply be
detected by incorporating a local predicate detection mechanism
at each process, we focus on conjunctive predicates.
In this paper, we describe algorithms for detection of weak
types of these predicates. Detection of strong predicates is
discussed in [10] .
Many of our detection algorithms use timestamp vectors
as proposed by Fidge [6] and Mattern [17]. Each process
detects its local predicate and records the timestamp associated
with the event. These timestamps are sent to
a checker process which uses these timestamps to decide
if the global predicate became true. We show that our
method uses the optimal number of comparisons by providing
an adversary argument. We also show that the checking
process can be decentralized, making our algorithms useful
even for large networks.
The algorithms presented in this paper have many appli-
cations. In debugging a distributed program, a programmer
may specify a breakpoint on a condition using our logic
and then detect if the condition became true. Our algorithms
can also be used for testing distributed programs.
Any condition that must be true in a valid run of a distributed
program may be specified and then its occurrence
can be verified. An important property of our algorithms
is that they detect even those errors which may not manifest
themselves in a particular execution, but may do so
with different processing speeds. As an example, consider
a distributed mutual exclusion algorithm. In some run, it
may be possible that two processes do not access critical
region even if they both had permission to enter the critical
region. Our algorithms will detect such a scenario under
certain conditions described in the paper.
Cooper and Marzullo [5], and Haban and Weigel [11] also
describe predicate detection, but they deal with general
predicates. Detection of such predicates is intractable since
it involves a combinatorial explosion of the state space. For
example, the algorithm proposed by Cooper and Marzullo
[5] has complexity O(k^n), where k is the maximum number
of events a monitored process has executed and n is the
number of processes. The fundamental difference between
our algorithm and their algorithm is that their algorithm
explicitly checks all possible global states, whereas our algorithm
does not. Miller and Choi [19] discuss mainly
linked predicates. They do not discuss detection of conjunctive
predicates (in our sense) which are most useful
in distributed programs. Moreover, they do not make distinction
between program messages and messages used by
the detection algorithm. As a result, the linked predicate
detected by Miller and Choi's algorithm may be true
when the debugger is present but may become false when
it is removed. Our algorithms avoid this problem. Hurfin,
Plouzeau and Raynal [12] also discuss methods for detecting
atomic sequences of predicates in distributed computa-
tions. Spezialetti and Kearns [23] discuss methods for recognizing
event occurrences without taking snapshots. How-
ever, their approach is suitable only for monotonic events
which are similar to stable properties. An overview of these
and some other approaches can be found in [20].
This paper is organized as follows: Section II presents
our logic for describing unstable predicates in a distributed
program. It describes the notion of a distributed run, a
global sequence and the logic for specification of global
predicates. Section III discusses a necessary and sufficient
condition for detection of weak conjunctive predicates. It
also shows that detection of weak conjunctive predicates
is sufficient to detect any global predicate on a finite state
program, or any global predicate that can be written as a
boolean expression of local conditions. Section IV presents
an algorithm for detection of a weak conjunctive predicate.
Section V describes a technique to decentralize our algo-
rithm. Section VI gives some details of an implementation
of our algorithms in a distributed debugger. Finally, section
VII gives conclusions of this paper.
II. Our Model
A. Distributed Run
We assume a loosely-coupled message-passing system without
any shared memory or a global clock. A distributed
program consists of a set of n processes denoted by
communicating solely via asynchronous mes-
sages. In this paper, we will be concerned with a single
run r of a distributed program. Each process P i in that
run generates a single execution trace r[i] which is a finite
sequence of states and actions which alternate beginning
with an initial state. The state of a process is defined
by the value of all its variables including its program
counter. For example, the process P i generates the trace
s i,0 a i,0 s i,1 a i,1 ..., where the s i,k 's are the local states
and the a i,k 's are the local actions in the process P i . There are
three kinds of actions - internal, send and receive. A send
action, denoted by send(i → j, φ), means the sending of a
message φ from the process P i to the process P j . A receive
action, denoted by receive(i → j, φ), means the receiving
of the message φ from the process P i by the process P j . We
assume in this paper that no messages are lost, altered or
spuriously introduced. We do not make any assumptions
about FIFO nature of the channels. A run r is a vector of
traces with r[i] as the trace of the process P i . From the
reliability of messages we obtain that receive(i → j, φ) occurs in
r[j] only if send(i → j, φ) occurs in r[i].
We also define a happened-before relation (denoted by →)
between states similar to that of Lamport's happened-before
relation between events.
Definition 1 The state s in the trace r[i] happened-before
(→) the state t in the trace r[j] if and only if one of the
following holds:
1. s occurs before t in r[i].
2. The action following s is the send of a message and the
action before t is the reception of that message.
3. There exists a state u in one of the traces such that
s → u and u → t.
The relation → is a partial order on the states of the processes
in the system. As a result of rules 2 and 3 in the
above definition, we say that there is a message path from
state s to state t if s → t and they are in different processes.
A run can be visualized as a valid error-free process time
diagram [16].
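As a small, self-contained illustration of this definition (our own example, not from the paper), the C program below encodes a run with two processes, two states per process, and one message, and computes the happened-before relation as the transitive closure of rules 1 and 2; the two states that remain unordered are incomparable.

#include <stdbool.h>
#include <stdio.h>

#define NSTATES 4   /* states 0,1 belong to P1 and states 2,3 belong to P2 */

int main(void)
{
    bool hb[NSTATES][NSTATES] = {{false}};

    /* Rule 1: order within each trace (s0 before s1 in r[1], s2 before s3 in r[2]). */
    hb[0][1] = true;
    hb[2][3] = true;

    /* Rule 2: the action following s0 sends a message whose reception is the
     * action before s3, so s0 happened-before s3.                            */
    hb[0][3] = true;

    /* Rule 3: transitive closure.                                            */
    for (int k = 0; k < NSTATES; k++)
        for (int i = 0; i < NSTATES; i++)
            for (int j = 0; j < NSTATES; j++)
                if (hb[i][k] && hb[k][j])
                    hb[i][j] = true;

    /* s1 and s2 are related by neither rule, so they are incomparable.       */
    printf("s1 and s2 incomparable: %s\n",
           (!hb[1][2] && !hb[2][1]) ? "yes" : "no");
    return 0;
}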
Example 2 Consider the following distributed program:
Process P 1 :  var x: integer initially 7;  begin ... end
Process P 2 :  var y, z: integer initially (0,0);  begin ... end
(l 1 , l 2 , ... denote the possible values of the
program counters.) A distributed run r is given by the traces r[1] of P 1 and r[2] of P 2 .
Another run r' can be constructed when the two messages sent
by the process P 1 are received in the reverse order.
B. Global Sequence
A run defines a partial order (!) on the set of actions
and states. For simplicity, we ignore actions from a run
and focus just on states in traces. Thus, r[i] denotes the
sequence of states of P i . In general, there are many total
orders that are consistent with (or linearizations of) this
partial order. A global sequence corresponds to a view of
the run which could be obtained given the existence of a
global clock. Thus, a global sequence is a sequence of global
states where a global state is a vector of local states. This
definition of a global state is different from that of Chandy
and Lamport which includes states of the channels. In our
model, a channel is just a set of all those messages that
have been sent but not received yet. Since this set can be
deduced from all the local states, we do not require the
state of channels to be explicitly included in the global
state. We denote the set of global sequences consistent
with a run r as linear(r). A global sequence g is a finite
sequence of global states denoted as g 0 g 1 ... g l , where
g k is a global state for 0 ≤ k ≤ l. Its suffix starting with
g k (i.e., g k g k+1 ... g l ) is denoted by g^k . Clearly, if the
observer restricts his attention to a single process P i , then
he would observe r[i] or a stutter of r[i]. A stutter of r[i] is
a finite sequence where each state in r[i] may be repeated a
finite number of times. The stutter arises because we have
purposely avoided any reference to physical time. Let s ∥ t
mean that neither s → t nor t → s holds. Then, a global sequence of a run
is defined as:
Definition 3 g is a global sequence of a run r (denoted by
g ∈ linear(r)) if and only if the following constraints hold:
(S1) g restricted to P i is r[i] (or a stutter of r[i]);
(S2) for every global state g k of g and all i ≠ j, g k [i] ∥ g k [j], where g k [i] is the state of P i in the
global state g k .
Example 4 Some global sequences consistent with the run
r in Example 2 are given below:
g = (l ...
h = (l ...
Our model of a distributed run and global sequences does
not assume that the system computation can always be
specified as some interleaving of local actions. The next
global state of a global sequence may result from multiple
independent local actions.
C. Logic Operators
There are three syntactic categories in our logic - bool,
lin and form. The syntax of our logic is as follows:
form ::= A: lin | E: lin
lin ::= 3 lin | lin ,! lin | lin ∧ lin | ¬lin | bool
bool ::= a predicate over a global system state
A bool is a boolean expression defined on a single global
state of the system. Its value can be determined if the
global state is known. For example, if the value of x does not
exceed the value of y in the global state, then the bool (x ≤ y) is true. Here x
and y could be variables in different processes. A lin is
a temporal formula defined over a global sequence. 3 lin
means that there exists a suffix of the global sequence such
that lin is true for the suffix [21]. We also use 2 as the dual
of 3. We have also introduced a binary operator (,!) to
capture sequencing directly. p ,! q means that there exists
suffixes g i and g j of the global sequence such that p is true
of the suffix g i , q is true of the suffix g j , and i ! j. A form
is defined over a set of global sequences and it is simply a
lin qualified with the universal (A:) or the existential (E:)
quantifier. Thus, the semantics of our logic is as follows:
A: lin is true for a run r iff every global sequence in linear(r) satisfies lin.
E: lin is true for a run r iff some global sequence in linear(r) satisfies lin.
A, and E quantify over the set of global sequences that
a distributed run may exhibit given the trace for each pro-
cess. A:p means that the predicate p holds for all global
sequences and E:p means that the predicate p holds for
some global sequence. We call formulas starting with A:
as strong formulas and formulas starting with E: as weak
formulas. The intuition behind the term strong is that a
strong formula is true no matter how fast or slow the individual
processes in the system execute. That is, it holds
for all execution speeds which generate the same trace for
an individual process. A weak formula is true if and only
if there exists one global sequence in which it is true. In
other words, the predicate can be made true by choosing
appropriate execution speeds of various processors.
The difficulty of checking truthness of a global predicate
arises from two sources. First, if there are n processes in
the system, the total number of global sequences (in which
a global state is not repeated) is exponential in n and the
size of the traces. Secondly, the global state is distributed
across the network during an actual run. Thus, detection
of any general predicate in the above logic is not feasible
in a distributed program. To avoid the problem of combinatorial
explosion, we focus on detection of predicates
belonging to a class that we believe captures a large sub-set
of predicates interesting to a programmer. We use the
word local to refer to a predicate or condition that involves
the state of a single process in the system. Such a condition
can be easily checked by the process itself. We detect
predicates that are boolean expressions of local predicates.
Following are examples of the formulas detectable by our
algorithms:
1. Suppose we are developing a mutual exclusion algo-
rithm. Let CS i represent the local predicate that the process
P i is in critical section. Then, the following formula
detects any possibility of violation of mutual exclusion for
a particular run: E: 3(CS i ∧ CS j ) for some i ≠ j.
2. In the example 4, we can check a predicate relating the
variables of the two processes. Note that the predicate
is not true for the global sequence g, but it is true for the
global sequence h. Our algorithm will detect the above
predicate to be true for the run r even though the global
sequence executed may be g.
3. Assume that in a database application, serializability is
enforced using a two phase locking scheme [15]. Further
assume that there are two types of locks: read and write.
Then, a formula asserting that two processes simultaneously hold
conflicting locks (e.g., two write locks) on the same item may be useful to identify an error in the implementation.
III. Weak Conjunctive Predicates
A weak conjunctive predicate (WCP) is true for a given
run if and only if there exists a global sequence consistent
with that run in which all conjuncts are true in some global
state. Practically speaking, this type of predicate is most
useful for bad or undesirable predicates (i.e. predicates that
should never become true). In such cases, the programmer
would like to know whenever it is possible that the bad
predicate may become true. As an example, consider the
classical mutual exclusion situation. We may use a WCP
to check if the correctness criterion of never having two or
more processes in their critical sections at the same time is
met. We would want to detect the predicate "process x is in
its critical section and process y is in its critical section". It
is important to observe that our algorithms will report the
possibility of mutual exclusion violation even if it was not
violated in the execution that happened. The detection
will occur if and only if there exists a consistent cut in
which all local predicates are true. Thus, our techniques
detect errors that may be hidden in some run due to race
conditions.
A. Importance of Weak Conjunctive Predicates
Conjunctive predicates form the most interesting class of
predicates because their detection is sufficient for detection
of any global predicate which can be written as a boolean
expression of local predicates. This observation is shown
below:
Lemma 5 Let p be any predicate constructed from local
predicates using boolean connectives. Then E: 3p can be
detected using an algorithm that can detect E: 3q, where q
is a pure conjunction of local predicates.
Proof: We first write p in its disjunctive normal form.
Thus, E: 3p ≡ E: 3(q 1 ∨ q 2 ∨ ... ∨ q l ), where each q i is
a pure conjunction of local predicates. Next, we observe
that
E: 3(q 1 ∨ ... ∨ q l )
≡ { semantics of E and 3 }
there exists g ∈ linear(r) such that some suffix of g satisfies (q 1 ∨ ... ∨ q l )
≡ { semantics of ∨ }
there exists g ∈ linear(r) such that some suffix of g satisfies q i , for some i
≡ { semantics of E and 3 }
E: 3q 1 ∨ E: 3q 2 ∨ ... ∨ E: 3q l .
Thus, the problem of detecting E: 3p is reduced to solving
l problems of detecting E: 3q i , where each q i is a pure conjunction
of local predicates.
Our approach is most useful when the global predicate
can be written as a boolean expression of local predicates.
As an example, consider a distributed program in which
x, y and z are in three different processes. Then, a predicate
such as E: 3(x ∧ (y ∨ z)), with x, y and z boolean, can be rewritten as E: 3(x ∧ y) ∨ E: 3(x ∧ z),
where each part is a weak conjunctive predicate.
We note that even if the global predicate is not a boolean
expression of local predicates, but it is satisfied by only a
finite number of possible global states, then it can again be
rewritten as a disjunction of weak conjunctive predicates.
For example, consider the predicate E: 3(x = y), where
x and y are in different processes. (x = y) is not a local
predicate as it depends on both processes. However, if we
know that x and y can only take values {0, 1}, then the
above expression can be rewritten as E: 3((x = 0 ∧ y = 0) ∨ (x = 1 ∧ y = 1)).
This is equivalent to E: 3(x = 0 ∧ y = 0) ∨ E: 3(x = 1 ∧ y = 1).
Each disjunct in this expression is a weak conjunctive
predicate.
We observe that predicates of the form A : 2bool can
also be easily detected as they are simply duals of predicates
of the form E: 3(¬bool), which can be detected as discussed above.
In this paper, we have emphasized conjunctive predicates
and not disjunctive predicates. The reason is that
disjunctive predicates are quite simple to detect. To detect
a disjunctive predicate E: 3(LP 1 ∨ LP 2 ∨ ... ∨ LP m ), it
is sufficient for the process P i to monitor LP i . If any of
the processes finds its local predicate true, the disjunctive
predicate is true.
B. Conditions for Weak Conjunctive Predicates
We use LP i to denote a local predicate in the process P i ,
and LP i (s) to denote that the predicate LP i is true in the
state s. We say that s ∈ r[i] if the state s occurs in the sequence
r[i].
Our aim is to detect whether E: 3(LP 1 ∧ LP 2 ∧ ... ∧ LP m )
holds for a given r. We can assume m ≤ n because LP i ∧
LP j is just another local predicate if LP i and LP j belong
to the same process. We now present a theorem which
states the necessary and sufficient conditions for a weak
conjunctive predicate to hold.
Theorem 6 E: 3(LP 1 ∧ LP 2 ∧ ... ∧ LP m ) is true for a run
r iff for all 1 ≤ i ≤ m, there exist states s i ∈ r[i] such that LP i is true in
state s i , and s i and s j are incomparable for i ≠ j. That is,
E: 3(LP 1 ∧ ... ∧ LP m ) ≡ ∃ s 1 ∈ r[1], ..., s m ∈ r[m] :
(∀ i : LP i (s i )) ∧ (∀ i ≠ j : s i ∥ s j ).
Proof: First assume that E: 3(LP 1 ∧ LP 2 ∧ ... ∧ LP m ) is
true for the run r. By definition, there is a global sequence
in linear(r) which has a global state, g, where all local
predicates are true. We define s i = g[i] (note that s i occurs in
r[i]). Now consider any two distinct
indices i and j between 1 and m. Since s i and s j correspond
to the same global state, s i and s j must be incomparable
by (S2). Therefore, ∃ s i ∈ r[i], s j ∈ r[j] : LP i (s i ) ∧ LP j (s j ) ∧ s i ∥ s j .
We prove the other direction (⇐) for m = 2. The proof
for the general case is similar. Assume that there exist
states s 1 ∈ r[1] and s 2 ∈ r[2] such that s 1 and s 2 are
incomparable, and LP 1 (s 1 ) and LP 2 (s 2 ) hold. This implies that
there is no message path from s 1 to s 2 or vice-versa. Thus,
any message received in or before s 2 could not have been
sent after s 1 and any message received in or before s 1 could
not have been sent after s 2 . Fig. 1. illustrates this. Thus
Fig. 1. Incomparable States Producing A Single Global State (step 1: P1 and P2 execute until P1 is at s1 and P2 is before s2; step 2: P1 freezes at s1 and P2 executes until it is at s2)
Thus it is possible to construct the following execution (global sequence):
1. Let both processes execute consistent with the run r
until either P 1 is at s 1 and P 2 is before s 2 , or P 2 is at s 2
and P 1 is before s 1 . Assume without loss of generality
that the former case holds.
2. Freeze P 1 at s 1 and let P 2 execute until it is at s 2 .
This is possible because there is no message sent after
s 1 that is received in or before s 2 .
We now have a global state g in which both LP 1 and LP 2
are true.
IV. Detection of Weak Conjunctive Predicates
Fig. 2. Examples of lcmvectors
Theorem 6 shows that it is necessary and sufficient to
find a set of incomparable states in which local predicates
are true to detect a weak conjunctive predicate. In this
section, we present a centralized algorithm to do so. Later,
we will see how the algorithm can be decentralized. In
this algorithm, one process serves as a checker. All other
processes involved in WCP are referred to as non-checker
processes. These processes, shown in Fig. 3, check for local
predicates.
Each non-checker process keeps its own local lcmvector
(last causal message vector) of timestamps. These timestamp
vectors are a slight modification of the virtual time
vectors proposed by [6,17]. For the process P j , lcmvec-
tor[i] (i ≠ j) is the message id of the most recent message
from P i (to anybody) which has a causal relationship to
P j ; lcmvector[j] for the process P j is the next message id
that P j will use. To maintain the lcmvector information,
we require every process to include its lcmvector in each
program message it sends. Whenever a process receives a
program message, it updates its own lcmvector by taking
the component-wise maximum of its lcmvector and the one
contained in the message. Fig. 2 illustrates this by showing
the lcmvector in each interval. Whenever the local
predicate of a process becomes true for the first time since
the most recently sent message (or the beginning of the
trace), it generates a debug message containing its local
timestamp vector and sends it to the checker process.
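To make this bookkeeping concrete, the following C++ fragment is a minimal sketch of the non-checker side as just described; the class name NonChecker, the 0-based process indices and the sendDebug hook are illustrative assumptions, not part of the algorithm's original presentation.

#include <algorithm>
#include <vector>

// Minimal sketch of the non-checker bookkeeping described above.
class NonChecker {
public:
    NonChecker(int id, int n) : id_(id), lcmvector_(n, 0), firstflag_(true) {
        lcmvector_[id_] = 1;   // the next message id this process will use
    }

    // Just before sending a program message: piggyback the current lcmvector
    // (its own entry is the id of this message), advance the next message id,
    // and re-arm firstflag.
    std::vector<int> onSend() {
        std::vector<int> piggyback = lcmvector_;
        ++lcmvector_[id_];
        firstflag_ = true;
        return piggyback;
    }

    // On receipt of a program message: component-wise maximum.
    void onReceive(const std::vector<int>& piggyback) {
        for (std::size_t k = 0; k < lcmvector_.size(); ++k)
            lcmvector_[k] = std::max(lcmvector_[k], piggyback[k]);
    }

    // Whenever the local predicate evaluates to true: send a debug message
    // only the first time since the most recent send.
    template <typename SendDebug>
    void onLocalPredicateTrue(SendDebug sendDebug) {
        if (firstflag_) {
            firstflag_ = false;
            sendDebug(lcmvector_);   // forwarded to the checker process
        }
    }

private:
    int id_;                       // this process's index (0-based here)
    std::vector<int> lcmvector_;   // last causal message vector
    bool firstflag_;
};

Note that firstflag is re-armed only on sends, which is what keeps the number of debug messages bounded by the number of program messages sent.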
One of the reasons that the above algorithm is practical
is that a process is not required to send its lcmvector every
time the local predicate is detected. A simple observation
tells us that the lcmvector need not be sent if there has
been no message activity since the last time the lcmvector
was sent. This is because the lcmvector can change its
value only when a message is sent or received. We now
show that it is sufficient to send the lcmvector once after
each message is sent irrespective of the number of messages
received.
Let local(s) denote that the local predicate is true in
state s. We define the predicate first(s) to be true iff
the local predicate is true for the first time since the most
recently sent message (or the beginning of the trace). We
var
  lcmvector: array [1..n] of integer;
    /* last causal msg rcvd from process 1 to n */
  firstflag: boolean init true;
  local pred: Boolean Expression;
    /* the local pred. to be tested by this process */

For sending do
  send (prog, lcmvector, program message data);
  lcmvector[id] := lcmvector[id] + 1;
  firstflag := true;

Upon receive (prog, msg_lcmvector, program message data) do
  lcmvector := component-wise max(lcmvector, msg_lcmvector);

Upon (local pred = true) ∧ firstflag do
  firstflag := false;
  send (dbg, lcmvector) to the checker process;

Fig. 3. Algorithm for weak conjunctive predicates -
non-checker process P id
say that wcp(s 1 , ..., s m ) holds iff s 1 , ..., s m are states
in different processes making the WCP true (as in Theorem
6).
Theorem 7 ∃ s 1 , ..., s m : (∀ i : local(s i )) ∧ wcp(s 1 , ..., s m ) iff
∃ s' 1 , ..., s' m : (∀ i : first(s' i )) ∧ wcp(s' 1 , ..., s' m ).
Proof: (⇐) is trivially true, since first(s) implies local(s). We show (⇒). By symmetry,
it is sufficient to prove the existence of s' 1 such that
first(s' 1 ) and wcp(s' 1 , s 2 , ..., s m ). We choose s' 1 as the first
state in the trace of P 1 since the most recently sent message
(or the beginning of the trace) such that local(s' 1 ) is
true. As s 1 exists, we know that s' 1 exists. By our
choice of s' 1 , first(s' 1 ) is true. Our proof obligation is to
show that wcp(s' 1 , s 2 , ..., s m ). It is sufficient to show that
s' 1 ‖ s j for 2 ≤ j ≤ m. For any such s j , s 1 ↛ s j and there is
no message sent after s' 1 and before s 1 ; hence s' 1 ↛ s j .
Also s j ↛ s' 1 , since s j → s' 1 and s' 1 → s 1 would imply
that s j → s 1 , a contradiction. Therefore, we conclude that
s' 1 ‖ s j for any 2 ≤ j ≤ m.
We now analyze the complexity of non-checker processes.
The space complexity is given by the array lcmvector and
is O(n). The main time complexity is involved in detecting
the local predicates which is the same as for a sequential
debugger. Additional time is required to maintain time
vectors. This is O(n) for every receive of a message. In
the worst case, one debug message is generated for each
program message sent, so the worst case message complexity
is O(m s ), where m s is the number of program messages
sent. In addition, program messages have to include time
vectors.
We now give the algorithm for the checker process which
detects the WCP using the debug messages sent by other
processes. The checker process has a separate queue for
each process involved in the WCP. Incoming debug messages
from processes are enqueued in the appropriate queue.
We assume that the checker process gets its messages from
any process in FIFO order. Note that we do not require
FIFO for the underlying computation. Only the detection
algorithm needs to implement the FIFO property for efficiency
purposes. If the underlying communication is not
FIFO, the checker process can ensure that it receives messages
from non-checker processes in FIFO by using sequence
numbers in messages.
The checker process applies the following definition to
determine the order between two lcmvectors. For any two
lcmvectors u and v, u < v iff (∀ k : u[k] ≤ v[k]) ∧ (u ≠ v).
Furthermore, if we know the
processes the vectors came from, the comparison between
two lcmvectors can be made in constant time. Let Proc :
lcmvectors → {1, ..., n} map an lcmvector to the process it belongs
to. Then, the required computation to check if the
lcmvector u is less than the lcmvector v is
u[Proc(u)] ≤ v[Proc(u)] ∧ u[Proc(v)] < v[Proc(v)].
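As an illustration, a constant-time comparison of this kind might be coded as follows (a sketch only; the TaggedVector type, which pairs an lcmvector with the index of its owning process, is an assumption introduced here for exposition):

#include <vector>

struct TaggedVector {
    int proc;                  // Proc(u): index of the owning process
    std::vector<int> v;        // the lcmvector itself
};

// True iff a < b in the vector order, using two component comparisons
// instead of scanning all n entries.
bool lessThan(const TaggedVector& a, const TaggedVector& b) {
    return a.v[a.proc] <= b.v[a.proc] && a.v[b.proc] < b.v[b.proc];
}

// Two lcmvectors are incomparable iff neither is less than the other.
bool incomparable(const TaggedVector& a, const TaggedVector& b) {
    return !lessThan(a, b) && !lessThan(b, a);
}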
Lemma 8 Let s and t be states in processes P i and P j
(i ≠ j) with lcmvectors u and v, respectively. Then (s → t) iff (u < v);
consequently, (s ‖ t) iff u and v are incomparable.
Proof: If s → t, then there is a message path from s to t. There-
fore, since P j updates its lcmvector upon receipt of a message
and this update is done by taking the component-wise
maximum, we know the following holds: u[k] ≤ v[k] for all k.
Furthermore, since v[j] is the next message id to be used
by P j , P i could not have seen this value as t ↛ s. We
thereby know that v[j] > u[j]. Hence, the following holds: u < v.
We now show that (s ‖ t) implies that u and v are incomparable; together with the first
part of this theorem, this gives both equivalences. If (s ‖ t) then there is no message
path from the state s to the state t or vice-versa. Hence,
when P i is at s and P j is at t, no message sent by P i at or after s can be causally
related to t (and symmetrically for P j ), so u[i] > v[i] and v[j] > u[j].
Therefore, u and v are incomparable.
Thus, the task of the checker process is reduced to checking
ordering between lcmvectors to determine the ordering
between states. The following observation is critical for reducing
the number of comparisons in the checker process:
Lemma 9 If the lcmvector at the head of one queue is
less than the lcmvector at the head of any other queue,
then the smaller lcmvector may be eliminated from further
consideration in checking to see if the WCP is satisfied.
Proof: In order for the WCP to be satisfied, we must find
a set of lcmvectors, one from each queue, such that each is
incomparable with all the others in the set. If the lcmvector
at the head of one queue (q i ) is less than that at the head of
another queue (q j ), we know it will be less than any other
lcmvectors in q j because the queues are in increasing order
from head to tail. Also any later arrivals into q j must be
greater than that at the head of q i . Hence, no entry in q j
will ever be incomparable with that at the head of q i so the
head of q i may be eliminated from further consideration in
checking to see if the WCP is satisfied.
The algorithm given in Fig. 4 is initiated whenever any
new lcmvector is received. If the corresponding queue is
non-empty, then it is simply inserted in the queue; other-
wise, there exists a possibility that the conjunctive predicate
may have become true. The algorithm checks for
var
  changed, newchanged: set of {1, 2, ..., m};

Upon recv(elem) from P k do
  insert(elem, q k );
  if elem is the only element in q k then begin
    changed := {k};
    while (changed ≠ ∅) begin
      newchanged := {};
      for i in changed, and j in {1, 2, ..., m}, i ≠ j, with q i and q j non-empty do
      begin
        if head(q i ) < head(q j ) then newchanged := newchanged ∪ {i};
        if head(q j ) < head(q i ) then newchanged := newchanged ∪ {j};
      end;
      changed := newchanged;
      for i in changed do deletehead(q i );
    end; /* while */
  end;

Fig. 4. Algorithm for weak conjunctive predicates - the
checker process
incomparable lcmvectors by comparing only the heads of
queues. Moreover, it compares only those heads of the
queues which have not been compared earlier. For this
purpose, it uses the variable changed which is the set of
indices for which the heads of the queues have been up-
dated. The while loop maintains the invariant:
I: the head lcmvectors of the queues whose indices are not in changed are pairwise incomparable.
This is done by finding all those elements which are lower
than some other element and including them in changed.
This means that there cannot be two comparable elements
in {1, 2, ..., m} − changed. The loop terminates when
changed is empty. At that point, if all queues are non-
empty, then by the invariant I, we can deduce that all the
heads are incomparable. Let there be m queues with at
most p elements in any queue. The next theorem deals
with the complexity of the above algorithm.
Theorem 10 The above algorithm requires at most O(m 2 p)
comparisons.
Proof: Let comp(k) denote the number of comparisons required
in the k th iteration of the while loop. Let t denote
the total number of iterations of the while loop. Then, the
total number of comparisons equals comp(1) + ... + comp(t). Let changed(k)
represent the value of changed at the k th it-
eration. |changed(k)| for k ≥ 2 represents the number
of elements deleted in the (k − 1) th iteration of the while
loop. From the structure of the for-loops we get that
comp(k) ≤ |changed(k)| · m. Therefore, the total number
of comparisons required is at most
m · (|changed(1)| + ... + |changed(t)|) ≤ m(m + mp) = O(m 2 p),
since |changed(1)| ≤ m and at most p elements are ever deleted from each of the m queues.
The following theorem proves that the complexity of the
above problem is at least pm(m − 1)/2 comparisons,
thus showing that our
algorithm is optimal [8].
Theorem 11 Any algorithm which determines whether there
exists a set of incomparable vectors of size m in m chains
of size at most p, makes at least pm(m − 1)/2 comparisons.
Proof: We first show it for the case when the size of each
queue is exactly one, i.e., p = 1. The adversary will give to
the algorithm a set in which either zero or exactly one pair
of elements is comparable. The adversary also chooses
to answer "incomparable" to the first m(m − 1)/2 − 1 ques-
tions. Thus, the algorithm cannot determine if the set has
a comparable pair unless it asks about all the pairs.
We now show the result for a general p. Let q i [k] denote
the k th element in the queue q i . The adversary will give
the algorithm q i 's with the following characteristic: an element
q i [k] can be comparable only with elements q j [k] at the same position k.
Thus, the above problem reduces to p instances of the problem
which checks whether a set of m elements is pairwise incomparable.
If the algorithm does not completely solve one instance,
then the adversary chooses that instance to show m queues
consistent with all of its answers but different in the final
outcome.
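Putting the pieces of this section together, the following C++ sketch shows one possible shape of the checker process; the Checker and TaggedVector names, the report-by-return-value convention and the use of standard containers are assumptions made for illustration and are not taken from the original presentation of the algorithm.

#include <deque>
#include <set>
#include <vector>

struct TaggedVector {
    int proc;                  // index of the owning process
    std::vector<int> v;        // the lcmvector
};

static bool lessThan(const TaggedVector& a, const TaggedVector& b) {
    return a.v[a.proc] <= b.v[a.proc] && a.v[b.proc] < b.v[b.proc];
}

class Checker {
public:
    explicit Checker(int m) : q_(m) {}

    // Called when a debug message (an lcmvector) arrives from process k.
    // Returns true when the heads of all queues are pairwise incomparable,
    // i.e. the weak conjunctive predicate has been detected.
    bool receive(int k, const TaggedVector& elem) {
        bool wasEmpty = q_[k].empty();
        q_[k].push_back(elem);
        if (!wasEmpty) return false;          // head unchanged: nothing to compare

        std::set<int> changed = {k};
        while (!changed.empty()) {
            std::set<int> newchanged;
            for (int i : changed)
                for (int j = 0; j < static_cast<int>(q_.size()); ++j) {
                    if (i == j || q_[i].empty() || q_[j].empty()) continue;
                    if (lessThan(q_[i].front(), q_[j].front())) newchanged.insert(i);
                    if (lessThan(q_[j].front(), q_[i].front())) newchanged.insert(j);
                }
            changed = newchanged;
            for (int i : changed) q_[i].pop_front();   // eliminate dominated heads
        }
        for (const auto& q : q_)                        // detection requires every
            if (q.empty()) return false;                // queue to be non-empty
        return true;
    }

private:
    std::vector<std::deque<TaggedVector>> q_;
};

The early return when the queue was already non-empty mirrors the observation above that only a change of a queue head can create a new set of incomparable heads.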
V. Decentralization of the Detection Algorithm
We now show techniques for decentralizing the above
algorithm. From the property (P1), we can deduce that if
a set of vectors S forms an anti-chain (that is all pairs of
vectors are incomparable), then the following holds:
∀ distinct s, t ∈ S : s[Proc(s)] > t[Proc(s)]          (P2)
We denote this condition by the predicate inc(S). The following
theorem shows that the process of checking inc(S)
can be decomposed into that of checking it for smaller sets.
Theorem 12 Let T and U be disjoint sets of lcmvectors such
that S = T ∪ U. Let max X denote the vector obtained
by taking the componentwise maximum of all vectors in the set
X. Then,
inc(S) iff inc(T) ∧ inc(U) ∧ (∀ t ∈ T : max T [Proc(t)] > max U [Proc(t)]) ∧ (∀ u ∈ U : max U [Proc(u)] > max T [Proc(u)]).
Proof: (⇒) inc(T) and inc(U) are clearly true because
T ⊆ S and U ⊆ S. We show that ∀ t ∈ T : max T [Proc(t)] > max U [Proc(t)];
the other conjunct is proved in a similar fashion.
From (P2) for S, we deduce that for all distinct s, t ∈ T : t[Proc(t)] >
s[Proc(t)]. This means that max T [Proc(t)] = t[Proc(t)],
by the definition of max T . From (P2), we also deduce
that ∀ u ∈ U : t[Proc(t)] > u[Proc(t)]. This means that
max U [Proc(t)] < t[Proc(t)], by the definition of max U .
From the above two assertions we conclude that max T [Proc(t)] > max U [Proc(t)].
(⇐) We will show that (P2) holds for S, i.e.
∀ distinct s, t ∈ S : s[Proc(s)] > t[Proc(s)].
If both s and t belong to T, or both belong to U, then the above
is true from inc(T) and inc(U). Let us assume without
loss of generality that t ∈ T and u ∈ U . We need to show
that t[Proc(t)] > u[Proc(t)] (the other part is proved sim-
ilarly). From inc(T) we conclude that max T [Proc(t)] =
t[Proc(t)]. And now from max T [Proc(t)] > max U [Proc(t)] ≥ u[Proc(t)]
we conclude that t[Proc(t)] > u[Proc(t)].
Using the above theorem and the notions of a hierar-
chy, the algorithm for checking WCP can be decentralized
as follows. We may divide the set of processes into two
groups. The group checker process checks for WCP within
its group. On finding one, it sends the maximum of all
lcmvectors to a higher process in the hierarchy. This process
checks the last two conjuncts of the above theorem.
Clearly, the above argument can be generalized to a hierarchy
of any depth.
Example 13 Consider a distributed program with four
processes. Let the lcmvectors corresponding to these processes
be s 1 , s 2 , s 3 and s 4 .
Now, instead of checking whether the entire set consists of
incomparable vectors, we divide it into two subsets,
T = {s 1 , s 2 } and U = {s 3 , s 4 },
and check that each one of them consists of incomparable vectors. This computation
can be done by group checker processes. The group checkers send max T and max U
to the higher-level process. This process can check that max T is
strictly greater than max U in the first two components
and max U is strictly greater than max T in the last
two components. Hence, by Theorem 12, all vectors in the
set S are pairwise incomparable.
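The group-level computation suggested by Theorem 12 could be sketched in C++ as follows; the Member type and the function names are illustrative assumptions, and a real group checker would of course also verify inc() within its own group before forwarding its maximum.

#include <algorithm>
#include <vector>

struct Member {
    int proc;                  // process index owning this lcmvector
    std::vector<int> v;        // the lcmvector itself
};

std::vector<int> componentwiseMax(const std::vector<Member>& group, int n) {
    std::vector<int> m(n, 0);
    for (const auto& s : group)
        for (int k = 0; k < n; ++k) m[k] = std::max(m[k], s.v[k]);
    return m;
}

// Higher-level test: max_T must strictly dominate max_U on every component
// owned by T, and vice versa (the last two conjuncts of Theorem 12).
bool crossCheck(const std::vector<Member>& T, const std::vector<Member>& U, int n) {
    std::vector<int> maxT = componentwiseMax(T, n);
    std::vector<int> maxU = componentwiseMax(U, n);
    for (const auto& t : T)
        if (!(maxT[t.proc] > maxU[t.proc])) return false;
    for (const auto& u : U)
        if (!(maxU[u.proc] > maxT[u.proc])) return false;
    return true;
}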
VI. Implementation: UTDDB
The main application of our results are in debugging
and testing of distributed programs. We have incorporated
our algorithms in the distributed debugger called UTDDB
(University of Texas Distributed Debugger) [14]. The on-line
debugger is able to detect global states or sequences of
global states in a distributed computation. UTDDB consists
of two types of processes - coordinator and monitor
type. There exists only one coordinator process, but the
number of monitor processes is the same as the number of
application processes in the underlying distributed computation.
The coordinator process serves as the checker process
for WCP as well as the user-interface of UTDDB to the
programmer. It accepts input from the programmer such
as distributed predicates to be detected. It also reports to
the programmer if the predicate is detected.
Monitor processes are hidden from the programmer. Each
monitor process detects local predicates defined
within the domain of the application process it is monitor-
ing. This is done by single-stepping the program. After
each step, the monitor examines the address space of the
application process to check if any of the simple predicates
in its list are true. It is also responsible for implementing the algorithm
described for a non-checker process in Section IV. In
particular, it maintains the vector clock mechanism.
In a distributed debugger, the delays between occurrence
of a predicate, its detection and halting of the program may
be substantial. Thus, when the program is finally halted,
it may no longer be in a state the programmer is interested
in. Therefore, for the weak conjunctive predicate, UT-
DDB gives the programmer the option of rolling back the
distributed computation to a consistent global state where
the predicate is true. The coordinator uses the set of timestamps
that detected the WCP predicate to calculate this
global state which it then sends to all the monitors. As the
application processes execute, they record incoming events
to a file. So, when a monitor receives a message telling it to
roll back an application process, the monitor restarts the
application process and replays the recorded events until
the process reaches a local state that is part of the global
state where the weak conjunctive predicate is true. Such
a restart assumes that the only non-determinism in the
program is due to reordering of messages.
Our algorithms are also used in a trace analyzer (another
part of UTDDB) for distributed programs [4]. Our analyzer
monitors a distributed program and gathers enough
information to form a distributed run as described in Section
II. This approach reduces the probe effect that the
distributed program may experience if the detection was
carried out while the program was in execution. The user
can then ask UTDDB whether any predicate expressed in a
subset of the logic described in this paper ever became true.
We are currently extending these algorithms for detection
of sequences of global predicates [1,9,25], and relational
global predicates [24].
VII. Conclusions
We have discussed detection of global predicates in a
distributed program. Earlier algorithms for detection of
global predicates proposed by Chandy and Lamport work
only for stable predicates. Our algorithms detect even unstable
predicates with reasonable time, space and message
complexity.
Our experience with these algorithms has been extremely
encouraging. In the current implementation, the main
overhead is in the local monitor process for checking local
predicates. By providing special hardware support even
this overhead can be reduced. For example, most architectures
provide special hardware support such as break-point
traps if a certain location is accessed. This feature can
be used to make detection of local predicates of the form
(program at line x) very efficient.
We believe that the algorithms presented in this paper should
be part of every distributed debugger because they incur
low overhead, and are quite useful in identifying errors in
the program.
Acknowledgements
We would like to thank Bryan Chin, Mohamed Gouda,
Greg Hoagland, Jay Misra, William Myre, Don Pazel, and
Alex Tomlinson for their comments and observations which
have enabled us to strengthen this work. We would also
like to thank Bryan Chin for implementing offline versions
of our algorithms and Greg Hoagland for incorporating our
algorithms in UTDDB. We would also like to thank anonymous
referees for their meticulous review of an earlier version
of the paper.
--R
"Distributed Debugging Tools for Heterogeneous Distributed Systems"
"Repeated Snapshots in Distributed Systems with Synchronous Communication and Their Implementation in CSP"
"Distributed Snapshots: Determining Global States of Distributed Systems"
"An Offline Debugger for Distributed Programs"
"Consistent Detection of Global Predicates"
"Partial Orders for Parallel Debugging"
"Causal Distributed Break- points"
"Some Optimal Algorithms for Decomposed Partially Ordered Sets,"
"Concurrent Regular Expressions and their Relationship to Petri Net Languages,"
"Detection of Unstable Predicate in Distributed Programs,"
"Global events and global breakpoints in distributed systems"
"Detecting Atomic Sequences of Predicates in Distributed Computations,"
"Computing Particular Snapshots in Distributed Systems"
"A Debugger for Distributed Programs"
Database System Concepts
"Time, Clocks, and the Ordering of Events in a Distributed System"
"Virtual time and global states of distributed sys- tems"
"Debugging Concurrent Programs"
"Breakpoints and Halting in Distributed Programs"
"Detecting Causal Relationships in Distributed Computations: In Search of the Holy Grail"
"The Complexity of Propositional Linear Temporal Logic"
"Efficient Distributed Snapshots"
"A General Approach to Recognizing Event Occurrences in Distributed Computations"
"Detecting Relational Global Predicates in Distributed Systems,"
"Detection of Unstable Predicates in Debugging Distributed Programs"
--TR
The complexity of propositional linear temporal logics
Database system concepts
Repeated snapshots in distributed systems with synchronous communications and their implementation in CSP
Global events and global breakpoints in distributed systems
Partial orders for parallel debugging
Debugging concurrent programs
Consistent detection of global predicates
Detection of unstable predicates in debugging distributed programs
Concurrent regular expressions and their relationship to Petri nets
Some optimal algorithms for decomposed partially ordered sets
Detecting relational global predicates in distributed systems
Detecting atomic sequences of predicates in distributed computations
Distributed snapshots
Time, clocks, and the ordering of events in a distributed system
Detection of Unstable Predicates in Distributed Programs
--CTR
Vijay K. Garg, Methods for Observing Global Properties in Distributed Systems, IEEE Parallel & Distributed Technology: Systems & Technology, v.5 n.4, p.69-77, October 1997
Sujatha Kashyap , Vijay K. Garg, Intractability results in predicate detection, Information Processing Letters, v.94 n.6, p.277-282,
Hsien-Kuang Chiou , Willard Korfhage, Enhancing Distributed Event Predicate Detection Algorithms, IEEE Transactions on Parallel and Distributed Systems, v.7 n.7, p.673-676, July 1996
Loon-Been Chen , I-Chen Wu, An Efficient Distributed Online Algorithm to Detect Strong Conjunctive Predicates, IEEE Transactions on Software Engineering, v.28 n.11, p.1077-1084, November 2002
Punit Chandra , Ajay D. Kshemkalyani, Distributed algorithm to detect strong conjunctive predicates, Information Processing Letters, v.87 n.5, p.243-249, 15 September
Karun N. Biyani , Sandeep S. Kulkarni, Testing Dynamic Adaptation in Distributed Systems, Proceedings of the Second International Workshop on Automation of Software Test, p.10, May 20-26, 2007
Ajay D. Kshemkalyani, A Fine-Grained Modality Classification for Global Predicates, IEEE Transactions on Parallel and Distributed Systems, v.14 n.8, p.807-816, August
Michel Hurfin , Masaaki Mizuno , Mukesh Singhal , Michel Raynal, Efficient Distributed Detection of Conjunctions of Local Predicates, IEEE Transactions on Software Engineering, v.24 n.8, p.664-677, August 1998
Sujatha Kashyap , Vijay K. Garg, Exploiting predicate structure for efficient reachability detection, Proceedings of the 20th IEEE/ACM international Conference on Automated software engineering, November 07-11, 2005, Long Beach, CA, USA
Guy Dumais , Hon F. Li, Distributed Predicate Detection in Series-Parallel Systems, IEEE Transactions on Parallel and Distributed Systems, v.13 n.4, p.373-387, April 2002
Craig M. Chase , Vijay K. Garg, Detection of global predicates: techniques and their limitations, Distributed Computing, v.11 n.4, p.191-201, October 1998
Vijay K. Garg , Brian Waldecker, Detection of Strong Unstable Predicates in Distributed Programs, IEEE Transactions on Parallel and Distributed Systems, v.7 n.12, p.1323-1333, December 1996
Scott D. Stoller, Detecting global predicates in distributed systems with clocks, Distributed Computing, v.13 n.2, p.85-98, April 2000
Punit Chandra , Ajay D. Kshemkalyani, Causality-Based Predicate Detection across Space and Time, IEEE Transactions on Computers, v.54 n.11, p.1438-1453, November 2005
Anish Arora , Sandeep S. Kulkarni , Murat Demirbas, Resettable vector clocks, Journal of Parallel and Distributed Computing, v.66 n.2, p.221-237, February 2006
Anirban Majumdar , Clark Thomborson, Manufacturing opaque predicates in distributed systems for code obfuscation, Proceedings of the 29th Australasian Computer Science Conference, p.187-196, January 16-19, 2006, Hobart, Australia
Felix C. Gärtner, Fundamentals of fault-tolerant distributed computing in asynchronous environments, ACM Computing Surveys (CSUR), v.31 n.1, p.1-26, March 1999 | weak unstable predicates;distributed memory systems;distributed programs;program testing;global predicates;sun workstations;testing;UNIX;program debugging;debugging;distributed algorithms;distributed debugger;weak conjunctive predicates;message complexity;communication complexity
629280 | Structuring Fault-Tolerant Object Systems for Modularity in a Distributed Environment. | The object-oriented approach to system structuring has found widespread acceptance among designers and developers of robust computing systems. The authors propose a system structure for distributed programming systems that support persistent objects and describe how properties such as persistence and recoverability can be implemented. The proposed structure is modular, permitting easy exploitation of any distributed computing facilities provided by the underlying system. An existing system constructed according to the principles espoused here is examined to illustrate the practical utility of the proposed approach to system structuring. | Introduction
One computational model that has been advocated for constructing robust
distributed applications is based upon the concept of using nested atomic
actions (nested atomic transactions) controlling operations on persistent
(long-lived) objects. In this model, each object is an instance of some class.
The class defines the set of instance variables each object will contain and the
operations or methods that determine the externally visible behaviour of the
object. The operations of an object have access to the instance variables and
can thus modify the internal state of that object. It is assumed that, in the
absence of failures and concurrency, the invocation of an operation produces
consistent (class specific) state changes to the object. Atomic actions can then
be used to ensure that consistency is preserved even in the presence of
concurrent invocations and failures. Designing and implementing a
programming system capable of supporting such 'objects and actions' based
applications by utilising existing distributed system services is a challenging
task. Support for distributed computing on currently available systems varies
from the provision of bare essential services, in the form of networking
support for message passing, to slightly more advanced services for
interprocess communication (e.g., remote procedure calls), naming and
binding (for locating named services) and remote file access. The challenge
lies in integrating these services into an advanced programming
environment. In this paper we present an architecture which we claim to be
modular in nature: the overall system functionality is divided into a number of
modules which interact with each other through well defined narrow
interfaces. We then describe how this facilitates the task of implementing the
architecture on a variety of systems with differing support for distributed
computing.
In the next section, we present an 'object and action' model of
computation, indicating how a number of distribution transparency
mechanisms can be integrated within that model. Section 3 then identifies the
major system components and their interfaces and the interactions of those
components. The proposed system structure is based upon a retrospective
examination of a distributed system - Arjuna [11, 23, 29] - built at Newcastle.
The main aspects of this system are presented in section 4 and are examined
in light of the discussion in the preceding section. In this section we also
describe how the modular structure of the system has enabled us to port it on
to a number of distributed computing platforms. The Arjuna system thus
demonstrates the practicality of the proposed approach to system structuring
discussed in section 3.
Basic Concepts and Assumptions
It will be assumed that the hardware components of the system are
computers (nodes), connected by a communication subsystem. A node is
assumed to work either as specified or simply to stop working (crash). After a
crash, a node is repaired within a finite amount of time and made active
again. A node may have both stable (crash-proof) and non-stable (volatile)
storage or just non-stable storage. All of the data stored on volatile storage is
assumed to be lost when a crash occurs; any data stored on stable storage
remains unaffected by a crash. Faults in the communication subsystem may
result in failures such as lost, duplicated or corrupted messages. Well known
network protocol techniques are available for coping with such failures, so
their treatment will not be discussed further. We assume that processes on
functioning nodes are capable of communicating with each other.
To develop our ideas, we will first describe some desirable transparency
properties a distributed system should support. It is common to say that a
distributed system should be 'transparent' which means that it can be made
to behave, where necessary, like its non-distributed counterpart. There are
several complementary aspects to transparency [1]:
. Access transparency mechanisms provide a uniform means of
invoking operations of both local and remote objects, concealing any
ensuing network-related communications;
. Location transparency mechanisms conceal the need to know the
whereabouts of an object; knowing the name of an object is sufficient to
be able to access it;
. Migration transparency mechanisms build upon the previous two
mechanisms to support movement of objects from node to node to
improve performance or fault-tolerance;
. Concurrency transparency mechanisms ensure interference-free access
to shared objects in the presence of concurrent invocations;
. Replication transparency mechanisms increase the availability of
objects by replicating them, concealing the intricacies of replica
consistency maintenance;
. Failure transparency mechanisms help exploit the redundancy in the
system to mask failures where possible and to effect recovery
measures.
As stated earlier, we are considering a programming system in which
application programs are composed out of atomic actions (atomic transactions)
manipulating persistent (long-lived) objects. Atomic actions can
be nested. We will be concerned mainly with tolerating 'lower-level'
hardware related failures such as node crashes. So, it will be assumed that, in
the absence of failures, the invocation of an operation produces consistent
(class specific) state changes to the object. Atomic actions then ensure that
only consistent state changes to objects take place despite failures. We will
consider an application program initiated on a node to be the root of a
computation. Distributed execution is achieved by invoking operations on
objects which may be remote from the invoker. An operation invocation upon
a remote object is performed via a remote procedure call (RPC). Since many
object-oriented languages define operation invocation to be synchronous [30],
RPC is a natural communications paradigm to adopt for the support of access
transparency in object-oriented languages. Furthermore, all operation
invocations may be controlled by the use of atomic actions which have the
properties of (i) serialisability, (ii) failure atomicity, and (iii) permanence of
effect. Serialisability ensures that concurrent invocations on shared objects are
free from interference (i.e., any concurrent execution can be shown to be
equivalent to some serial order of execution). Some form of concurrency
control policy, such as that enforced by two-phase locking, is required to
ensure the serialisability property of actions. Failure atomicity ensures that a
computation will either be terminated normally (committed), producing the
intended results (and intended state changes to the objects involved) or
aborted producing no results and no state changes to the objects. This
atomicity property may be obtained by the appropriate use of backward error
recovery, which can be invoked whenever a failure occurs that cannot be
masked. Typical failures causing a computation to be aborted include node
crashes and communication failures such as the continued loss of messages. It
is reasonable to assume that once a top-level atomic action terminates
normally, the results produced are not destroyed by subsequent node
crashes. This is ensured by the third property, permanence of effect , which
requires that any committed state changes (i.e., new states of objects modified
in the atomic action) are recorded on stable storage. A commit protocol is
required during the termination of an atomic action to ensure that either all
the objects updated within the action have their new states recorded on stable
storage (committed), or, if the atomic action aborts, no updates get recorded
[5, 13].
The object and atomic action model provides a natural framework for
designing fault-tolerant systems with persistent objects. In this model, a
persistent object not in use is normally held in a passive state with its state
residing in an object store or object database and activated on demand (i.e.,
when an invocation is made) by loading its state and methods from the object
store to the volatile store, and associating a server process for receiving RPC
invocations. Atomic actions are employed to control the state changes to
activated objects, and the properties of atomic actions given above ensure
failure transparency. Atomic actions also ensure concurrency transparency,
through concurrency control protocols, such as two-phase locking. Access
transparency is normally provided by integrating an RPC pre-processor into
the program development cycle which produces "stub" code for both the
application and the object implementation. A variety of naming, binding and
caching strategies are possible to achieve location and migration
transparencies.
Normally, the persistent state of an object resides on a single node in one
object store, however, the availability of an object can be increased by
replicating it and thus storing it in more than one object store. Object replicas
must be managed through appropriate replica-consistency protocols to
ensure that the object copies remain mutually consistent. In a subsequent
section we will describe how such protocols can be integrated within action
based systems to provide replication transparency.
We assume some primitive features from a heterogeneous distributed
system:
(i) The state of any object can have a context independent representation
(i.e., free of references to a specific address-space). This implies that
objects can be de-activated for storage or transmission over a network.
(ii) Executable versions of the methods of an object are available on all the
nodes of interest. This implies that objects can be moved throughout
the network simply by transmitting their states.
(iii) Machine-independent representations of data can be obtained for
storage or transmission. This requirement is related to, but distinct
from (i) in that this property enables interpretation of the passive state
of an object in an heterogeneous environment.
Several prototype object-oriented systems have been built, often
emphasising different facets of the overall functionality. For example,
systems such as Argus [14], Arjuna [11, 23, 29], SOS [28] and Guide [4] have
emphasised fault-tolerance and distribution aspects, languages such as PS-Algol
[3], Galileo [2] and E [25] have contributed to our understanding of
persistence as a language feature, while efforts such as [12] have contributed
to the understanding of the design of object stores and their relationship to
database systems. We build on these efforts and describe the necessary
features of a modular distributed programming system supporting persistent
objects.
3 System Structure
3.1 Computation Model and System Modules
With the above discussion in mind, we will first present a simple client-server
based model for accessing and manipulating persistent objects and then
identify the main system modules necessary for supporting the model. As
stated earlier, we will consider an application program initiated on a single
node to be the root of a computation; distributed execution is achieved by
invoking operations on objects which may be remote from the invoker. We
assume that for each persistent object there is at least one node (say a) which,
if functioning, is capable of running an object server which can execute the
operations of that object (in effect, this would require that a has access to the
executable binary of the code for the object's methods as well as the persistent
state of the object stored on some object store). Before a client can invoke an
operation on an object, it must first be connected or bound to the object server
managing that object. It will be the responsibility of a node, such as a, to
provide such a connection service to clients. If the object in question is in a
passive state, then a is also responsible for activating the object before
connecting the requesting client to the server. In order to get a connection, an
application program must be able to obtain location information about the
object (such as the name of the node where the server for the object can be
made available). We assume that each persistent object possesses a unique,
system given identifier (UID). In our model an application program obtains
the location information in two stages: (i) by first presenting the application
level name of the object (a string) to a globally accessible naming service;
assuming the object has been registered with the naming service, the naming
service maps this string to the UID of the object; (ii) the application program
then presents the UID of the object to a globally accessible binding service to
obtain the location information. Once an application program (client) has
obtained the location information about an object it can request the relevant
node to establish a connection (binding) to the server managing that object.
The typical structure of an application level program is shown below:
<create bindings>; <invoke operations from within atomic actions>; <break bindings>
In our model, bindings are not stable (do not survive the crash of the
client or server). Bindings to servers are created as objects enter the scope in
the application program. If some bound server subsequently crashes then the
corresponding binding is broken and not repaired within the lifetime of the
program (even if the server node is functioning again); all the surviving
bindings are explicitly broken as objects go out of the scope of the application
program. An activated object which is no longer in use - because it is not
within the scope of any client application - will not have any clients bound to
its server; this object can be de-activated simply by destroying the association
between the object and the server process, and discarding the volatile image
of the object (recall that the object will always have its latest committed state
stored in some stable object store).
The disk representation of an object in the object store may differ from its
volatile store representation (e.g., pointers may be represented as offsets or
UIDs). Our model assumes that an object is responsible for providing the
relevant state transformation operations that enable its state to be stored and
retrieved from the object store. The server of an activated object can then use
these operations during abort or commit processing. Further, we assume that
each object is responsible for performing appropriate concurrency control to
ensure serialisability of atomic actions. In effect this means that each object
will have a concurrency control object associated with it. In the case of
locking, each method of an object will have an operation for acquiring, if
necessary, a (read or write) lock from the associated 'lock manager' object
before accessing the object's state; the locks are released when the
commit/abort operations are executed.
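As a simple illustration of this style of per-object concurrency control, consider the following C++ sketch; the LockManager interface and the BankAccount class are hypothetical and merely indicate where a method would acquire a read or write lock before touching the instance variables. A real lock manager would detect read/write conflicts and release locks only during commit/abort processing.

enum class LockMode { Read, Write };

// Trivial stand-in for the per-object 'lock manager'; a real one would detect
// conflicts and block or refuse, and would release locks only at commit/abort.
class LockManager {
public:
    bool setlock(LockMode) { ++held_; return true; }
    void releaseAll() { held_ = 0; }
private:
    int held_ = 0;
};

class BankAccount {
public:
    bool credit(long amount) {
        if (!locks_.setlock(LockMode::Write)) return false;  // caller would abort the action
        balance_ += amount;
        return true;
    }
    bool inspect(long& amount) {
        if (!locks_.setlock(LockMode::Read)) return false;
        amount = balance_;
        return true;
    }
private:
    LockManager locks_;   // the lock manager object associated with this object
    long balance_ = 0;
};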
We can now identify the main modules of a distributed programming
system, and the services they provide for supporting persistent objects.
. Atomic Action module: provides atomic action support to application
programs in form of operations for starting, committing and aborting
atomic actions;
. RPC module: provides facilities to clients for connecting (disconnecting)
to object servers and invoking operations on objects;
. Naming module: provides a mapping from user-given names of objects
to UIDs;
. Binding module: provides a mapping from UIDs to location information
such as the identity of the host where the server for the object can be
made available;
. Persistent Object Support module: provides object servers and access to
stable storage for objects.
The relationship amongst these modules is depicted in Figure 1. Every
node in the system will provide RPC and Atomic Action modules. Any node
capable of providing object servers and/or (stable) object storage will in
addition contain a Persistent Object Support module. A node containing an
object store can provide object storage services via its Persistent Object
Support module. Nodes without stable storage may access these services via
their local RPC module. Naming and Binding modules are not necessary on
every node since their services can also be utilised through the services
provided by the RPC module.
Figure 1: Components of a Persistent Object System (applications run above the Object and Action module, which uses the Naming module, the Binder, the RPC module and the Persistent Object Support module on top of the operating system; the upper layers admit a portable implementation, while the RPC and Persistent Object Support modules are system dependent)
The above system structure also enables application programs to be made
portable. An application program directly uses the services provided by the
Atomic Action module which is responsible for controlling access to the rest
of the modules. If all the persistent objects that an application references are
accessed via the Atomic Action service interface, then the portability of the
application depends only on the portability of the Atomic Action module
implementation. The figure suggests that Atomic Action, Naming and
Binding services can also be implemented in a system-independent, portable
way. RPC and Persistent Object Support modules are necessarily system
dependent at some level as they rely directly on operating system services. It
is possible to make naming and binding services portable by structuring them
as application level programs which make use of the Atomic Action module
in a manner suggested above. The Atomic Action module itself can be made
portable provided the services it requires from the RPC and Persistent Object
Support modules are such that they can be easily mapped onto those already
provided by the underlying system (for example, [6] describes how a uniform
RPC system can be built by making use of existing RPC services). An
application may well make use of the host operating system services directly
(e.g., window management), in which case it can lose its portability attribute
(see Figure 1). Not surprisingly, the only way to regain portability is for the
application to use portable sub-systems for all services (e.g., use the X
Window System [26] for portable graphics services).
In the following discussion, we will initially make two simplifying
assumptions: (i) an object can be activated only at the host node of the object
store (that is, a node without an object store will not be able to provide object
servers); and (ii) objects are not replicated. These restrictions will be removed
subsequently.
3.2 Atomic Action Module
This module can be designed in two ways: (a) as a module providing
language independent primitive operations, such as begin-action, end-action
and abort-action, which can be used by arbitrary application programs; or (b)
as an object-oriented, language specific run-time environment for atomic
actions. The main advantage of the latter approach is that the ensuing class
hierarchy provides scope for application specific enhancements, such as class-specific
concurrency control, which are difficult to provide in the former
approach. Although the choice is not central to the ideas being put forward
here, we discuss the second approach (mainly because we have experience of
building such an environment for C++ [30], as described in section 4).
To explain the functionality required from the Atomic Action module and
the way it utilises the services of other modules, we will consider a simple
C++ program (See Figure 2). In this simple example, an application program
updates a remote persistent object, called thisone, which is of class
Example, with the option of recovering the state of the object if some
condition is not met. The application program creates an instance of an
AtomicAction , called A, begins the action, operates on the object, then
commits or aborts the action. We assume that this program will be first
processed by a language specific stub generator (e.g., [23] for C++) whose
function is to processes a user's application program to generate the
necessary client-server code for accessing remote objects via RPCs. A detailed
explanation of the steps follows:
1. Example B ("thisone"); // bind to the server
2. AtomicAction A;
3. A.Begin(); // start of atomic action A
4. B.op(); // invocation of operation op on object B
5. if (...) A.Abort(); // abortion of atomic action A
6. else A.End(); // commitment of atomic action A
Figure 2: An atomic action example
Line 1: An instance, B, of (client stub) class Example is created by
executing the constructor for that object. The string "thisone" is used
at object creation time to indicate the name of the persistent object the
program wants to access (the identifier B acts as a local name for the
persistent object thisone). As B is created, the following functions are
performed (more precisely, these actions are performed by the client
stub generated for B):
(i) an operation of the Naming service is invoked, passing the string
"thisone" to obtain the UID of the object;
(ii) an operation of the Binding service is then invoked to obtain the
name of the host (say a) where the server for the object can be
made available; and finally,
(iii) an operation of the local RPC module is invoked to create a
binding with the server associated with the object named by UID
at node a; the binding is in the form of a communication identifier
(CID), a port of the server, which is suitable for RPC
communications. The details given in the descriptions of RPC and
Persistent Object Support modules in the following subsections
will make it clear how such a binding can be established.
Line 2: An instance, A , of class AtomicAction is created.
Line 3: A's begin operation is invoked to start the atomic action.
Line 4: Operation op of B is invoked via the RPC module. As objects
are responsible for controlling concurrency, the method of this
operation will take any necessary steps, for example, acquiring an
appropriate lock.
Line 5: The action may be aborted under program control, undoing all the
changes to B .
Line 6: The end operation is responsible for committing the atomic action
(typically using the two-phase commit protocol). This is done by
invoking the prepare operation of the server of B (during phase one)
to enable B to be made stable. If the prepare operation succeeds, the
commit operation of the server is invoked for making the new state of
the object stable, otherwise the abort operation is invoked causing the
action to abort.
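For illustration, the generated client stub for class Example might look roughly as follows; the naming_lookup, binder_locate and rpc_* functions are assumed stand-ins for the Naming, Binding and RPC module operations described in the following subsections, and the opcode encoding and placeholder bodies are invented for the sketch. The destructor's role in breaking the binding is described in the paragraph below.

#include <string>

// Assumed front-ends to the Naming, Binding and RPC modules (illustrative).
struct Uid { unsigned long value; };
struct Cid { int port; };
Uid  naming_lookup(const std::string&) { return Uid{42}; }            // placeholder
std::string binder_locate(const Uid&) { return "hostA"; }             // placeholder
Cid  rpc_initiate(const Uid&, const std::string&) { return Cid{7}; }  // placeholder
void rpc_call(const Cid&, int) {}                                     // placeholder
void rpc_terminate(const Cid&) {}                                     // placeholder

// Sketch of a generated client stub for class Example.
class Example {
public:
    explicit Example(const std::string& name)
        : uid_(naming_lookup(name)),                        // step (i):  name -> UID
          cid_(rpc_initiate(uid_, binder_locate(uid_))) {}  // steps (ii) and (iii)

    void op() { rpc_call(cid_, /* assumed opcode for op */ 1); }   // forwarded to the server

    ~Example() { rpc_terminate(cid_); }   // break the binding when B goes out of scope

private:
    Uid uid_;
    Cid cid_;
};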
When B goes out of scope (this program fragment is not shown in the
figure), it is destroyed by executing its destructor. As a part of this, the client-side
destructor (the stub destructor for B ) breaks the binding with the object
server at the remote node (a specific RPC module operation will be required
for this purpose). The functionality required from the RPC, Persistent Object
Support, Naming and Binding modules can now be explained in more detail.
3.3 Remote Procedure Call Module
The RPC module provides distinct client and server interfaces with the
following operations: initiate , terminate (which are operations for
establishment and disestablishment of bindings with servers) and call (the
operation that does the RPC); all three of these operations are provided by the
client interface, with get_request and send_reply being provided by
the server interface. The operations provided by the RPC module are not
generally used directly by the application program, but by the generated
stubs for the client and server which are produced by a stub generator as
mentioned before. Clients and servers have communication identifiers, CIDs
(such as sockets in Unix), for sending and receiving messages. The RPC
module of each node has a connection manager process that is responsible for
creating and terminating bindings to local servers. The implementation of
the initiate(UID, hostname, ...) operation involves the connection
manager process co-operating with a local object store process (see the next
subsection) to return the CID of the object server to the caller.
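A possible C++ rendering of these interfaces is sketched below; the buffer, status and identifier types are assumptions, and only the operation names (initiate, terminate, call, get_request, send_reply) are taken from the text.

#include <string>
#include <vector>

using Buffer = std::vector<unsigned char>;
struct Uid { unsigned long value; };
struct Cid { int port; };
enum class RpcStatus { Done, Failed };   // Failed: no reply received

class RpcClient {
public:
    RpcStatus initiate(const Uid& uid, const std::string& hostname, Cid& cid);
    RpcStatus call(const Cid& cid, int opcode, const Buffer& args, Buffer& results);
    RpcStatus terminate(const Cid& cid);
};

class RpcServer {
public:
    // Blocks until the next invocation arrives on this server's CID.
    RpcStatus get_request(int& opcode, Buffer& args, Cid& reply_to);
    RpcStatus send_reply(const Cid& reply_to, const Buffer& results);
};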
The client interface operations have the following semantics: a normal
termination will indicate that a reply message containing the results of the
execution has been received from the server; an exceptional return will
indicate that no such message was received, and the operation may or may
not have been executed (normally this will occur because of the crash of the
server and the client's response will be to abort the current atomic action, if
any). The program structures shown in the previous sub-sections show that
binding creation (destruction) can be performed from outside of application
level atomic actions. So it is instructive to enquire what would happen in the
presence of client or server failure before (after) an application level action
has started (finished). The simple case is the crash of a server node, which has
the automatic effect of breaking the connection with all of its clients: if a client
subsequently enters an atomic action and invokes the server, the invocation
will return exceptionally and the action will be aborted; if the client is in the
process of breaking the bindings then this has occurred already. More
difficult is the case of a client crash. Suppose the client crashes immediately
after executing the statement in line 2 (figure 2). Then explicit steps must be
taken to break the 'orphaned' binding: the server node must detect this crash
and break the binding. The functionality of a connection manager process can
be embellished to include periodic checking of connections with client nodes
[22].
Every active object is associated with some object server; this server uses
get_request and send_reply to service operation invocations. One server
may manage several objects (i.e., the correlation between server processes
and objects may not be one-to-one). Any internal details of the server such as
thread management for handling invocations are not relevant to this
discussion.
3.4 Persistent Object Support Module
The Persistent Object Support module, with support from the RPC module,
hides the (potential) remoteness of (stable) object storage systems from the
applications; it also hides system specific details of stable storage, and
provides a uniform service interface for persistent objects. This module is
composed of two components: (i) an object-manager component, responsible
for the provision of object servers; and (ii) an object-store component that acts
as a front end to the local object storage sub-system. The object store
representation (disk representation) of an object may differ from its volatile
store representation (e.g., pointers may be represented as offsets or UIDs). We
assume that the disk representation of objects are instances of the class
ObjectState. Instances of class ObjectState are machine independent
representations of the states of passive objects, convenient for transmission
between volatile store and object store, and also via messages from node to
node. A persistent object is assumed to be capable of converting its state into
an ObjectState instance and converting a previously packed
ObjectState instance into its instance variables (by using operations
save_state and restore_state respectively). Figure 3 shows the state
transformations of a persistent object along with the operations that produce
the transformations (operations read_state and write_state are
provided by the object-store component).
The primary function of an object-store component is to store and retrieve
instances of the class ObjectState : the read_state operation returns the
instance of ObjectState named by a UID and the write_state operation
stores an instance of ObjectState in the object store under the given UID.
In addition we assume two operations, create and delete for creating and
deleting objects.
Figure 3: Object States (saving state: the user object in volatile storage is packed into an ObjectState in memory by save_state() and written to stable storage by write_state(); loading stable state: the ObjectState is fetched from the object store by read_state() and unpacked into the user object by restore_state())
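The following sketch illustrates the intended division of labour: a hypothetical persistent object packs and unpacks its instance variables into an ObjectState using save_state and restore_state. The byte-buffer representation of ObjectState and the Counter class are assumptions made for the example, not a particular system's API.

#include <cstdint>
#include <vector>

// ObjectState modelled as a flat, machine-independent byte buffer; a concrete
// system would also record a UID and type information.
class ObjectState {
public:
    void packInt(std::int32_t x) {
        for (int i = 0; i < 4; ++i)                 // fixed byte order
            bytes_.push_back(static_cast<unsigned char>(
                (static_cast<std::uint32_t>(x) >> (8 * i)) & 0xffu));
    }
    std::int32_t unpackInt() {
        std::uint32_t x = 0;
        for (int i = 0; i < 4; ++i)
            x |= static_cast<std::uint32_t>(bytes_[pos_++]) << (8 * i);
        return static_cast<std::int32_t>(x);
    }
private:
    std::vector<unsigned char> bytes_;
    std::size_t pos_ = 0;
};

class Counter {
public:
    void increment() { value_ += 1; }

    // Convert the instance variables into a context-independent ObjectState
    // (used by the server during commit processing).
    bool save_state(ObjectState& os) const { os.packInt(value_); return true; }

    // Re-initialise the instance variables from a previously packed state
    // (used when the object is activated from the object store).
    bool restore_state(ObjectState& os) { value_ = os.unpackInt(); return true; }

private:
    std::int32_t value_ = 0;
};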
A typical implementation of the Persistent Object Support module would
be as follows. The storage and retrieval of objects is managed by a store demon
belonging to the object-store component. The sequence of events discussed
previously with reference to the program fragment in Figure 2 can now be
explained in terms of the activities at the Persistent Object Support module.
Assume that the client program is executing at node N 1 and object thisone
is at the object store of node N 2 (see Figure 4). The client process executing
the program fragment will contain the stub for object B. Thus, at line 2, the
client will execute the generated stub for B. This stub for B is responsible for
accessing the naming and binding services as discussed earlier to obtain the
location information for the object, and then to invoke the initiate
operation of the local RPC module in order to send a connection request to
the connection manager at N 2 . Upon receiving such a request this manager
invokes the activate(UID) operation provided by the object-manager
component. The object-manager is responsible for maintaining mappings
between UIDs of activated objects to corresponding servers. Assume first that
the object is currently active; then the object-manager will return, via the
connection manager, the CID of the server to the client at N 1 , thereby
terminating the invocation of initiate at N 1 . Assume now that the object is
passive; then the object-manager will make use of some node specific
activation policy based on which it will either create a new server for object B
or instruct an existing server to activate B. That server uses the store demon
for retrieving the ObjectState instance (by UID), loads the methods of B
into the server, and invokes the restore_state operation of B. The server
acquires a CID and returns it to the client thus terminating the invocation of
initiate .
Figure 4: Accessing an Object (the client at N 1 contacts the connection manager at N 2 , which arranges for an object server; the server uses the store daemon to load the object)
We introduce three additional operations of object-store component that
are necessary for commit processing: write_shadow , commit_shadow , and
delete_shadow . When the prepare operation for commit processing is
received by the server, the volatile state of the object B will be converted into
an instance of ObjectState (by using the save_state operation
provided by B) and the object-store operation, write_shadow , will be
invoked to create a (possibly temporary) stable version. If the server
subsequently receives a commit invocation, it executes the commit_shadow
operation of the object-store to make the temporary version the new stable
state of the object. The response of the server to an abort operation is to
execute the delete_shadow operation and to discard the volatile copy of the
object.
To summarise: the Persistent Object Support module of a node provides
eight operations: a single operation activate to the local connection
manager process, and seven operations to local object server processes
(create, delete, read_state, write_state, write_shadow,
delete_shadow, commit_shadow); an object server itself provides
operations prepare, commit and abort for commit/abort processing of
the persistent object(s) it is managing. The operations a persistent object has
to provide in order to make itself persistent and recoverable are save_state
and restore_state.
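The commit-time interplay between an object server and the object-store component could be sketched as follows; the class names and the placeholder bodies are assumptions, with only the operation names (prepare, commit, abort, write_shadow, commit_shadow, delete_shadow, save_state) taken from the text.

struct Uid { unsigned long value; };
class ObjectState { /* packed, machine-independent state (as sketched earlier) */ };

class ObjectStore {                       // front end to stable storage
public:
    bool write_shadow(const Uid&, const ObjectState&) { return true; }  // placeholder
    bool commit_shadow(const Uid&)  { return true; }                    // placeholder
    bool delete_shadow(const Uid&)  { return true; }                    // placeholder
};

class ManagedObject {                     // the activated persistent object
public:
    bool save_state(ObjectState&) const { return true; }                // placeholder
    Uid  uid() const { return Uid{1}; }
};

class ObjectServer {
public:
    ObjectServer(ManagedObject& obj, ObjectStore& store) : obj_(obj), store_(store) {}

    // Phase one: write the new state as a tentative (shadow) stable version.
    bool prepare() {
        ObjectState os;
        return obj_.save_state(os) && store_.write_shadow(obj_.uid(), os);
    }
    // Phase two: make the shadow the committed state of the object.
    bool commit() { return store_.commit_shadow(obj_.uid()); }
    // Abort: discard the shadow (and the volatile copy of the object).
    bool abort()  { return store_.delete_shadow(obj_.uid()); }

private:
    ManagedObject& obj_;
    ObjectStore&   store_;
};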
It should be noted that an atomic action itself needs to record some
recovery data on stable storage (e.g., an intentions list) for committing or
aborting the action in the presence of failures. In the example considered, the
intention list will be split between the client and server; however these
details, which have been discussed extensively in the literature, have been
glossed over here. Support of nested and concurrent atomic actions further
complicates the details of managing the commit records, but these aspects are
also not central to the present discussion.
3.5 Naming and Binding Modules
The Naming and Binding services together support the location of objects by
name and the management of naming contexts. Such services are often
designed as a part of a single 'name server' which becomes responsible for
mapping user supplied names of objects to their locations (e.g., [21]).
However, these two services provide logically distinct functions related to
applications. Whereas the object name to UID mappings maintained by the
Naming module are expected to be static, the UID to location mappings
maintained by the Binding module can change dynamically in a system
supporting migration and replication.
The user-supplied names associated with objects are a convenience for the
application programmer, not a fundamental part of the system's operation;
within the system, an object is identified by its unique identifier, UID. The
mapping from names of persistent objects to their corresponding UIDs is
performed by the Naming service operation, lookup, which returns a UID.
The Naming service itself can be implemented out of persistent objects by
making use of the services provided by the Atomic Action module. This
apparent recursion in design is easily broken by using well-known CIDs for
accessing the Naming services. In addition to the lookup operation, the
Naming service should also provide add and delete operations for
inserting and removing string names in a given naming context. A naming
service can always be designed to exploit an existing service (such as the
Network Information Service [31]) rather than depending solely on the
Atomic Action and other related modules for persistent object storage.
The Binding service, which maps UIDs to hosts, can also be designed as
an application of the Atomic Action services. In addition to the locate
operation, add and delete operations must also be made available.
Enhancements to the functionality provided by the Binding service are
required to support migration and replication of objects, as we discuss below.
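In outline, the two services can be rendered as two small, independent
interfaces. The sketch below uses assumed C++ signatures: lookup, add and
locate follow the operation names in the text, delete is rendered as remove
because delete is a reserved word in C++, and the container-based bodies are
purely illustrative.

#include <map>
#include <optional>
#include <string>
#include <vector>

using UID = long;

// Naming: static mapping from user-supplied names to UIDs (one naming context).
class NamingService {
public:
    std::optional<UID> lookup(const std::string& name) const {
        auto it = names_.find(name);
        if (it == names_.end()) return std::nullopt;
        return it->second;
    }
    void add(const std::string& name, UID uid) { names_[name] = uid; }
    void remove(const std::string& name) { names_.erase(name); }   // 'delete' in the text
private:
    std::map<std::string, UID> names_;
};

// Binding: dynamic mapping from UIDs to the hosts holding the object; this is
// the information that migration and replication must keep up to date.
class BindingService {
public:
    std::vector<std::string> locate(UID uid) const {
        auto it = hosts_.find(uid);
        return it == hosts_.end() ? std::vector<std::string>{} : it->second;
    }
    void add(UID uid, const std::string& host) { hosts_[uid].push_back(host); }
    void remove(UID uid) { hosts_.erase(uid); }                    // 'delete' in the text
private:
    std::map<UID, std::vector<std::string>> hosts_;
};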
3.6 Provision of Migration and Replication Transparencies
The architecture discussed so far possesses the functionality for supporting
all the transparencies described earlier except replication and migration. We
discuss now the enhancements necessary to support these two
transparencies. First of all we observe that the Naming service need not be
affected, since it only maintains name to UID mappings for objects. The
Binding service will be affected however, as for example a given object will
be required to have its state stored on several object stores to support
replication. This and other aspects are discussed below, starting with
migration.
A simple but quite effective form of object migration facility can be made
available by supporting migration during activation of an object: by
permitting an object to be activated away from its object store node. This can
be achieved by allowing the operations of a Persistent Object Support module
to be invocable by remote object servers (and not just the local ones), thereby
permitting an object server process to obtain the state (and methods) of the
object from a remote object store. Thus a node without an object store can also
now run object servers; such a node will contain a Persistent Object Support
module, but without its object-store component. For the sake of simplicity, we
will assume that the state and methods of an object are stored together in a
single object store (this restriction can be removed easily without affecting the
main ideas to be discussed below). One possible way of mechanising remote
activation is discussed now. We assume that the object-manager component
of a Persistent Object Support module no longer maintains the mappings
from UIDs to servers for activated objects; rather, this information is made
part of the Binding service. Thus, for a passive object, the locate(UID)
function of the Binding service will return to the client the hostname of the
object store node, together with a list of nodes where object servers can be
made available, and for an active object, the pair (hostname, CID) indicating
the CID of the object server managing the object at the node 'hostname'. A
passive object will be activated as follows: from the list containing the names
of potential server nodes and the object store node returned by the Binder,
the client uses some criterion (e.g. the nearest node) for selecting a desirable
server node for activation (say N i ), and then directs its initiate request to
the connection manager process of N i , giving it the name of the object store
node (say N k ); at N i an object server process gets the task of activating the
object; this server fetches the necessary methods and state from N k , acquires
a CID and returns the CID to the client; the initiate operation terminates
after the client has registered this CID with the Binding service. Registration
with the Binder is necessary to ensure that any other client accessing this
object also gets bound to the same server. Since we are assuming that an
object is responsible for enforcing its own concurrency control policy, this to
a large extent solves the problem of migrating concurrency control
information with the object, since the "concurrency controller" of the object
will move with the object. The scheme discussed here can be extended to
permit movement of objects in between invocations, provided a client can
locate the object that has since moved. A simple way of making migration
information available to other clients is to leave a 'forwarding address' at the
old site so that any invocations directed there can be automatically forwarded
(see [8] for a more detailed discussion).
We turn our attention to the topic of replication transparency. We have so
far assumed that the persistent state of an object resides on a single object
store of a node; if that node is down, then the object becomes unavailable. The
availability of an object may be increased by replicating it on several nodes
and thus storing its state in more than one object store. Such object replicas
must then be managed through appropriate replica-consistency protocols to
ensure that the object copies remain mutually consistent. We will consider the
case of strong consistency which requires that all replicas that are regarded as
available be mutually consistent (so the persistent states of all available
replicas are required to be identical). We discuss below three aspects of
replica consistency management; the first and the third are concerned mainly
with the management of information about object replicas maintained by the
Binding service, whereas the second is concerned mainly with the
management of replicas once they have been activated.
(1) Object binding: It is necessary to ensure that, when an application
program presents the name (UID) of an object which is currently
passive to the Binding service, the service returns a list containing
information about only those replicas of the object that are (a) mutually
consistent, and also (b) contain the latest persistent state of the object.
From this information, one, more, or all replicas, depending upon the
replication policy in use (see below), can be activated. If the object has
been activated already, then the Binding service must permit binding
to all of the functioning servers that are managing replicas of the
activated object. If we assume a dynamic system permitting changes to
the degree of replication for an object (e.g., a new replica for an object
can be added to the system), then it is important to ensure that such
changes are reflected in the binding service without causing
inconsistencies to the current clients of the object.
(2) Object activation and access: A passive object must be activated
according to a given replication policy. We identify three basic object
replication policies. (i) Active replication: In active replication, more than
one copy of a passive object is activated on distinct nodes and all
activated copies perform processing [27]. (ii) Co-ordinator-cohort passive
replication: Here, as before, several copies of an object are activated;
however only one replica, the co-ordinator , carries out processing [7].
The co-ordinator regularly checkpoints its state to the remaining
replicas, the cohorts. If the failure of the co-ordinator is detected, then
the cohorts elect one of themselves as the new co-ordinator to continue
processing. (iii) Single copy passive replication : In contrast to the previous
two schemes, only a single copy is activated; the activated copy
regularly checkpoints its state to the object stores where states are
stored [5]. This checkpointing can be performed as a part of the commit
processing of the atomic action, so if the activated copy fails, the
application must abort the affected atomic action (restarting the action
will result in a new copy being activated).
Activated copies of replicas (cases (i) and (ii)) must be treated as a
single group by the application in a manner which preserves mutual
consistency. Suppose the replication policy is active replication.
Consider the following scenario (see Figure 5), where group G A (with
members A 1 and A 2 ) is invoking a service operation on group G B (a single
object B), and B fails during delivery of the reply to G A . Suppose that
the reply message is received by A 1 but not by A 2 , in which case the
subsequent action taken by A 1 and A 2 can diverge. The problem is
caused by the fact that the failure of B has been 'seen' by A 2 and not A 1 .
To avoid such problems, communication between replica groups can
require reliable distribution and ordering guarantees not associated
with non-replicated systems: reliability ensures that all correctly
functioning members of a group receive messages intended for that
group and ordering ensures that these messages are received in an
identical order at each functioning member [27].
Figure 5: Operation Invocation for Replicated Objects
(3) Commit processing : Once an application has finished using an object, it
is necessary to ensure that the new states of mutually consistent object
replicas get recorded to their object stores; this takes place during the
commit time of the application's atomic action. At the same time, it is
also necessary to ensure that the information about object replicas
maintained by the Binding service remains accurate. Consider an
application that modifies some object, say A, and active replication is in
use; suppose at the start of the application two replicas for A, A 1 and A 2 ,
are available, but that the crash of a node makes one of them (say A 2 )
unavailable before it has been modified; then at commit time,
information maintained about A within the Binding service should be
modified to 'exclude' A 2 from the list of available replicas of A
(otherwise subsequent applications may end up using mutually
inconsistent copies of A).
We conclude this subsection by observing that the introduction of
migration and replication transparencies enforces consistency requirements
on the Binding service that can be best met by composing the service out of
persistent objects whose operations are structured as atomic actions (see [16]
for more discussion).
4 Case Study: an Examination of Arjuna
We have arrived at the system structuring ideas presented in the previous
section based on our experience of designing and implementing a distributed
programming system called Arjuna [11, 23, 29]. Arjuna is an object-oriented
programming system implemented in C++ that provides a set of tools for the
construction of fault-tolerant distributed applications constructed according
to the model discussed in section 2. Arjuna provides nested atomic actions for
structuring application programs. Atomic actions control sequences of
operations upon (local and remote) objects, which are instances of C++
classes. Operations upon remote objects are invoked through the use of
remote procedure calls (RPCs). At the time of writing (December 1992), the
prototype system has been operational for more than two years and has
provided us with valuable insight into the design and development of such
systems. The architecture presented in section 3 can be regarded as an
idealised version of Arjuna.
4.1 Arjuna on Systems with Support for Networking Only
This section of the paper first describes the Arjuna system as designed and
implemented to run on Unix workstations with just networking support for
distributed computing (Unix sockets for message passing over the network);
so all of the five modules shown in figure 1 (Atomic Action, Naming,
Binding, Persistent Object Support and RPC modules) had to be
implemented. In our discussion, we will be focusing on the approach taken to
implementing the Atomic Action module.
The Atomic Action module has been implemented using a number of C++
classes which are organised in a class hierarchy that will be familiar to the
developers of "traditional" (single node) centralised object-oriented systems.
At the application level, objects are the only visible entities; the client and
server processes that do the actual work are hidden. In Arjuna, server
processes are created dynamically as RPCs are made to objects; these servers
are created using the facilities provided by the underlying RPC subsystem,
Rajdoot, also built by us [22]. The current implementation of Arjuna makes
use of the Unix file system for long term storage of objects, with a class
ObjectStore providing an object-oriented interface to the file system. The
design and implementation of the Arjuna object store is discussed elsewhere
[11], along with the object naming (UID) scheme. This implementation
strategy for the object store has been acceptable, but its performance is
understandably poor. The naming and binding services themselves have
been implemented out of Arjuna persistent objects.
Figure 6: The Arjuna Class Hierarchy
The principal classes which make up the class hierarchy of Arjuna Atomic
Action module are depicted in Figure 6. To make use of atomic actions in an
application, instances of the class, AtomicAction must be declared by the
programmer in the application as illustrated in Figure 2; the operations this
class provides (Begin , Abort , End ) can then be used to structure atomic
actions (including nested actions). The only objects controlled by the resulting
atomic actions are those objects which are either instances of Arjuna classes or
are user-defined classes derived from LockManager and hence are members
of the hierarchy shown in Figure 6. Most Arjuna classes are derived from the
base class StateManager , which provides primitive facilities necessary for
managing persistent and recoverable objects. These facilities include support
for the activation and de-activation of objects, and state-based object
recovery. Thus, instances of the class StateManager are the principal users
of the object store service. The class LockManager uses the facilities of
StateManager and provides the concurrency control (two-phase locking in
the current implementation) required for implementing the serialisability
property of atomic actions. The implementation of atomic action facilities for
recovery, persistence management and concurrency control is supported by a
collection of object classes derived from the class AbstractRecord which is
in turn derived from StateManager . For example, instances of
LockRecord and RecoveryRecord record recovery information for Lock
and user-defined objects respectively. The AtomicAction class manages
instances of these classes (using an instance of the class RecordList which
corresponds to the intentions list mentioned before) and is responsible for
performing aborts and commits.
Consider a simple example. Assume that O is a user-defined persistent
object. An application containing an atomic action A accesses this object by
invoking an operation op1 of O which involves state changes to O . The
serialisability property requires that a write lock must be acquired on O
before it is modified; thus the body of op1 should contain a call to the
appropriate operation of the concurrency controller (See Figure 7):
// body of op1
setlock (new Lock(WRITE));
// actual state change operations follow
Figure 7: The use of Locks in Implementing Operations
The operation setlock , provided by the LockManager class, performs
the following functions in this case:
(i) check write lock compatibility with the currently held locks, and if
allowed,
(ii) use StateManager operations for creating a RecoveryRecord
instance for O (the Lock is a WRITE lock so the state of the object must
be retained before modification) and insert it into the RecordList of A; and
(iii) create and insert a LockRecord instance in the RecordList of A.
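A compressed rendering of these three steps is sketched below (see also
Figure 7). The names setlock, Lock, LockRecord, RecoveryRecord and RecordList
follow the text; the remaining types and the single-writer compatibility test
are simplified stand-ins invented for the example.

#include <memory>
#include <vector>

enum class LockMode { READ, WRITE };
struct Lock { LockMode mode; };

struct AbstractRecord {
    virtual void abort() = 0;               // undo whatever this record represents
    virtual ~AbstractRecord() = default;
};
struct RecoveryRecord : AbstractRecord {    // remembers the prior state of the object
    void abort() override { /* restore the saved state (not shown) */ }
};
struct LockRecord : AbstractRecord {        // remembers that a lock was granted
    void abort() override { /* release the lock (not shown) */ }
};

struct RecordList { std::vector<std::unique_ptr<AbstractRecord>> records; };

class LockManagedObject {                   // stands in for a LockManager-derived class
public:
    // Returns true iff the lock was granted.
    bool setlock(Lock request, RecordList& action_records) {
        // (i) compatibility check: here, simply refuse a second writer.
        if (request.mode == LockMode::WRITE && write_locked_) return false;
        if (request.mode == LockMode::WRITE) {
            // (ii) a WRITE lock means the state may change, so retain it first.
            action_records.records.push_back(std::make_unique<RecoveryRecord>());
            write_locked_ = true;
        }
        // (iii) record the lock itself so that an abort can release it.
        action_records.records.push_back(std::make_unique<LockRecord>());
        return true;
    }
private:
    bool write_locked_ = false;
};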
Suppose that action A is aborted sometime after the lock has been
acquired. Then the abort operation of AtomicAction will process the
RecordList instance associated with A by invoking the abort operation on
the various records. The implementation of this operation by the
LockRecord class will release the WRITE lock while that of
RecoveryRecord will restore the prior state of O .
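Abort processing then amounts to walking the action's RecordList and invoking
the abort operation on each record. The following sketch again uses simplified
stand-in types rather than the real Arjuna classes; in particular, undoing
records in reverse order of insertion is a choice made here for the example.

#include <memory>
#include <vector>

struct AbstractRecord {
    virtual void abort() = 0;
    virtual ~AbstractRecord() = default;
};
struct LockRecord : AbstractRecord { void abort() override { /* release_lock() */ } };
struct RecoveryRecord : AbstractRecord { void abort() override { /* restore prior state */ } };

// The RecordList managed by an AtomicAction corresponds to its intentions list.
class AtomicAction {
public:
    void add(std::unique_ptr<AbstractRecord> r) { records_.push_back(std::move(r)); }
    void Abort() {
        // Process every record: LockRecords release locks, RecoveryRecords
        // restore prior object states, RpcCallRecords would send an "abort"
        // RPC to the remote server, and so on.
        for (auto it = records_.rbegin(); it != records_.rend(); ++it) (*it)->abort();
        records_.clear();
    }
private:
    std::vector<std::unique_ptr<AbstractRecord>> records_;
};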
The AbstractRecord based approach of managing object properties has
proved to be extremely useful in Arjuna. Several uses are summarised here.
RecoveryRecord supports state-based recovery, since its abort operation is
responsible for restoring the prior state of the object. However, its recovery
capability can be altered by refining the abort operation to take some
alternative course of action, such as executing a compensating function. This
is the principal means of implementing type-specific recovery for user-defined
objects in Arjuna. The class LockRecord is a good example of how
recoverable locking is supported for a Lock object: the abort operation of
LockRecord does not perform state restoration, but executes a
release_lock operation. Note that locks are, not surprisingly, also treated
as objects (instances of the class Lock), therefore they employ the same
techniques for making themselves recoverable as any other object. Similarly,
no special mechanism is required for aborting an action that has accessed
remote objects. In this case, instances of RpcCallRecord are inserted into
the RecordList instance of the atomic action as RPCs are made to the
objects. Abortion of an action then involves invoking the abort operation of
these RpcCallRecord instances which in turn send an "abort" RPC to the
servers.
In the previous section we described three object replication approaches;
out of these we have performed trial implementations of active and single
copy passive replication in Arjuna [15, 17]. Active replication is often the
preferred choice for supporting high availability of real-time services where
masking of replica failures with minimum time penalty is considered highly
desirable. Since every functioning member of a replica group performs
processing, active replication of an object requires that all the functioning
replicas of an object receive identical invocations in an identical order. Thus,
active replication requires multicast communication support satisfying
rigorous reliability and ordering requirements. Single copy passive
replication on the other hand can be implemented without recourse to any
complex multicast protocols (as only one replica carries out the computation
at any time); however, its performance in the presence of primary failures can
be poorer as it is necessary to abort the action and retry. We therefore believe
that a fault tolerant system should be capable of supporting a number of
replication schemes. The main elements of our design are summarised below.
(i) The Binding service (implemented as one or more Arjuna objects)
maintains a 'group view database (GVD)' which records the
information on available replicas of an object. The GVD itself can be
replicated using either of the techniques to be described below. This
database is accessed using atomic actions.
(ii) Passive Replication: To access an object (A), the application object first
contacts the GVD, which returns a list containing location information
on all the consistent replicas of A; a simple static ordering scheme is
used for primary selection. The application object uses the RPC module
operation initiate(.) for binding to the primary copy of A . If A
itself accesses replicated objects, then the same technique is used again.
At commit time, each primary object is responsible for updating its
secondaries: this is made possible in Arjuna because the state of Arjuna
objects can be transmitted over the network. If during the execution of
an action, a primary is found to become inaccessible (e.g., its node has
crashed), then the action is aborted; as a part of the abort procedure,
the GVD is accessed and the name of this primary is removed from the
list of available replicas. Since actions can be nested, abortion need not
be of the entire computation (the enclosing action can retry). When a
crashed node containing a replica is repaired, it can include its copy of
the object by running a join atomic action which updates the copy from
some other replica and then inserts its name in the GVD's list for that
object. In summary, the only major changes necessary to the non-replicated
version of Arjuna have been the creation and maintenance of the GVD, and
modifications to the abort and commit procedures as hinted above.
(iii) Active Replication: Activating an object now consists of activating all the
copies listed in the group view list returned by the GVD. As atomic
actions access replicated objects, a more accurate view of current group
membership for an object is formed (if a copy is detected to have
failed). At commit time, this current view is used for updating the
GVD. Thus, any failed replicas automatically get excluded. The
incorporation of active replication has meant the following two main
changes to the non-replicated version of Arjuna (in addition to the need for
the creation and maintenance of the GVD already discussed above):
. RPC module: the original unicast RPC has been replaced by a
reliable group RPC, which is capable of invoking all the
functioning copies of the object that were activated (in effect this
has meant replacing the original datagram based RPC
implementation by a reliable multicast protocol based one [15,
17]). In particular, the group RPC ensures that a replicated call
from one group to another appears to behave like a single, non
replicated call.
. Atomic Action module: the module is now responsible for
manipulating object group view information. This means that an
atomic action is required to maintain an 'exclude list' of replicas
detected to have failed; at commit time this list is used for
removing the names of these replicas from the group view list
maintained by the GVD.
In summary, our approach has been to provide the basic binding
information about object replicas via the GVD (an Arjuna object) which can
then be used for providing either active or passive replication. The passive
replication scheme has the advantage that it can be supported on top of any
'conventional' RPC system - this is important to a system like Arjuna which
has been designed to be capable of exploiting the functionality offered by the
underlying distributed system software.
The current design for Arjuna, while elegantly sorting out the functions of
the Atomic Action module into classes, fails to separate the interfaces to the
supporting environment in the manner of section 3. The class
StateManager combines operations relating to Persistent Object Support,
RPC, Naming and Binding. The management of recovery, persistence,
distribution and concurrency control is well-organised around the classes
discussed before, but the interfaces to the services are not so well organised.
The present RPC facility, while supporting the interface discussed, is also
responsible for the creation of object servers, a function which should be
performed by the Persistent Object Support module. The Naming and
Binding services have not been properly separated, their combined functions
currently being performed by a simple name server. Revisions to the system
to carry through the object-oriented design along the lines presented in
Section 3 of this paper are currently underway. These revisions however do
not represent a major overhaul of the system. Thus the system demonstrates
that distributed systems structured along the lines of Figure 1 can be built.
4.2 Arjuna on Other Systems
We will now describe how the Arjuna system described above has been
adapted to run on two quite different systems providing basic support for
distributed computing (e.g., RPC), enabling the Atomic Action module of
Arjuna to utilise some of the services of the host system in place of the
services of the modules built earlier. It is because our system has the modular
structure proposed here that we have been able to perform such ports.
Our first port has been on to the ANSAware distributed computing
platform. The ANSAware platform has been developed by the ANSA project
[1]; the platform provides RPC, object servers (known as capsules) and naming
and binding services via a subsystem known as the Trader , for networked
workstations (several operating systems are supported; we have so far used
only Unix). This porting has been a relatively straightforward exercise. To
start with, we have removed the RPC module used in the original Arjuna
(Rajdoot) and mapped the RPC operations (initiate , terminate and
call) onto those provided by ANSAware. This enables Arjuna applications
to run on top of ANSAware; this port automatically supports passive
replication. In the near future we will enhance this port to use the ANSAware
Trader for registering Arjuna naming and binding services. The ANSAware
system has recently been upgraded to support group invocations [20] for
active replication and object storage services [19]. We believe that these
services can also be used in place of the original Arjuna services used for
supporting active replication and object storage.
We have also performed experiments to ascertain whether Arjuna can be
made to run on integrated environments provided by distributed operating
systems [10]. The experimental configuration we have used consists of a
locally distributed multiprocessor system of twelve T800 transputers, each
with 2 Mbytes of memory, interconnected to form a two-dimensional grid
(see Figure 8). Each transputer runs a copy of Helios, which is a general-purpose
distributed operating system [24]. The Helios file server program
(hfs) running on one of the transputers provides access to a disk, which is
used as an object repository.
Figure 8: A multi-transputer system
The Helios operating system provides a number of facilities for client-server
programming. Helios treats every file, process, and device, including
processors, as an object, each of which can be named using Unix like path
names. Each object is represented by an Object-structure which contains
information such as the full pathname of the object, and the object type e.g.
file, process etc. The Helios Locate function allows an Object-structure to be
obtained for any object in the system, given its name. The function accesses a
local (to a processor) name server which can initiate a flood search
throughout the system if the Object-structure is not available locally. As a
result of the search the local name server is updated with the relevant Object-
structure and subsequent locates for that object are handled entirely locally.
Once an object has been located it may be opened through the use of the
Helios Open function. If the object is a process then the Object-structure
contains the Helios port via which messages may be sent to that process
using the Helios PutMsg function. Messages are received on a port using the
Helios GetMsg function. A process can act as a server by binding one of its
communication identifiers (CIDs) to a name (a service name), registering that
service name with the local Helios name server and waiting for
communication over that CID. Any process may obtain the CID of a
registered server by using the Locate function.
To port Arjuna, we have implemented a number of Helios application
programs, collectively known as the object management layer. This layer
implements an RPC facility using the PutMsg and GetMsg functions of Helios, and
object servers which are mapped onto Helios servers which may then
register themselves as discussed above. Such a server may then receive Open
requests from clients on the communication port associated with the service
name. Although several shortcuts have been taken in this exercise (e.g. client
and server stubs have been hand crafted), the experiment does show that the
functionality required by Arjuna Atomic Action module can be mapped, via
the object management layer, onto the underlying services provided by
Helios.
6 Concluding Remarks
This paper has presented a modular architecture for structuring fault-tolerant
distributed applications. By encapsulating the properties of persistence,
recoverability, shareability, serialisability and failure atomicity in an Atomic
Action module and defining narrow, well-defined interfaces to the
supporting environment, we achieve a significant degree of modularity as
well as portability for atomic action based object-oriented systems. We have
arrived at the ideas presented here based on our experience of building the
Arjuna system which can be made to run on a number of distributed
computing platforms.
The Atomic Action module provides a fixed means of combining the
above stated object properties. We are now investigating whether these can
be provided individually, permitting application specific selection. Such a
system for example could permit shareable objects that need not be
persistent, and vice-versa (although they could be). Furthermore these
properties can be enabled and disabled at run-time based on application
requirements. Our initial work in this direction reported in [9, 18] indicates
that this is indeed possible.
Acknowledgements
The Arjuna project has been and continues to be a team effort. Critical
comments from Graham Parrington, Mark Little and Stuart Wheater are
gratefully acknowledged. Continued interactions with colleagues on the
ANSA-ISA project have proved beneficial. The ANSAware and Helios ports
have been performed by Joao Geada, Stuart Wheater, Jim Smith and Steve
Caughey. The work reported here has been supported in part by
grants from the UK Science and Engineering Research Council
(Grant Numbers GR/F38402, GR/F06494 and GR/H81078) and
ESPRIT projects ISA (Project Number 2267) and BROADCAST
(Basic Research Project Number 6360).
--R
Advanced Networked Systems Architecture (ANSA) Reference Manual
"The Implementation of Galileo's Persistent Values"
"PS-algol: An Algol with a Persistent Heap"
"Architecture and Implementation of Guide, an Object-Oriented Distributed System"
"Concurrency Control and Recovery in Database Systems"
"A Remote Procedure Call Facility for Interconnecting Heterogeneous Computer Systems"
"Exploiting virtual synchrony in distributed systems"
"Implementing Location Independent Invocation"
"Shadows: a flexible run-time support systems for objects in a distributed system"
"Implementing fault-tolerant object systems on distributed memory multiprocessors"
"The Treatment of Persistent Objects in Arjuna,"
"Transaction Management in an Object-Oriented Database System"
"Notes on Data Base Operating Systems,"
"Guardians and Actions: Linguistic Support for Robust, Distributed Programs"
"Replicated K resilient objects in Arjuna"
"Maintaining information about persistent replicated objects in a distributed system"
"Object Replication in a Distributed System"
"Developing a class hierarchy for object-oriented transaction processing"
"A persistent object infrastructure for heterogeneous distributed systems"
"A model for interface groups"
"The Clearinghouse: A decentralized agent for locating named objects in a distributed environment"
"Rajdoot: a remote procedure call mechanism supporting orphan detection and killing"
"Reliable Distributed Programming in C++: The Arjuna Approach"
"The Helios Operating System"
"Persistence in the E Language: Issues and Implementation"
"The X Window System"
"Implementing Fault-Tolerant Services Using the State Machine
"Persistence and migration for C++ objects"
"An overview of the Arjuna distributed programming system"
"Network Programming"
| modularity;distributed computing;atomic actions;fault-tolerant object systems;object-oriented methods;object-oriented systems;persistent objects;atomic transactions;distributed programming systems;migration;replication;distributed processing;object-oriented approach;distributed systems;fault tolerance;distributed environment;fault tolerant computing |
629283 | Optimal Processor Assignment for a Class of Pipelined Computations. | The availability of large-scale multitasked parallel architectures introduces the following processor assignment problem. We are given a long sequence of data sets, each of which is to undergo processing by a collection of tasks whose intertask data dependencies form a series-parallel partial order. Each individual task is potentially parallelizable, with a known experimentally determined execution signature. Recognizing that data sets can be pipelined through the task structure, the problem is to find a "good" assignment of processors to tasks. Two objectives interest us: minimal response time per data set, given a throughput requirement, and maximal throughput, given a response time requirement. Our approach is to decompose a series-parallel task system into its essential "serial" and "parallel" components; our problem admits the independent solution and recomposition of each such component. We provide algorithms for the series analysis, and use an algorithm due to Krishnamurti and Ma for the parallel analysis. For a p processor system and a series-parallel precedence graph with n constituent tasks, we give an O(np^2) algorithm that finds the optimal assignment (over a broad class of assignments) for the response time optimization problem; we find the assignment optimizing the constrained throughput in O(np^2 log p) time. These techniques are applied to a task system in computer vision. | Introduction
In recent years much research has been devoted to the problem of mapping large computations onto
a system of parallel processors. Various aspects of the general problem have been studied, including
different parallel architectures, task structures, communication issues and load balancing [8, 13].
Typically, experimentally observed performance (e.g., speedup or response time) is tabulated as
a function of the number of processors employed, a function sometimes known as the execution
signature [10], or response time function. In this paper we use such functions to determine the
number of processors to be allocated to each of several tasks when the tasks are part of a pipelined
computation. This problem is natural, given the growing availability of multitasked parallel ar-
chitectures, such as PASM [29], the NCube system [14], and Intel's iPSC system [5], in which it
is possible to map tasks to processors and allow parallel execution of multiple tasks in different
logical partitions.
We consider the problem of optimizing the performance of a complex computation applied to
each member of a sequence of data sets. This type of problem arises, for instance, in imaging
systems, where each image frame is analyzed by a sequence of elemental tasks, e.g., fast Fourier
transform or convolution. Other applications include network software, where packets are pipelined
through well-defined functions such as checksum computations, address decoding and framing.
Given the data dependencies between the computation's multiple tasks, we may exploit parallelism
both by pipelining data sets through the task structure, and by applying multiple processors to
individual tasks.
There is a fundamental tradeoff between assigning processors to maximize the overall throughput
(measured as data sets per unit time), and assigning processors to minimize a single data set's
response time. We manage the tradeoff by maximizing one aspect of performance subject to the
constraint that a certain level of performance must be achieved in the other aspect. Under the
assumptions that each of n tasks is statically assigned a subset of dedicated processors and that
an individual task's response time function completely characterizes performance (even when using
shared resources such as the communication network) we show that p processors can be assigned
to a series-parallel task structure in O(np 2 ) time so as to minimize response time while achieving
a given throughput. We are also able to find the assignment that maximizes throughput while
achieving a given minimal response time, in O(np 2 log p) time.
The assumption of a static assignment arises naturally in real-time applications, where the overhead
of swapping executable task code in and out of a processor's memory threatens performance.
Without this assumption, the optimization problem becomes much more difficult.
Our method involves decomposing a series-parallel graph into series and parallel components
using standard methods; we present algorithms for analyzing series components and use Krishnamurthy
and Ma's algorithm [20] to analyze the parallel components.
We assume that costs of communication between tasks are completely captured in the given
response-time functions. Thus, our techniques can be expected to work well on compute-bound
task systems; our example application is representative of this class, having a computation to
communication ratio of 100. Our techniques may not be applicable when communication costs
that depend on the particular sets of processors assigned to a task (e.g., contention) contribute
significantly to overall performance.
A large literature exists on the topic of mapping workload to processors; see, for instance,
[1, 3, 4, 6, 15, 17, 18, 23, 24, 26, 27, 31, 33]. A new problem has recently emerged, that of
scheduling of tasks on multitasked parallel architectures where each task can be assigned a set of
processors. Some formulations consider scheduling policies with the goal of achieving good average
response time and good throughput, given an arrival stream of different, independent parallel
jobs, e.g., [28]. Another common objective, exemplified in [2, 11, 20, 25], is to find a schedule of
processor assignments that minimizes completion time of a single job executed once. The problem
we consider is different from these specifically because we have a parallel job which is to be repeatedly
executed. We consider issues arising from our need to pipeline the repeated executions to get good
throughput, as well as apply parallel processing to the constituent tasks to get good per-execution
response time. Yet another distinguishing characteristic of our problem is an underlying assumption
that a processor is statically assigned to one task, with the implication that every task is always
assigned at least one processor.
Two previously studied problems are close to our formulation. The assignment of processors
to a set of independent tasks is considered in [20]. The single objective is the minimization of the
makespan, which minimizes response time if the tasks are considered to be part of a single parallel
computation, or maximizes throughput if the tasks are considered to form a pipeline. The problem
of assigning processors to independent chains of modules is considered in [7]; this assignment
minimizes the response time if the component tasks are considered to be parallel, and maximizes
the throughput if the component chains are considered to form pipelines. Pipeline computations are
also studied in [19, 30]. In [30], heuristics are given for scheduling planar acyclic task structures and
in [19], a methodology is presented for analyzing pipeline computations using Petri nets together
with techniques for partitioning computations. We have not discovered treatments that address
optimal processor assignment for general pipeline computations, although our solution approach
(dynamic programming) is related to those in [3] and [33].
This paper is organized as follows. Section 2 introduces notation, and formalizes the response-time
problem and the throughput problem. Section 3 presents our algorithms for series systems,
and Section 4 shows how to optimally assign processors to series-parallel systems. Section 5 shows how
the problem of maximizing throughput subject to a response-time constraint can be solved using
solutions to the response-time problem. Section 6 discusses the application of our techniques to
an actual problem, and Section 7 summarizes this work.
2 Problem Definition
We consider a set of tasks, t 1 , ..., t n , that comprise a computation to be executed using up to p
identical processors, on each of a long stream of data sets. Every task is applied to every data
set. We assume the tasks have a series-parallel precedence relation constraining the order in which
we may apply tasks to a given data set; tasks unrelated in the partial order are assumed to process
duplicated copies (or, different elements) of a given data set. Under these assumptions we may
pipeline the computation, so that different tasks are concurrently applied to different data sets.
Each task is potentially parallelizable; for each t i we let f i (n) be the execution time of t i using n
identical processors. f i is called a response-time function (also known as an execution signature [10]).
We assume that t 0 and t n+1 are dummy tasks that serve respectively to identify the initiation and
completion of the computation; correspondingly we take f 0 (x) = f n+1 (x) = 0 for all x. These
conditions ensure that no processor is ever assigned to t 0 or t n+1 ,
and that at least one processor is assigned to every other task.
An example of the response time functions for a computation with 5 tasks on up to 8 processors
is shown in Table 1. Each row of the table is a response time function for a particular task. Observe
Table 1: Example response time functions. The table gives each task's execution time (in seconds) as a
function of the number of processors used (entries not reproduced here).
that individual functions need not be convex, nor monotonic.
We may describe an assignment of numbers of processors to each task by a function A: A(i)
gives the number of processors statically and exclusively allocated to t i
. A feasible assignment is
one where A(i) ≥ 1 for every i = 1, ..., n and A(1) + ... + A(n) ≤ p.
Given A, t i 's execution time is f i (A(i)), and the maximal data set throughput is
λ(A) = 1/max i {f i (A(i))}. The response time for a data set is obtained by computing the length R(A) of
the longest path through the graph where each t i is a node weighted by f i (A(i)), and the edges are
defined by the series-parallel precedence relation.
Given some throughput constraint λ and processor count q, we define T λ (q) to be the set of all
feasible assignments A that use no more than q processors, and achieve λ(A) ≥ λ. The response-time
problem is to find F λ (p), the minimum response time over all feasible assignments in T λ (p),
that is, the response time of an assignment A for which R(A) is minimal over
all assignments with p or fewer processors that achieve throughput λ or greater. This problem
arises when data sets must be processed at least as fast as a known rate λ to avoid losing data;
we wish to minimize the response time among all those assignments that achieve throughput λ.
Similarly, given response time constraint γ and processor count q, we define R γ (q) to be the set of all
feasible assignments A using no more than q processors, and achieving R(A) ≤ γ. The throughput
problem is to find A ∈ R γ (p) for which λ(A) is maximized. This problem arises in real-time control
applications, where each data set must be processed within a maximal time frame in order to meet
Figure 1: Example of series-parallel task system T
processing deadlines. We will focus on solutions to the response time problem first and later show
how these may be used to solve the throughput problem.
Since a response-time function completely defines a task, elemental or composite, we will also
use the term "task" to refer to compositions of the more elemental tasks t i . Let τ i denote such a
composite task and let F i be its optimal response time function. Our general approach is illustrated
through an example. Consider the series-parallel task T in Figure 1, with response-time functions
given in Table 1 (t 0 and t 6 are dummy tasks). We may think of t 2 and t 3 as forming a parallel
subtask-call it τ 1 . Given the response time functions for t 2 and t 3 , we will construct an optimal
response time function called F 1 for τ 1 , after which we need never explicitly consider t 2 or t 3
separately from each other-F 1 completely captures what we need to know about both of them.
Next, we view τ 1 and t 1 as a series task, call it τ 2 , and compute the optimal response time function
F 2 . The process of identifying series and parallel subtasks and constructing response-time
functions for them continues until we are left with a single response time function that describes
the optimal behavior of T . By tracking the processor assignments necessary to achieve the optimal
response times at each step, we are able to determine the optimal processor allocations for T . A
solution method for parallel tasks has already been given in [20]; we present algorithms for series
tasks.
We will assume that every response-time function is monotone nonincreasing, since, as argued
in [20], any other response-time function can be made decreasing by disregarding those assignments
of processors that cause higher response times. Also, observe that response time functions may
include inherent communication costs due to parallelism, as well as the communication costs that are
suffered by communicating with predecessor and successor tasks. These assumptions are reasonable
when the communication bandwidth is sufficiently high for us to ignore effects due to contention
between pairs of communicating tasks. Our methods may not produce good results when this
assumption does not hold.
3 Individual Parallel Tasks and Series Tasks
The problem of determining an optimal response-time function for parallel tasks has already essentially
been solved in the literature [20]. We describe this solution briefly. Let t 1 , ..., t m be the
tasks used to compose a parallel task τ . For each t i we know u λ (t i ), the minimum number of
processors needed so that every elemental task involved in t i has response-time no greater than
1/λ. We initialize by allocating u λ (t i ) processors to each t i . If we run out of processors first, then
no processor allocation can meet the throughput requirement. Otherwise, the initial allocation uses
the fewest possible number of processors that do meet this requirement. We then incrementally add
the remaining processors to tasks in such a way that at each step the response time (the maximum
of task response times) is reduced maximally. This algorithm has an O(p log p) time complexity.
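One straightforward rendering of this greedy scheme is sketched below in C++
(used for all code fragments in this section). For simplicity it rescans for the
current maximum at every step rather than using the heap-based organisation
that yields the O(p log p) bound, and the data layout (f[i][k] as the time of
component i on k processors, u[i] as its minimal feasible allocation) is a
convention adopted for the example, not a transcription of the algorithm in [20].

#include <algorithm>
#include <limits>
#include <vector>

// Response-time function: f[k] is the execution time on k processors (f[0] unused).
using RTF = std::vector<double>;

// Parallel composition under a throughput constraint.  Returns F(q) for
// q = 0..p, with infinity wherever the constraint cannot be met.
RTF parallel_compose(const std::vector<RTF>& f, const std::vector<int>& u, int p) {
    const double INF = std::numeric_limits<double>::infinity();
    int m = static_cast<int>(f.size());
    std::vector<int> x(m);
    int used = 0;
    for (int i = 0; i < m; ++i) { x[i] = u[i]; used += u[i]; }   // minimal allocation

    RTF F(p + 1, INF);
    if (used > p) return F;                                      // infeasible

    auto current_max = [&] {
        double r = 0.0;
        for (int i = 0; i < m; ++i) r = std::max(r, f[i][x[i]]);
        return r;
    };
    F[used] = current_max();
    for (int q = used + 1; q <= p; ++q) {
        // Only a component achieving the current maximum can lower the maximum.
        int worst = 0;
        for (int i = 1; i < m; ++i)
            if (f[i][x[i]] > f[worst][x[worst]]) worst = i;
        if (x[worst] + 1 < static_cast<int>(f[worst].size())) ++x[worst];
        F[q] = current_max();
    }
    return F;
}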
Series task structures are interesting in themselves because many pipelines are simple linear
chains [19]. We first describe an algorithm that constructs the optimal response time function F λ
for a linear task structure T when each function f i (x) is convex in x. While convexity in elemental
functions is intuitive, nonconvex response-time functions arise from parallel task compositions.
Consequently, a different algorithm for series compositions of nonconvex response-time functions
will be developed later.
Like the parallel composition algorithm, we first assign the minimal number of processors needed
to meet the throughput requirement. The mechanism for this is identical. Supposing that this step
does not exhaust the processor supply, define x i to be the number of processors currently assigned
to t i , initialize x i = u λ (t i ), and let s = x 1 + ... + x n be the total number of processors already
allocated. We then set F λ (q) = ∞ for q < s, to reflect an inability to meet the throughput
requirement with fewer than s processors, and set F λ (s) = f 1 (x 1 ) + ... + f n (x n ).
Table 2: Response time function F 1 for parallel task τ 1 , as a function of the number of processors (entries not reproduced here).
Next, for each t i compute d(i, x i ) = f i (x i + 1) − f i (x i ), the change in response time achieved by allocating one more processor to t i . Build a max-priority
heap [16] where the priority of t i is |d(i, x i )|. Finally, enter a loop where, on each iteration, the task
with highest priority is allocated another processor, its new priority is computed, and the priority
heap is adjusted. We iterate until all available processors have been assigned. Each iteration of the
loop allocates the next processor to the task which stands to benefit most from the allocation. When
the individual task response functions are convex, the response time function F λ it greedily
produces is optimal, since the algorithm above is essentially one due to Fox [12], as reported in [32].
Simple inspection reveals that the algorithm has an O(p log n) time complexity. Unlike the similar
algorithm for parallel tasks, correctness here depends on convexity of component task response
times.
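A sketch of this greedy allocation, using a standard priority queue for the heap
of marginal benefits |d(i, x i )|, is given below; it follows the scheme just
described under the same data-layout conventions as the previous fragment.

#include <queue>
#include <utility>
#include <vector>

using RTF = std::vector<double>;   // f[k]: execution time on k processors (f[0] unused)

// Greedy allocation for a chain of tasks with convex response-time functions.
// Returns the per-task allocation, or an empty vector if the throughput
// constraint cannot be met with p processors.
std::vector<int> series_allocate(const std::vector<RTF>& f,
                                 const std::vector<int>& u, int p) {
    int n = static_cast<int>(f.size());
    std::vector<int> x(n);
    int used = 0;
    for (int i = 0; i < n; ++i) { x[i] = u[i]; used += u[i]; }
    if (used > p) return {};                                   // infeasible

    // Benefit of one extra processor: |d(i, x_i)| = f_i(x_i) - f_i(x_i + 1).
    auto gain = [&](int i) {
        return x[i] + 1 < static_cast<int>(f[i].size()) ? f[i][x[i]] - f[i][x[i] + 1] : 0.0;
    };
    std::priority_queue<std::pair<double, int>> heap;          // (gain, task index)
    for (int i = 0; i < n; ++i) heap.push({gain(i), i});

    for (int q = used; q < p && !heap.empty(); ++q) {
        auto [g, i] = heap.top(); heap.pop();
        if (g <= 0.0) break;                                   // nothing left to gain
        ++x[i];                                                // allocate one more processor
        heap.push({gain(i), i});                               // reinsert with updated benefit
    }
    return x;       // chain response time is the sum of f[i][x[i]] over all tasks
}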
The need to treat nonconvex response-time functions arises from the behavior of composed
parallel tasks. Return to our example in Figure 1 and consider the parallel composition τ 1 of
elemental tasks t 2 and t 3 , with throughput requirement 0.01. The response-time function F 1 is
shown in Table 2. Note that F 1 is not convex, even though f 2 and f 3 are. This nonconvexity is due
to the peculiar nature of the maximum of two functions and cannot be avoided when dealing with
parallel task compositions. We show below that nonconvexity can be handled, with an additional
cost in complexity.
We begin as before, allocating just enough processors so that the throughput constraint is met.
Assuming this is possible, for any j ≤ n let T j denote the subchain comprised of t 1 , ..., t j ,
and compute its optimal response time function, C j , subject to throughput constraint λ. Using the
principle of optimality [9], we write a recursive definition for C j :
C 1 (x) = f 1 (x) for x = u λ (t 1 ), ..., p, and
C j (x) = min { f j (i) + C j−1 (x − i) }, the minimum taken over all i with u λ (t j ) ≤ i ≤ x − (u λ (t 1 ) + ... + u λ (t j−1 )).
The dynamic programming equation is understood as follows. Suppose we have already computed
the function C j−1 . This implicitly asserts that we know how to optimally allocate any number of
processors to T j−1 . Next, given x processors to distribute between tasks t j and T j−1 , we try
every combination subject to the throughput constraints: i processors for t j and x − i processors
for T j−1 . The principle of optimality tells us that the least-cost combination gives us the optimal
assignment of x processors to T j . Since the equation is written as a recursion, the computation will
actually build response time tables from 'bottom up', starting with task t 1 in the first part of the
equation.
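A direct implementation of the recurrence might look as follows, with infinity
marking processor counts for which the throughput constraint cannot be met;
the conventions are the same as in the earlier fragments, and this sketch
builds the whole chain rather than a single pairwise composition.

#include <algorithm>
#include <limits>
#include <utility>
#include <vector>

using RTF = std::vector<double>;   // f[k]: execution time on k processors (f[0] unused)

// Dynamic program for the series composition of tasks t_1 ... t_n (0-indexed).
// Returns C[x], the optimal chain response time using at most x processors.
RTF series_compose(const std::vector<RTF>& f, const std::vector<int>& u, int p) {
    const double INF = std::numeric_limits<double>::infinity();
    int n = static_cast<int>(f.size());

    // Base case: the chain consisting of the first task alone.
    RTF C(p + 1, INF);
    for (int x = u[0]; x <= p && x < static_cast<int>(f[0].size()); ++x) C[x] = f[0][x];
    for (int x = 1; x <= p; ++x) C[x] = std::min(C[x], C[x - 1]);   // unused processors allowed

    // C_j(x) = min over i of { f_j(i) + C_{j-1}(x - i) }, i within its feasible range.
    for (int j = 1; j < n; ++j) {
        RTF next(p + 1, INF);
        for (int x = 1; x <= p; ++x)
            for (int i = u[j]; i <= x && i < static_cast<int>(f[j].size()); ++i)
                if (C[x - i] < INF)
                    next[x] = std::min(next[x], f[j][i] + C[x - i]);
        C = std::move(next);
    }
    return C;   // O(n p^2) time overall
}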
This procedure requires O(np 2 ) time. We have been unable to find a solution that gives a better
worst-case behavior in all cases. Some of the difficulties one encounters may be appreciated by study
of our previous example. Consider the construction of τ 2 , comprised of the series composition of
t 1 and τ 1 . As before, let F 1 denote the response time function for τ 1 . Table 3 gives the values of
f 1 (i) + F 1 (j) for 1 ≤ i, j ≤ 8. The set of possible sums associated with allocating
a fixed number of processors x lie on an assignment diagonal moving from the lower left (assign
x − 1 processors to τ 1 , one to t 1 ) to the upper right (assign one processor to τ 1 , x − 1 to t 1 ) of the
table, illustrated by use of a common typeface on a diagonal. Brute force computation of F 2 (x)
consists of generating all sums on the associated diagonal, and choosing the allocation associated
with the least sum. In the general case this is equivalent to looking for the minimum of a function
known to be the sum of a function that decreases in i (e.g. f 1 (i)) and one that increases (e.g.
F 1 (x − i)). Unlike the case when these functions are known to be convex as well, in general their
sum does not have any special structure we can exploit-the minimum can be achieved anywhere,
implying that we have to look for it everywhere. It would seem then that dynamic programming
may offer the least-cost solution to the problem.
Table 3: Sum of response time functions f 1 and F 1 . The minimum value on each assignment
diagonal is marked by * (entries not reproduced here).
We note in passing that a straightforward optimization may reduce the running time, but does
not have a better asymptotic complexity. If both functions being summed are convex, then the
minimum values on adjacent assignment diagonals must be adjacent in a row or column. This
fact can considerably accelerate the solution time, since given the minimum on the x-processor
assignment diagonal we can find the minimum on the (x + 1)-processor diagonal by generating
and comparing only two additional entries (this is a consequence of the greedy algorithm described
earlier). Although we cannot in general assume that both functions are convex, we can view them
as being piece-wise convex. Thus, if f 1 is convex over [a, b], and F 1 is convex over [c, d], then their sum
is convex over [a, b] × [c, d] and we can efficiently find minima on assignment diagonals restricted
to this subdomain. Working through the details (which are straightforward), one finds that the
complexity of this approach is O(rnp), where r is the maximum number of convex subregions
spanned by any given assignment diagonal. Of course, in the worst case r can be proportional to p, leaving us still
with an O(np 2 ) algorithm.
4 Series-Parallel Tasks
Algorithms for the analysis of series and parallel task structures can be used to analyze task-
structures whose graphs form series-parallel directed acyclic graphs. We show that the response
Figure 2: Binary decomposition tree
time function for any such graph (with n nodes) can be computed in O(np 2 ) time. A number
of different but equivalent definitions of series-parallel graphs exist. The one we will use is taken
from [34], in which a series-parallel DAG can be parsed as a binary decomposition tree (BDT) in
time proportional to the number of edges. The leaves of such a tree correspond to the DAG nodes
themselves and internal tree nodes describe either parallel (P) or series (S) compositions. Figure 2
illustrates the BDT (labeling S and P nodes by task names used in discussion) corresponding to
the task in Figure 1.
The structure of a BDT specifies the precise order in which we should apply our analyses. The
idea is to build up the overall optimal response-time function from the bottom up. Conceptually
we mark every BDT node as being computed or not, with leaf nodes being the only ones marked
initially. We then enter a loop where each iteration we identify an unmarked BDT node whose
children are both marked. We apply a series composition or parallel composition to those childrens'
response-time functions depending on whether the node is of type S or P, and mark the node. The
algorithm ends when the root node is marked.
In the example, τ 1 's response time function is generated using the parallel algorithm on t 2 and
t 3 ; the series composition is applied to t 1 and τ 1 (for composite task τ 2 ), which is then composed
via another series composition with t 4 , creating τ 3 ; finally, t 5 is combined via a parallel composition
with τ 3 to create the response time function for the overall task structure. At each step one must
record the actual number of processors assigned to each task in order to compute the optimal
assignment; this is straightforward and needs no discussion.
From the above, we see that the cost of determining the optimal assignment from a BDT is
O(np 2 ), as every response-time function composition has worst-case cost O(p 2 ) and there are n − 1
such compositions performed.
5 The Throughput Problem
Real-time applications often require that the processing of every data set meet a response-time
deadline. At system design time it becomes necessary to assess the maximal throughput possible
under this constraint. This is our throughput problem. In this section we show how solutions to
the response-time problem can be used to solve this new problem in O(np^2 log p) time.
Our approach depends on the fact that minimal response times behave monotonically with
respect to the throughput constraint.
Lemma 5.1 For any pipeline computation let F_λ(p) be the minimal possible response time using p
processors, given throughput constraint λ and the assumption of static processor-to-task mapping.
Then for every fixed p, F_λ(p) is a monotone nondecreasing function of λ.
Proof: Let p be fixed. As before, let u_λ(t_i) be the minimum number of processors required for all
elemental tasks comprising t_i to meet throughput constraint λ. For every t_i, u_λ(t_i) is
a monotone nondecreasing function of λ. Recall that T_λ(p) is the set of all assignments that
meet the throughput constraint λ using no more than p processors. Whenever λ1 ≤ λ2 we
must have T_λ2(p) ⊆ T_λ1(p), because of the monotonicity of each u_λ(t_i). Since F_λ(p) is the
minimum cost among all assignments in T_λ(p), we have F_λ1(p) ≤ F_λ2(p).
This result can be viewed as a generalization of Bokhari's graph-based argument for monotonicity
of the minimal "sum" cost, given a "bottleneck" cost [4].
Suppose for a given pipeline computation we are able to solve for F_λ(p), given any λ. The set of
all possible throughput values is {1/f_i(x) : 1 ≤ i ≤ n, 1 ≤ x ≤ p}; only O(np) candidate values need
to be generated and sorted. Given response time constraint γ and tentative throughput λ, we may
determine whether F_λ(p) ≤ γ. Since F_λ(p) is monotone in λ, we use a binary search to identify the
greatest λ for which F_λ(p) ≤ γ. The associated processor assignment maximizes throughput
(using p processors), subject to response time constraint γ. There being O(log p) solutions of the
response-time problem, the complexity for the throughput problem is O(np^2 log p).
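A short sketch of this search follows. It assumes a routine min_response_time(lam, p) playing the role of F_λ(p) (a hypothetical name; any response-time solver from the previous sections could be plugged in), and a sorted list of candidate throughput values.

def max_throughput(candidates, gamma, p, min_response_time):
    # candidates: feasible throughput values (e.g. all 1/f_i(x)), sorted ascending.
    # gamma: response-time constraint; min_response_time(lam, p) returns F_lam(p).
    # Monotonicity of F_lam(p) in lam (Lemma 5.1) justifies the binary search.
    lo, hi, best = 0, len(candidates) - 1, None
    while lo <= hi:
        mid = (lo + hi) // 2
        if min_response_time(candidates[mid], p) <= gamma:
            best = candidates[mid]   # feasible: try a larger throughput
            lo = mid + 1
        else:
            hi = mid - 1             # infeasible: back off
    return best                      # None if no candidate meets the constraint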
6 An Application
In this section we report the results of applying our methods to a motion estimation system in
computer vision. Motion estimation is an important problem in which the goal is to characterize
the motion of moving objects in a scene. From a computational point of view, continually generated
images from a camera must be processed by a number of tasks. A primary goal is to ensure that
the computational throughput meets the input data rate. Subject to this constraint, we desire that
the response time be as small as possible. The application itself is described in detail in [8, 21].
It should be noted that there are many approaches to solving the motion estimation problem. We
are only interested in an example, and therefore, the following algorithm is not presented as the
only or the best way to perform motion estimation. A comprehensive digest of papers on the
topic of motion understanding can be found in [22]. The following subsection briefly describes the
underlying computations.
6.1 A Motion Estimation System
Our example problem is a linear pipeline with nine stages, each of which is a task. The data sets
input to the task system are a continuous stream of stereo image pairs of a scene containing
moving vehicles. The tasks perform well-known vision computations such as 2-D convolution,
extracting zero crossings and feature matching, similar to computations in the Image Understanding
Benchmark [35]. All nine tasks were implemented on a distributed memory machine, the Intel
iPSC/2 hypercube [5]. We applied the system above to a problem using outdoor images [8]. The
relevant response-time functions are shown in Table 4 for selected processor sizes. Measurements
include all overheads, computation time and communication times.
Response Times for Individual Tasks (sec.)
No. of Proc.  Task 1  Task 2  Task 3  Task 4  Task 5  Task 6  Task 7  Task 8  Task 9
64*           2.12    0.11    0.007   0.61    2.12    0.11    0.007   4.13    0.71

Table 4: Completion times for individual tasks on the Intel iPSC/2 of various sizes, in seconds (* indicates extrapolated values)
Figure 3: Minimal response time as a function of the throughput constraint
6.2 Experimental Results
We applied the series task algorithm using Table 4, for a range of possible throughput constraints.
As an example of the output generated by the algorithm, Table 5 shows the processor assignment
for individual tasks for various sizes of the Intel iPSC/2. The last row of the table also shows
the minimum response time, given the imposed throughput constraint (in frames/second). The response times shown
are those predicted by our algorithms. Nevertheless, response times observed using the computed
allocations were in excellent agreement with these figures; the relative error was
less than 5% in all measurable cases.
The processor allocation behavior is intuitive. Tasks t 1 , t 5 , and t 8 have much larger response
times than the others. As increasingly more processors are allocated to the problem, these three
tasks receive the lion's share of the additional processors.
Figure 3 illustrates the tension between response time and throughput by plotting the minimal
response time function for the entire pipeline computation, as a function of the throughput con-
straint. For any problem there will be a throughput λ_min achieved when processors are allocated
entirely to minimize response time. The flat region of the curve lies over throughput constraints
λ ≤ λ_min. The response time curve turns up, sometimes dramatically, as the throughput constraint
moves into a region where response time must be traded off for increased throughput.
Table 5: An example processor allocation for minimizing response time for several sizes of iPSC/2 (processors allocated to individual tasks are shown). For each multiprocessor size, the table lists, per task, the processor assignment and the resulting time in seconds.
Summary
In this paper we consider performance optimization of series-parallel pipelined computations. The
problem arises when a system of individually parallelizable tasks is to be applied repeatedly to a
long sequence of data sets. Given a large supply of processors, parallelism can be exploited both
by pipelining the data sets through the task structure, and by allocating multiple processors to
individual tasks. We treat the dual problems of minimizing response time subject to a throughput
constraint, and maximizing throughput subject to a response time constraint.
We showed that problems with p processors and n tasks satisfying series-parallel precedence
constraints can be solved in low-order polynomial time: response time (subject to a throughput
constraint) is minimized in O(np^2) time, and throughput (subject to a response time constraint)
is maximized in O(np^2 log p) time. To place the work in a realistic setting we evaluated the performance
of our assignment algorithms on the problem of stereo image matching. The results
predicted by our analysis were observed to be very close to those measured on an actual system.
Future endeavors include the provision of algorithms for general task structures and investigation
of dynamic assignment algorithms. Also, we believe that our results can be extended to task models
that include "branching", such as are encountered with CASE statements. This feature essentially
forces us to treat response times and throughputs as being stochastic. We also believe that our
approach can be extended to consider the effects of certain types of communication contention.
--R
A partitioning strategy for nonuniform problems on multiprocessors.
Scheduling multiprocessor tasks to minimize schedule length.
A shortest tree algorithm for optimal assignments across space and time in a distributed processor system.
Partitioning problems in parallel
Benchmarking the iPSC/2 hypercube multiprocessor.
On embedding rectangular grids in hypercubes.
Algorithms for mapping and partitioning chain structured parallel com- putations
Parallel architectures and parallel algorithms for integrated vision systems.
Dynamic Programming: Models and Applications.
Dynamic partitioning in a transputer environment.
Complexity of scheduling parallel task systems.
Discrete optimization via marginal analysis.
Solving Problems on Concurrent Processors (Vol.
Architecture of a hypercube supercomputer.
On the embedding of arbitrary meshes in boolean cubes with expansion two dilation two.
Fundamentals of Computer Algorithms
On mapping systolic algorithms onto the hypercube.
A multistage linear array assignment problem.
Pipelined data-parallel algorithms
The processor partitioning problem in special-purpose partitionable systems
Point matching in a time sequence of stereo image pairs.
Motion Understanding
Embedding rectangular grids into square grids with dilation two.
Improved algorithms for mapping parallel and pipelined computa- tions
Utilizing multidimensional loop parallelism on large scale parallel processor systems.
Minimal mesh embeddings in binary hypercubes.
Characterizations of parallelism in applications and their use in scheduling
Scheduling Algorithms for PIPE (Pipelined Image-Processing Engine)
Multiprocessor scheduling with the aid of network flow algorithms.
Optimal partitioning of cache memory.
Allocating programs containing branches and loops within a multiple processor system.
The recognition of series parallel digraphs.
An integrated image understanding benchmark for parallel computers.
--TR
Allocating programs containing branches and loops within a multiple processor system
Scheduling multiprocessor tasks to minimize schedule length
A partitioning strategy for nonuniform problems on multiprocessors
Nearest-neighbor mapping of finite element graphs onto processor meshes
Solving problems on concurrent processors. Vol. 1: General techniques and regular problems
Partitioning Problems in Parallel, Pipeline, and Distributed Computing
Scheduling algorithms for PIPE (Pipelined Image-Processing Engine)
Minimal Mesh Embeddings in Binary Hypercubes
On Embedding Rectangular Grids in Hypercubes
Characterizations of parallelism in applications and their use in scheduling
Complexity of scheduling parallel task systems
Utilizing Multidimensional Loop Parallelism on Large Scale Parallel Processor Systems
Embedding Rectangular Grids into Square Grids with Dilation Two
Dynamic partitioning in a transputer environment
A multistage linear array assignment problem
The DARPA image understanding benchmark for parallel computers
Improved Algorithms for Mapping Pipelined and Parallel Computations
Optimal Partitioning of Cache Memory
Dynamic Programming
Computer Algorithms
Parallel Architectures and Parallel Algorithms for Integrated Vision Systems
Motion Understanding
On Mapping Systolic Algorithms onto the Hypercube
Pipelined Data Parallel Algorithms-II
--CTR
Ian Foster , David R. Kohr, Jr. , Rakesh Krishnaiyer , Alok Choudhary, Double standards: bringing task parallelism to HPF via the message passing interface, Proceedings of the 1996 ACM/IEEE conference on Supercomputing (CDROM), p.36-es, January 01-01, 1996, Pittsburgh, Pennsylvania, United States
Jaspal Subhlok , Gary Vondran, Optimal mapping of sequences of data parallel tasks, ACM SIGPLAN Notices, v.30 n.8, p.134-143, Aug. 1995
Jaspar Subhlok , Gary Vondran, Optimal latency-throughput tradeoffs for data parallel pipelines, Proceedings of the eighth annual ACM symposium on Parallel algorithms and architectures, p.62-71, June 24-26, 1996, Padua, Italy
G. Srinivasa N. Prasanna, Compilation of parallel multimedia computationsextending retiming theory and Amdahl's law, ACM SIGPLAN Notices, v.32 n.7, p.180-192, July 1997
Thomas Gross , David R. O'Hallaron , Jaspal Subhlok, Task Parallelism in a High Performance Fortran Framework, IEEE Parallel & Distributed Technology: Systems & Technology, v.2 n.3, p.16-26, September 1994
Wei-Keng Liao , Alok Choudhary , Donald Weiner , Pramod Varshney, Performance Evaluation of a Parallel Pipeline Computational Model for Space-Time Adaptive Processing, The Journal of Supercomputing, v.31 n.2, p.137-160, December 2004
Sandeep Koranne, A Note on System-on-Chip Test Scheduling Formulation, Journal of Electronic Testing: Theory and Applications, v.20 n.3, p.309-313, June 2004
Jaspal Subhlok , David R. O'Hallaron , Thomas Gross , Peter A. Dinda , Jon Webb, Communication and memory requirements as the basis for mapping task and data parallel programs, Proceedings of the 1994 conference on Supercomputing, p.330-339, December 1994, Washington, D.C., United States
Jaspal Subhlok , David R. O'Hallaron , Thomas Gross , Peter A. Dinda , Jon Webb, Communication and memory requirements as the basis for mapping task and data parallel programs, Proceedings of the 1994 ACM/IEEE conference on Supercomputing, November 14-18, 1994, Washington, D.C.
John W. Chinneck , Vitoria Pureza , Rafik A. Goubran , Gerald M. Karam , Marco Lvoie, A fast task-to-processor assignment heuristic for real-time multiprocessor DSP applications, Computers and Operations Research, v.30 n.5, p.643-670, April
Martin Fleury , Andrew C. Downton , Adrian F. Clark, Performance Metrics for Embedded Parallel Pipelines, IEEE Transactions on Parallel and Distributed Systems, v.11 n.11, p.1164-1185, November 2000
Jinquan Dai , Bo Huang , Long Li , Luddy Harrison, Automatically partitioning packet processing applications for pipelined architectures, ACM SIGPLAN Notices, v.40 n.6, June 2005 | data sets;parallel architectures;series-parallel partial order;series-parallel task system;data dependencies;index termspipeline processing;series analysis;pipelined computations;task structure;multitasked parallel architectures;computer vision;parallel analysis;resource allocation;processor assignment problem |
629334 | On the Performance of Synchronized Programs in Distributed Networks with Random Processing Times and Transmission Delays. | A synchronizer is a compiler that transforms a program designed to run in a synchronousnetwork into a program that runs in an asynchronous network. The behavior of a simplesynchronizer, which also represents a basic mechanism for distributed computing and forthe analysis of marked graphs, was studied by S. Even and S. Rajsbaum (1990) under theassumption that message transmission delays and processing times are constant. Westudy the behavior of the simple synchronizer when processing times and transmissiondelays are random. The main performance measure is the rate of a network, i.e., theaverage number of computational steps executed by a processor in the network per unittime. We analyze the effect of the topology and the probability distributions of therandom variables on the behavior of the network. For random variables with exponentialdistribution, we provide tight (i.e., attainable) bounds and study the effect of abottleneck processor on the rate. | INTRODUCTION
Consider a network of processors which communicate by sending messages along communication
links. The network is synchronous if there is a global clock whose beats are heard by
all the processors simultaneously, and the time interval between clock beats is long enough
for all messages to reach their destinations and for local computational steps to be completed
before the clock beats again. The network is asynchronous if there is no global clock, and the
transmission times of messages are unpredictable.
In general, a program designed for a synchronous network will not run correctly in an
asynchronous network. Instead of designing a new program for the asynchronous network, it
is possible to use a synchronizer, [A1], i.e., a compiler that converts a program designed for
a synchronous network, to run correctly in an asynchronous network. Synchronizers provide
a useful tool because programs for synchronous networks are easier to design, debug and test
than programs for asynchronous networks. Furthermore, an important use of synchronizers is
the design of more efficient asynchronous algorithms [A2]. The problem of designing efficient
synchronizers has been studied in the past (e.g. [A1], [AP90], [PU89]).
The (worst case) time complexity of a distributed algorithm is usually computed assuming
that processing times and message transmission delays are equal to some constant which
represents an upper bound on these durations. The goal of this paper is to study the effect
of random processing times and transmission delays on the performance of synchronous programs
running in an asynchronous network under the control of a simple synchronizer. We
compare the results with the deterministic case [ER1], [ER2], in which processing times, as
well as message delays, are constant (or bounded).
The operation of the synchronizer is as follows: Each processor waits for a message to
arrive on each of its in-coming links before performing the next computational step. When
a computational step is completed (after a random time), it sends one message on each of
its out-going links. The implementation of this synchronizer may require, for instance, that
every message is followed by an end-of-message marker, even if the message is empty. These
end-of-message markers model the flow of information that must exist between every pair of
processors connected by a link in each computational step [A2]. This is how a processor knows
it has to wait for a message which was sent to it, or if no message was sent.
We use this synchronizer in our analysis since it is very simple, yet, it captures the essence
of the synchronizer methodology, i.e., it ensures that a processor does not initiate a new
phase of computation before knowing that all the messages sent to it during the previous
phase have already arrived. Moreover, the synchronizer is equivalent to a marked graph (e.g.
[CHEP]) in which the initial marking has one token per edge. In [Ra91] and [RM92] the
relationship between synchronizers and marked graphs is studied, and it is shown how the
simple synchronizer can model the behavior of any marked graph, of the synchronizers of
[A1], and of distributed schedulers in [BG89], [MMZ88]. Thus, our work is closely related to
problems in stochastic petri nets, where, due to the huge size of the state space, the solution
techniques often rely on simulation (e.g. [M1], [M2], [Ma89]).
Many distributed protocols are based on this simple synchronizer. For example, the snap-shot
algorithm [CL85], clock synchronization algorithms (e.g. [BS88], [OG87]), the synchronizers
of [A1], the distributed schedulers in [BG89], [MMZ88], the optimistic synchronizer
[GRST92]. The synchronizer is similar to synchronizer α in [A1], but can also be used in
directed networks, as opposed to other synchronizers suggested in [A1] that require all links to
be bidirectional. In [ER1] and [ER2] the benefits of using the synchronizer as an initialization
procedure are described.
Main Results
This paper is devoted to the performance analysis of strongly connected directed networks
controlled by the simple synchronizer, in which transmission delays, as well as the time it takes
a processor to complete a computational step are random variables. Our main performance
measure is the rate of computation R v , i.e., the average number of computational steps executed
by a processor in the network, per unit time. To facilitate the presentation, we first assume
that the transmission delays are negligible, and only at the end of the paper describe how to
extend the results for networks with non-negligible delays.
In Section 3 we study the case in which the random variables have general probability
distributions. We consider two approaches. First (Section 3.1) we analyze the effect of the
topology on the rate. We use stochastic comparison techniques to compare the rate of networks
with different topologies. We give examples of networks with different topologies, but
with the same rate. Then (Section 3.2) we analyze networks with the same topology but different
processing times. By defining a partial order on the set of distributions, we show that
deterministic (i.e. constant) processing times maximize the rate of computation. For this case,
it is shown in [ER1] that if the processing times are equal to λ^{-1}, the rate of the network is λ,
regardless of the number of processors in the network or its topology. In the next section we
show that in case the processing times are random and unbounded, the rate may be degraded
by a logarithmic factor in the number of processors. This occurs in the case of exponentially
distributed processing times. However, in this section we show that the exponential is the
worst, among a large and natural class of distributions (it yields the minimum rate within a
class of distributions).
In Section 4 we concentrate on the case of processing times that are exponentially distributed
random variables with mean λ^{-1}. We prove that the rate is between λ/(4 log Δ)
and λ/log δ, where Δ (δ) is the maximum (minimum) vertex in-degree or out-degree.
Hence, for regular-degree (either in- or out-degree) networks, the rate is Θ(λ/log δ). We also
compute the exact rate and the stationary probabilities for the extreme cases of a directed
cycle and a complete graph. Finally, we study the effect of having one processor that runs
slower than the rest of the processors, and we show that in some sense, the directed cycle
network is more sensitive to such a bottleneck processor than a complete network.
In the last section we show that it is easy to extend the results to networks with non-negligible
transmission delays. We consider the exponential distribution case, and show that
adding transmission delays to a regular degree network may reduce its rate by at most a
constant factor, provided that they are not larger (w.r.t. the partial order) than the processing
times. In networks with processing times exponentially distributed with mean 1, and larger
delays with mean λ^{-1}, we compare the results with those of [ER2], where it was shown that for
the corresponding deterministic case the rate is λ. In the probabilistic case of a regular-degree
network, the rate is at least Θ(λ/log δ). Thus, in both cases (small and large delays), the rate
of a bounded degree network is reduced only by a constant factor.
Previous Work
There exist several results related to our results in Section 3.2 in the literature on stochastic
petri-nets. For instance, dominance results for rather general stochastic petri-nets have been
obtained in [Ba89] and more recently in [BL91] by using Subadditive Ergodic Theory (e.g.
[K73]). It should be noted, however, that the proofs we provide for the simple synchronizer
are different and much simpler and do not require heavy mathematical tools. Other stochastic
ordering studies exist. Papers on acyclic networks and fork-join queues are [PV89] and [BM89,
BMS89, BMT89], respectively. For closed queueing networks the effect of increasing the service
rate of a subset of stations for systems such that the distribution of the number of works in
each station has a product form solution is studied in [SY86].
A model similar to our model in Section 4 is considered in [BT89], where it is claimed
that the rate is Θ(1/log δ_out), for regular networks with out-degree equal to δ_out, identically
exponentially distributed transmission delays with mean 1, and negligible processing times.
In [BS88] only a lower bound of Θ(1/log δ_in) on the rate is given, for regular networks with
in-degree equal to δ_in, with negligible transmission delays, and identically exponentially distributed
processing times.
theory can be used to derive more general lower bounds on the rate. A bottleneck problem
related to ours has been considered by [B88] where an asymptotic analysis of cyclic queues as
the number of costumers grows is presented. Asymptotic performance of stochastic marked
graphs as the number of tokens grows is studied in [M2]. The class of networks with exponentially
distributed processing times belongs to the more general model of stochastic petri nets
(see [Ma89] for a survey), where it is usually assumed that the state space (of exponential size,
in our case) is given.
The network is modeled by a (finite) directed, strongly connected graph G(V, E), where
V = {1, ..., n} is the set of vertices of the graph and E ⊆ V × V is the set of directed edges. A
vertex of the graph corresponds to a processor that is running its own program, and a directed
edge (u, v) corresponds to a communication link from processor u to processor v. In this case,
we shall say that u is an in-neighbor of v, and v is an out-neighbor of u in the network. The
processors communicate by sending messages along the communication links. To facilitate the
presentation, we assume that the message transmission delays are negligible. At the end we
briefly discuss the case of non-negligible transmission delays.
Initially, all processors are in a quiescent state, in which they send no messages and perform
no computations. Once a processor leaves the quiescent state, it never reenters it and is
considered awake. When awakened, each processor operates in phases as described in the
sequel. Assume that at an arbitrary time, t(v), processor v leaves the quiescent state and
enters its first processing state, PS 0 (this may be caused by a message from another processor,
or a signal from the outside world, not considered in our model). Then, processor v remains in
PS_0 for τ_0(v) units of time and then transits to its first waiting state, WS_0. From this time
on, let PS_k and WS_k, k ≥ 0, denote the processing state and the waiting state, respectively,
for the k-th phase. Observe that we are concerned with the rate of computation of the network;
the nature of the computation is of no concern to us here. Thus we take the liberty of denoting
with the same symbol the k-th processing state of all the processors.
The transition rules between states are as follows: If a processor v transits from state PS_k
to WS_k, it sends one message on each of its outgoing edges. These messages are denoted by
M_k. Note that this labeling is not needed for the implementation of the protocol; it is used
only for its analysis. When v sends the M_k messages, we say that v has completed its k-th
processing step.
If a processor v is in state WS_k, and has received a message (M_k) on each of its incoming
edges, it removes one message from each of its incoming edges, transits to state PS_{k+1}, remains
there for τ_{k+1}(v) units of time and then transits to state WS_{k+1}. Otherwise (if on at least one
incoming edge, M k has not yet arrived) processor v remains in state WS k until it receives a
message from each of its in-neighbors, and then operates as described above.
The processing times, τ_k(v), correspond to the time it takes for processor v to complete
the k-th computation step. The processing times τ_k(v), k ≥ 0, are positive, real-valued
random variables defined over some probability space.
For every k ≥ 0 and every v ∈ V, let t^G_k(v) (or t_k(v), whenever G is understood) be the k-th completion time, i.e.,
the time at which processor v sends messages M_k in network G. Let the in-set of a vertex v in
G, IN_G(v) (or simply IN(v)), be the set of vertices in G that have an edge to v, including v
itself, that is, IN_G(v) = {u : (u, v) ∈ E} ∪ {v}. With this notation, the operation of processor
v is as follows. Once v has sent a message M_k at time t_k(v), it waits until all processors
with an edge to it send message M_k, and then starts its (k+1)-st computation step; that is,
after the maximum of t_k(u), u ∈ IN(v), it starts the (k+1)-st computation step, which takes
τ_{k+1}(v) units of time, and then sends out M_{k+1}. For this reason we shall assume in the rest of
the paper that for each vertex v the edge v → v is in E. The evolution of the network can be
described by the following recursions:
    t_0(v) = t(v) + τ_0(v),
    t_{k+1}(v) = τ_{k+1}(v) + max_{u ∈ IN(v)} t_k(u),   k ≥ 0.        (1)
It is interesting to note that the completion times t_k(v) have a simple graph theoretic
interpretation. For a vertex v, let S_k(v) be the set of all directed paths of length k ending in v.
For k = 0, the only path of length 0 ending in v consists of v itself. For a path P_k = (v_0, v_1, ..., v_k = v) in S_k(v), let
T(P_k) = t(v_0) + τ_0(v_0) + τ_1(v_1) + ... + τ_k(v_k). Thus, {T(P_k) : P_k ∈ S_k(v)}
is a set of random variables; each one is the sum of k+1 random variables. Note that
these random variables are not independent, even if the τ_i(v)'s are independent. The explicit
computation of t_k(v) is as follows.
Theorem 2.1 For every k ≥ 0 and every v ∈ V,
    t_k(v) = max { T(P_k) : P_k ∈ S_k(v) }.
Proof: By induction on k. For k = 0, note that the only path of length 0 to v is v itself, i.e.,
S_0(v) = {(v)}. Hence, t_0(v) = t(v) + τ_0(v) = T((v)).
Assume that the Theorem holds for k ≥ 0. From the recursion above we have that
    t_{k+1}(v) = τ_{k+1}(v) + max_{u ∈ IN(v)} t_k(u).
By the inductive hypothesis, t_k(u) = max{T(P_k) : P_k ∈ S_k(u)}; since every path in S_{k+1}(v) consists of a path in S_k(u), for some u ∈ IN(v), followed by the edge u → v,
which gives the desired result.
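A minimal sketch of recursion (1), useful for simulating the synchronizer on an arbitrary graph; the graph representation, the sampling function, and the example ring are ours and are not part of the paper's model.

import random

def completion_times(in_neighbors, wake, sample_tau, K):
    # in_neighbors[v]: list of in-neighbors of v, including v itself (self-loop).
    # wake[v]: waking time t(v); sample_tau(v, k): draws the processing time tau_k(v).
    # Returns a list t with t[k][v] for k = 0..K, following recursion (1).
    V = list(in_neighbors)
    t = [{v: wake[v] + sample_tau(v, 0) for v in V}]
    for k in range(1, K + 1):
        prev = t[-1]
        t.append({v: sample_tau(v, k) + max(prev[u] for u in in_neighbors[v]) for v in V})
    return t

# Example: a directed 3-cycle with self-loops and exponential(1) processing times.
ring = {0: [0, 2], 1: [1, 0], 2: [2, 1]}
t = completion_times(ring, {v: 0.0 for v in ring},
                     lambda v, k: random.expovariate(1.0), K=1000)
print("empirical rate ~", 1000 / max(t[-1].values()))   # comes out around 0.6 for this 3-cycle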
The Performance Measures
The most important performance measures investigated in this paper are the completion
times t_k(v), k ≥ 0, v ∈ V. A related performance measure of interest is the counting process
N^G_t(v) (or simply N_t(v)), associated with processor v, defined by
    N_t(v) = max { k : t_k(v) ≤ t };
that is, N_t(v) is the number of computation steps (minus 1) completed by v up to time t,
or the highest index of an M_k message that has been sent by v up to time t. Similarly,
N^G_t = Σ_{v ∈ V} N^G_t(v) denotes the total number of processing steps (minus n) executed in the
network up to time t. The following claim indicates that no processor can advance (in terms
of executed processing steps) too far ahead of any other processor.
Claim 2.2 Let d be the diameter of a directed, strongly connected graph G. Then for all
u, v ∈ V and all t ≥ 0, |N_t(u) - N_t(v)| ≤ d.
Proof: Denote by l the length of a simple path from u to v. A simple inductive argument
on l shows that the fact that the last message sent by u up to time t is M_{N_t(u)} implies that
N_t(v) ≤ N_t(u) + l ≤ N_t(u) + d. The same argument for a simple path from
v to u proves that N_t(u) - N_t(v) ≤ d.
Another important performance measure is the computation rate, R_G(v) (or simply R(v)),
of processor v in network G, defined by
    R_G(v) = lim_{t → ∞} N^G_t(v) / t,
whenever the limit exists. Similarly, the computation rate of the network is defined by
    R_G ≜ lim_{t → ∞} N^G_t / t.
Claim 2.2 implies that for every u, v ∈ V, R(u) = R(v), whenever the limits exist.
3 GENERAL PROBABILITY DISTRIBUTIONS
In this section we compare the performance of different networks, with general distributions
of the processing times τ_k(v). We first show that adding edges to a network with an arbitrary
topology slows down the operation of each of the processors in the network. We show how
the theory of graph embedding can be used to compare the rates of different networks. As
an example we present graphs, which have the same rate (up to a constant factor) for general
distributions, although they have different topologies. Finally, we compare networks with the
same (arbitrary) topology but different distributions of the processing times. Specifically, we
show that determinism maximizes the rate, and exponential distributions minimize the rate,
among a large class of distributions.
3.1 Topology of the Network
3.1.1 Monotonicity
Here we show that adding edges to a network with an arbitrary topology slows down the operation
of each of the processors in the network. The basic methodology used is the sample
path comparison; that is, we compare the evolution of message transmissions in different networks
for every instance, or realization, of the random variables τ_k(v). This yields a stochastic
ordering between various networks [Ro83], [S84].
Theorem 3.1 Let G(V, E) be a graph, and let E' be a set of directed edges. Let H(V, E ∪ E')
be the graph obtained from G by adding the edges of E'. Assume that each processor v, 1 ≤ v ≤ n,
awakens in both G and H at the same time t(v). For every realization of the
random variables τ_k(v), k ≥ 0, 1 ≤ v ≤ n, the following inequalities hold:
    t^G_k(v) ≤ t^H_k(v), for all k ≥ 0 and all v ∈ V.
Proof: The proof is by induction on k. The basis of the induction is trivial since
t^G_0(v) = t(v) + τ_0(v) = t^H_0(v). The induction hypothesis is t^G_k(v) ≤ t^H_k(v). We need to show that t^G_{k+1}(v) ≤ t^H_{k+1}(v). From
equation (1) we have that
    t^G_{k+1}(v) = τ_{k+1}(v) + max_{u ∈ IN_G(v)} t^G_k(u),   t^H_{k+1}(v) = τ_{k+1}(v) + max_{u ∈ IN_H(v)} t^H_k(u).    (2)
Since IN_G(v) ⊆ IN_H(v), it follows from the induction hypothesis that
    max_{u ∈ IN_G(v)} t^G_k(u) ≤ max_{u ∈ IN_H(v)} t^H_k(u),
and therefore it follows from (2) that t^G_{k+1}(v) ≤ t^H_{k+1}(v), for all v.
The previous theorem implies immediately
Corollary 3.2 Under the conditions of Theorem 3.1 we have that N^G_t(v) ≥ N^H_t(v) and
R_G(v) ≥ R_H(v) (when the limits exist) for all v ∈ V. Also N^G_t ≥ N^H_t.
Remark 1 Notice that no assumption was made about the random variables τ_k(v). In particular, they need not be independent.
Remark 2 The sample path proof above implies that the random variable N^G_t is stochastically
larger than the random variable N^H_t, denoted N^G_t ≥_st N^H_t, i.e., Pr{N^G_t ≥ α} ≥ Pr{N^H_t ≥ α} for all α.
Remark 3 The above implies that if one starts with a simple, directed cycle (a strongly connected
graph with the least number of edges) and successively adds edges, a complete graph is
obtained, without ever increasing the rate.
3.1.2 Embedding
The theory of graph embedding has been used to model the notion of one network simulating
another on a general computational task (see for example [R88]). Here we show how the
notion of graph embedding can be helpful in comparing the behavior and the rates of different
networks controlled by the synchronizer.
An embedding of graph G in graph H is specified by a one-to-one assignment α : V_G → V_H
of the nodes of G to the nodes of H, and a routing ρ : E_G → Paths(H) of each edge of G along
a distinct path in H. The dilation of the embedding is the maximum amount that the routing
ρ "stretches" any edge of G:
    D = max { |ρ(e)| : e ∈ E_G }.
The dilation is a measure of the delay incurred by the simulation according to the embedding.
The following theorem is a generalization of Theorem 3.1.
Theorem 3.3 Let (α, ρ) be an embedding with dilation D, of a graph G(V_G, E_G) in a graph
H(V_H, E_H), and assume that the processing times of v in G and of α(v) in H have the same distribution. For every realization of the random variables,
the following inequalities hold:
    t^G_k(v) ≤ t^H_{kD}(α(v)), for all k ≥ 0 and all v ∈ V_G.
Proof: For each path of length k ≥ 0 in G, P^G_k = (v_0, v_1, ..., v_k),
one can use ρ to construct a path in H of length less than or equal to k·D from α(v_0) to α(v_k),
passing through α(v_1), ..., α(v_{k-1}).
Moreover, there is such a path of length exactly k·D, since one can revisit vertices (each vertex
has a self-loop) each time, between a pair of vertices α(v_i) and α(v_{i+1}), there are fewer than D
edges in the path ρ(v_i, v_{i+1}); that is, there is a path in H of the form
    P^H_{kD} = (α(v_0), ..., α(v_1), ..., α(v_k)),
where each segment from α(v_i) to α(v_{i+1}) follows ρ(v_i, v_{i+1}), padded with repetitions of self-loops so that its length is exactly D.
We are assuming a common realization of the processing times τ_k(u) for every u ∈ V_G. It follows that for
every path P^G_k, there exists a path P^H_{kD} ∈ S_{kD}(α(v)), such that
    T(P^G_k) ≤ T(P^H_{kD}),
and thus, by Theorem 2.1, t^G_k(v) ≤ t^H_{kD}(α(v)).
Corollary 3.4 Under the conditions of Theorem 3.3 we have that D · N^G_t(v) ≥ N^H_t(α(v)) and
R_G(v) ≥ R_H(α(v))/D (when the limits exist) for all v ∈ V_G.
Remarks 1-3 hold in this case too.
A simple corollary of Theorem 3.3 is that if G is a subgraph of H, then N^G_t(v) ≥ N^H_t(v).
This is because if G is a subgraph of H , then there is an embedding from G in H with dilation 1.
In addition, if the number of vertices in G and H are equal, and the dilation of the embedding
is D, then G is a D-spanner of H (e.g. [PS89], [PU89]), and we have the following.
Corollary 3.5 If H has a D-spanner G, then R_G/D ≤ R_H ≤ R_G.
A motivation for the theory of embedding is simulation. Namely, one expects that
if there is an embedding (ff; ae) from G in H with dilation D, then the architecture H can
simulate T steps of the architecture G on a general computation in order of D \Delta T steps, by
routing messages according to ae. In our approach, we compare the performance of G and of H
under the synchronizer, without using ae; the embedding is used only for the purpose of proving
statements about the performance of the networks. Consider for example the following two
results of the theory of embedding [R88].
Proposition 3.6 For all n, one can embed the order n Shuffle-Exchange graph in the
order n deBruijn graph with dilation 2. One can embed the order n deBruijn graph in the order
n Shuffle-Exchange graph with dilation 2.
Proposition 3.7 For all n, one can embed the order n Cube-Connected-Cycles graph in
the order n Butterfly graph with dilation 2. One can embed the order n Butterfly graph in the
order n Cube-Connected-Cycles graph with dilation 2.
By Theorem 3.3, the average rates of the graphs of Proposition 3.6 (3.7) are equal up to a
constant factor of 2, provided that the processing times of corresponding processors have the
same distributions (regardless of what these distributions are).
3.2 Probability Distributions
3.2.1 Deterministic Processing Times
Now we compare networks, say G(V, E) and H(V, E), having the same (arbitrary) topology, but
operating with different distributions of the random variables τ_k(v). To that end, we assume that
the processing times τ^G_k(v) are independent and have finite mean E[τ^G_k(v)] = λ_v^{-1}.
We say that λ_v is the potential rate of v, as this would be the rate of v if it did not have
to wait for messages from its in-neighbors. The processing times in H are distributed as in
G except for a subset V' ⊆ V of processors, for which the processing times are assumed to
be deterministic, i.e., τ^H_k(v) = λ_v^{-1} for every v ∈ V', regardless of the
specific realization of the random variables in G. Again, it is assumed that the
processors are awakened at the same time in both networks.
Theorem 3.8 Under the above conditions we have that
    t^H_k(v) ≤ E[t^G_k(v)]
for all processors v, and k ≥ 0. The expectation is taken over the respective distributions of
processing times of processors of G in V'.
Proof: The proof is by induction on k. For the basis, observe that
    t^H_0(v) = t(v) + λ_v^{-1} = E[t^G_0(v)]   for v ∈ V',
    t^H_0(v) = t(v) + τ^G_0(v) = E[t^G_0(v)]   for v ∉ V'.
The induction hypothesis is t^H_k(v) ≤ E[t^G_k(v)], and we need to show that t^H_{k+1}(v) ≤ E[t^G_{k+1}(v)]
for all v. From (1) we have that
    t^H_{k+1}(v) = τ^H_{k+1}(v) + max_{u ∈ IN(v)} t^H_k(u)
for every v. Jensen's inequality implies
    max_{u ∈ IN(v)} E[t^G_k(u)] ≤ E[ max_{u ∈ IN(v)} t^G_k(u) ].
By the induction hypothesis,
    t^H_{k+1}(v) ≤ τ^H_{k+1}(v) + E[ max_{u ∈ IN(v)} t^G_k(u) ] ≤ E[t^G_{k+1}(v)],
since E[τ^G_{k+1}(v)] ≥ τ^H_{k+1}(v) and the processing times are independent.
Remark 4 Theorem 3.8 holds also if the processing times τ^H_k(v) of processors v of H in V'
are deterministic, but not necessarily the same for every k.
When all processing times in the network H are deterministic, the computation of the
network rate is no longer a stochastic problem, but a combinatorial one. Thus, a conclusion
of Theorem 3.8 is that in this case, the computation rate of H , obtained via combinatorial
techniques ([ER1] and [ER2]), yields an upper bound on the average rate of G. Furthermore,
if the times t^H_k(v) are computed, they give a lower bound on E[t^G_k(v)], for every k ≥ 0.
3.2.2 More Variable Processing Times
More generally, we study the effect of substituting a random variable in the network (e.g. the
processing time of a given processor, for a given computational step) with a given distribution,
for a random variable with another distribution on the rate of the network, and define an
ordering among probability distributions.
Recall that a function h is convex if for all 0 ≤ α ≤ 1 and all x, y, h(αx + (1-α)y) ≤ αh(x) + (1-α)h(y). A random variable X with
distribution F_X is said to be more variable than
a random variable Y with distribution F_Y, denoted X ≥_c Y or F_X ≥_c F_Y, if E[h(X)] ≥ E[h(Y)]
for all increasing convex functions h. The partial order ≥_c is called convex order (e.g. [Ro83],
[S84]). Intuitively X will be more variable than Y if F_X gives more weight to the extreme
values than F_Y; for instance, if E[X] = E[Y], then E[X^2] ≥ E[Y^2], since x^2 is a
convex function.
Here we compare networks, say G(V, E) and H(V, E), having the same arbitrary topology,
but some of the processing times in G are more variable than the corresponding processing
times in H, i.e., for some k's and some v's, τ^G_k(v) ≥_c τ^H_k(v), while all other processing times
have the same distributions in both graphs. When t^G(v) = t^H(v) and the processing times in
G (H) are independent of each other, the following holds.
Theorem 3.9 Under the above conditions the following holds for all processors v, and k ≥ 0:
    t^G_k(v) ≥_c t^H_k(v).
Proof: From Theorem 2.1 we have
    t_k(v) = max { T(P_k) : P_k ∈ S_k(v) },
where P_k is a directed path of length k ending in v, and T(P_k) is the sum of the processing times along it.
From the fact that the τ's are positive, and max and + are convex increasing functions,
it follows that t_k(v) is a convex increasing function of its arguments {τ_i(u)}. Now we can use Proposition 8.5.4 in [Ro83]:
Proposition 8.5.4: If X_1, ..., X_m are independent r.v., and Y_1, ..., Y_m are independent
r.v., and X_i ≥_c Y_i for every i, then E[g(X_1, ..., X_m)] ≥ E[g(Y_1, ..., Y_m)] for every
increasing function g which is convex in each of its arguments.
The proof of the theorem now follows since by assumption the τ's in G are independent, the
τ's in H are independent, and τ^G_k(v) ≥_c τ^H_k(v). Note that the random variables
{T(P_k)} are not independent.
Corollary 3.10 Under the above conditions N^G_t(v) ≤_c N^H_t(v), R_G(v) ≤ R_H(v) and R_G ≤ R_H.
In the next section we show that if the processing times are independent and have the same
exponential distribution with mean λ^{-1}, then the rate of any network is at least λ|V|/(4 log |V|).
We conclude this subsection by characterizing a set of distributions for which the same lower
bound holds.
Assume that the expected time until a processor finishes a processing step given that it
has already been working on that step for α time units is less than or equal to the original expected
processing time for that step. Namely, we assume that the distributions of the processing
times τ_k(v), for all k ≥ 0 and v ∈ V, are new better than used in expectation (NBUE) (e.g. [Ro83],
[S84]), so that if τ is a processing time, then E[τ - α | τ > α] ≤ E[τ] for every α ≥ 0.
Let G d (V; E) be a network with deterministic processing times, let G e (V; E) be a network
with corresponding processing times with the same mean, but independent, exponentially
distributed, and let G(V; E) be a network with corresponding processing times with the same
mean and independent, but with any NBUE distribution. The following theorem follows from
the fact that the deterministic distribution is the minimum, while the exponential distribution
is the maximum with respect to the ordering - c , among all NBUE distributions [Ro83], [S84].
Theorem 3.11 For every k ≥ 0 and every v ∈ V, it holds that t^{G_d}_k(v) ≤ E[t^G_k(v)] ≤ E[t^{G_e}_k(v)].
Some examples of distributions which are less variable than the exponential (with appropriate
parameters) are the Gamma, Weibull, Uniform and Normal.
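A quick numerical illustration of this ordering (a sketch, not from the paper): simulate recursion (1) on the same small graph with deterministic, uniform, and exponential processing times of equal mean 1, and compare the resulting completion times. The graph, step counts, and helper names are assumptions made here for the example.

import random

def t_K(in_neighbors, sample, K):
    # Completion time after K steps under recursion (1), all processors waking at time 0.
    t = {v: sample() for v in in_neighbors}
    for _ in range(K):
        t = {v: sample() + max(t[u] for u in in_neighbors[v]) for v in in_neighbors}
    return max(t.values())

ring = {0: [0, 2], 1: [1, 0], 2: [2, 1]}        # directed 3-cycle with self-loops
K, runs = 500, 200
for name, sample in [("deterministic", lambda: 1.0),
                     ("uniform(0,2)", lambda: random.uniform(0, 2)),
                     ("exponential(1)", lambda: random.expovariate(1.0))]:
    avg = sum(t_K(ring, sample, K) for _ in range(runs)) / runs
    print(name, "average t_K ~", round(avg, 1))
# Expected ordering of the averages: deterministic <= uniform <= exponential,
# since the uniform and exponential distributions here are NBUE with the same mean.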
We should conclude this section by pointing out that the interested reader can find similar
results for rather general stochastic petri-nets in [Ba89] and [BL91].
In this section we assume that the processing times τ_k(v), k ≥ 0, are independent and
exponentially distributed with mean λ^{-1}. We first consider general topologies and derive upper
and lower bounds on the expected values of t k (v), and thus obtain upper and lower bounds on
the rate of the network. These bounds depend on the in-degrees and out-degrees of processors
in the network, but not on the number of processors itself. Then, exploring the Markov chain
of the underlying process, we derive the exact rates of two extreme topologies: the directed
ring and the fully connected (complete) network. For these two topologies we study also the
effect of having a single slower processor within the network.
4.1 Upper and Lower Bounds
Denote by d_out(v) (d_in(v)) the number of edges going out of (into) v in G, and let
    Δ_out = max_{v ∈ V} d_out(v),   Δ_in = max_{v ∈ V} d_in(v),
    δ_out = min_{v ∈ V} d_out(v),   δ_in = min_{v ∈ V} d_in(v).
Lemma 4.1 (Lower Bound)
(i) For every k ≥ 0 there exists a processor v ∈ V for which
    E[t_k(v)] ≥ k λ^{-1} log δ_out.
(ii) For every k ≥ 0, and every v ∈ V, the following holds:
    E[t_k(v)] ≥ k λ^{-1} log δ_in.
Proof: We present a detailed proof for part (i) only; the proof of part (ii) is discussed at the
end. We start by proving that for every k ≥ 0, there exists a (not necessarily simple) path
(v_0, v_1, ..., v_k), such that
    E[t_k(v_k)] ≥ E[t_0(v_0)] + k λ^{-1} log δ_out.
We assume the statement holds for k ≥ 0, and prove it for k + 1. The proof of the basis
is identical. Let v_{k+1} be the processor for which the processing time during the (k+1)-st
computational step is maximum, among the out-neighbors of v_k, i.e.,
    τ_{k+1}(v_{k+1}) = max { τ_{k+1}(u) : (v_k, u) ∈ E }.
Since v_{k+1} can not start the (k+1)-st computational step before v_k finishes the k-th computational
step, we have that t_{k+1}(v_{k+1}) ≥ t_k(v_k) + τ_{k+1}(v_{k+1}), where τ_{k+1}(v_{k+1}) is equal
to the maximum of at least δ_out independent and identically distributed exponential random
variables with mean λ^{-1}. It is well known (e.g. [BT89], [D70]) that the mean of the maximum
of c such random variables is at least λ^{-1} log c. It follows that
    E[t_{k+1}(v_{k+1})] ≥ E[t_k(v_k)] + λ^{-1} log δ_out.
We can choose v_0 to be the one with the latest waking time t(v_0), and thus E[t_0(v_0)] ≥ 0.
Therefore, for every k ≥ 0, there exists a processor v such that
    E[t_k(v)] ≥ k λ^{-1} log δ_out,
completing the proof of (i). The proof of part (ii) evolves along the same lines, except that we
start from v_k and move backwards along a path.
Remark 5 From its proof, one can see that Lemma 4.1 holds for any distribution F of the
processing times, for which the expected value m_c of the maximum of c independent r.v. with
distribution F exists. In this case it implies that R_v ≤ 1/m_c, with c = δ_out (or δ_in).
Remark 6 Lemma 4.1 implies that for the exponential case, the slowdown of the rate is at
least logarithmic in the maximum degree of G. By Remark 5, there are distributions (not
NBUE, by Theorem 3.11) for which the slowdown is larger; an example is F(x) = 1 - x^{-2}, x ≥ 1, for
which the slowdown is at least the square root of the maximum degree of G ([D70]
pp. 58).
Lemma 4.2 (Upper Bound)
(i) For every k ≥ 1, for every processor v,
    E[t_k(v)] ≤ 4 k λ^{-1} log Δ_in.
(ii) For every k ≥ 1, for every processor v,
    E[t_k(v)] ≤ 4 k λ^{-1} log Δ_out.
Proof: Again we restrict ourselves to the proof of part (i). Recall that Theorem 2.1 states
that for every k and v, t_k(v) = max{T(P_k) : P_k ∈ S_k(v)}. Also, for a path P_k, T(P_k) is the sum
of k+1 independent exponential random variables with mean λ^{-1}.
By Proposition D.2 of the Appendix, for every c > 4 the tail probability Pr{T(P_k) > c k λ^{-1} log Δ_in}
decays exponentially in k log Δ_in (here we use the fact that log 2 / log Δ_in ≤ 1). Since the number of
paths in S_k(v) is at most Δ_in^k, a union bound gives that Pr{t_k(v) > c k λ^{-1} log Δ_in} decays
exponentially in k for every c > 4, and integrating this tail beyond 4 k λ^{-1} log Δ_in yields the
claimed bound on E[t_k(v)].
Combining Lemma 4.1 and Lemma 4.2 we obtain:
Theorem 4.3 For every processor v,
    λ / (4 log Δ_in) ≤ R(v) ≤ λ / log δ_in,
    λ / (4 log Δ_out) ≤ R(v) ≤ λ / log δ_out.
4.2 Exact Computations
Theorem 4.3 implies the following bounds for the rate of a directed cycle C_n and
of a complete graph K_n, where n is the number of processors:
    0.36 λ ≤ R_{C_n}(v) ≤ λ,
    λ / (4 log n) ≤ R_{K_n}(v) ≤ λ / log n.
In this section we shall compute the exact values for the rates of C n and K n . To that end
we consider the Markov chain associated with the network. This Markov chain is denoted by
X(t) = (X_1(t), ..., X_m(t)), where X_i(t) is the number of messages stored in the buffer
of edge i at time t, and m is the number of edges in the network. Note that a processor with a
positive number of messages on each of its in-coming edges is in a processing state. When such
a processor completes its processing (after an exponential time), one message is deleted from
each of its in-coming edges and one message is put on each of its out-going edges. We denote
by s_0 the state in which X_i = 1 for every i = 1, ..., m. Thus, the network can be represented as a
Marked Graph (e.g. [CHEP]).
does not change the total number of messages in a circuit in the network. Moreover, if the
network is strongly connected, then the Markov chain is irreducible. Therefore, the limiting
probabilities of the states s i of the chain exist, they are all positive and their
sum is equal to 1 (e.g. [C67],[Ro83]). However, as we shall see, N can be exponential in n,
therefore it is infeasible to compute the rate by directly solving the Markov chain. Here we
show how to solve the Markov chain for two network classes, without having to produce the
entire chain. We hope this combinatorial approach could be applied to other networks as well.
Let GX denote the transition diagram (directed graph) of the Markov chain X . Consider a
BFS (breadth first search) tree of GX , rooted at s 0 . The level L(v) of a vertex v will be equal
to the distance from s_0 to v. Thus, L(s_0) = 0. Denote by L_i, i ≥ 0, the set of vertices at level
i, and by L the number of levels of G_X.
4.2.1 A Simple Directed Cycle
We study the performance of a simple directed cycle of n processors, C_n.
It is not difficult to observe that the Markov chain associated with C_n corresponds
to that of a closed queueing network; we return to this approach later. Here we choose to use
a combinatorial approach.
Theorem 4.4
(i) All the states associated with C n , have the same limiting probability.
(ii) For any graph G which is not a simple directed cycle (i) does not hold.
Proof: (i) The proof follows from two observations. First, by symmetry, all the states in
one level have the same probability. Second, the indegree of any state in the transition diagram
is equal to its outdegree. Then a simple inductive argument can be used to prove part (i).
(ii) If G is not a cycle, then it has a node v, s.t. d_in(v) > 1. Let v_1 and v_2 be two nodes with
edges to v. Consider the state s, reached from s_0, by the processing completion (or in marked
graphs terminology, firing) of vertex v. The outdegree of s is equal to n \Gamma 1, because apart
from v, all vertices are still enabled. But the indegree of s is at most n \Gamma 2, because by the
firing of v_1, or of v_2, it is not possible to reach s, since there are no messages on the edges from
v_1 and v_2 to v, in s. Therefore, we have proved that d_in(s) ≠ d_out(s).
Consider the balance equation that holds at state s: P_1 + ... + P_k = (n-1) P_s, where P_1, ..., P_k
are the limiting probabilities of the states that have an edge to s, P_s
is the limiting probability of s, and k = d_in(s). We have just proved that k ≠ n - 1. It
follows that it is not possible that all the probabilities of the last equation are equal.
The next theorem states that each processor of C_n works at least at half of its potential
rate λ, regardless of the value of n.
Theorem 4.5 The rate R(v) of a processor in C_n is
    R(v) = λ n / (2n - 1),
and the limiting probability of each state is 1/N, where N is the number of states in the
associated chain.
Proof: If M is the number of states in which at least one message is in the edge going into
a processor, say v, then the running rate will be M/N times the expected firing rate λ. This
is because v will be enabled when it has more than 0 messages in its input edge, and since
all states have the same probability (Theorem 4.4), the percent of the time that v is enabled is
simply M/N.
The number of ways of putting n objects in k places is C(n + k - 1, k - 1).
It is not difficult to see that N = C(2n - 1, n - 1) and M = C(2n - 2, n - 1),
which gives the desired results.
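A small numerical check of these counts (a sketch; the closed form R(v) = λ n/(2n-1) is as reconstructed above, and the helper name is ours):

from math import comb

def cycle_rate(n, lam=1.0):
    # N: ways to place n tokens on the n cycle edges; M: states with at least one
    # token on a fixed edge (place the remaining n-1 tokens freely).
    N = comb(2 * n - 1, n - 1)
    M = comb(2 * n - 2, n - 1)
    return lam * M / N          # equals lam * n / (2n - 1)

for n in (2, 3, 10, 100):
    print(n, round(cycle_rate(n), 4))
# n = 2 gives 2/3 of the potential rate; the value decreases toward lam/2 as n grows.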
4.2.2 A Complete Graph
Let K n be a complete graph with n processors. Recall that N is the number of states in the
associated Markov chain, and let s 0 be the state in which each edge has one token. A state is
at level l, 0 ≤ l ≤ n - 1, if it can be reached from s_0 by the firing of l distinct processors. The limiting
probability of a state at level l is denoted by P (l).
Theorem 4.6 : The rate of a processor in K_n is
    R(v) = λ / (Σ_{l=1}^{n} 1/l), which is asymptotically λ / log n.
Proof: A simpler proof can be derived as in the proof of Theorem 4.10; here we give a
combinatorial proof which also yields the number and the limiting probabilities of the states
of the associated Markov chain.
We consider a Markov chain T , similar to the Markov chain associated with network K n .
The root of T , s 0 , is the state with a message in each edge. A state s will have one son for
each one of the enabled processors at state s; a son of s corresponds to the state arrived from
s by the firing (completion of a processing step) of one of the enabled processors in state s.
Note that in chain T there are several vertices corresponding to the same state of the chain
associated with K n .
In T, the number of states in level l is n!/(n-l)!, because each time a processor fires it
can not fire again until the rest of the processors have fired. Thus, the number N_T of states
in T is
    N_T = Σ_{l=0}^{n-1} n!/(n-l)!.
The number of states in which a given processor is enabled at level l, en(l) (edges from level l
to level l + 1), is
    en(l) = n! / (n (n-l-1)!),
because at level l there are n!/(n-l-1)! enabled processors, and by symmetry, each processor
is enabled the same number of times at each level.
Let us denote by P^T_l the limiting probability of a state of T in level l. One can show that
    P^T_l = (n-l-1)! / (n! Σ_{j=1}^{n} 1/j).
It follows that the percent of time that a processor is enabled is
    Σ_{l=0}^{n-1} en(l) P^T_l = 1 / Σ_{j=1}^{n} 1/j,
and its rate is λ times that quantity.
Corollary 4.7 For a network K_n, N = Σ_{l=0}^{n-1} C(n, l) = 2^n - 1, and the limiting probability of a state at level l is P(l) = l! P^T_l.
Proof: As noted before, it may be that two states of T correspond to the same state, say s,
of K_n. In fact, if a state of T is reached from s_0 by firing a sequence of processors of length k,
then all k! permutations of the processors in this sequence constitute a valid firing sequence,
which leads to the same state s. Thus, the limiting probability of a state s at level l is
P(l) = l! P^T_l.
The number of different states at level l is n!/(l!(n-l)!), and the total number of different
states is N = Σ_{l=0}^{n-1} n!/(l!(n-l)!) = 2^n - 1.
Corollary 4.8 Asymptotically, the rate of any network of n processors is between λn/2 and
λn/log n.
Observe that the best possible rate of a processor is 2/3 of the potential rate, in the case
of a cycle of two processors; adding more processors can only lower this rate, but not below
1/2. Yet, the rate of the network grows linearly with n. In the case of a complete graph, the
rate of a processor reduces as n grows, but also here the total number of computational steps
executed per unit time (≈ λn/log n) grows with n.
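The contrast between the two extreme topologies is easy to reproduce by simulating recursion (1). The sketch below (ours, with λ = 1) estimates the per-processor rate as k/t_k for a cycle and for a complete graph.

import random

def estimate_rate(in_neighbors, steps=2000, lam=1.0):
    # Per-processor rate estimated as steps / t_steps(v), using recursion (1).
    t = {v: random.expovariate(lam) for v in in_neighbors}
    for _ in range(steps):
        t = {v: random.expovariate(lam) + max(t[u] for u in in_neighbors[v])
             for v in in_neighbors}
    return steps / max(t.values())

def cycle(n):
    return {v: [v, (v - 1) % n] for v in range(n)}

def complete(n):
    return {v: list(range(n)) for v in range(n)}

for n in (4, 16, 64):
    print(n, "cycle ~", round(estimate_rate(cycle(n)), 3),
          "complete ~", round(estimate_rate(complete(n)), 3))
# The cycle stays above half the potential rate, while the complete graph's
# rate decays roughly like 1/log n, as the exact computations above indicate.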
4.3 Bottlenecks
Suppose that the potential rate of all processors of a graph is λ, except for one, which has a
lower rate μ < λ. We shall now show that such a bottleneck has a stronger effect in a network
which is a directed cycle, than in one which is a complete graph.
Consider the case of a simple directed cycle with n vertices, CB_n, where n - 1 processors
have rate λ, and one processor has rate μ. Using standard techniques of Queueing Theory, we
prove the following.
Theorem 4.9 The rate of a processor in CB_n is
    R = μ [ Σ_{l=1}^{n} ρ^l C(2n-l-2, n-2) ] / [ Σ_{l=0}^{n} ρ^l C(2n-l-2, n-2) ],   where ρ = λ/μ.
Proof: Let X_i, 1 ≤ i ≤ n, be the number of messages in the buffer of the incoming edge
to processor i, where processor n is the slow one. The total number of messages in the cycle is equal to n. Since this is a
closed queueing system, we have that the limiting probability of the system being in state
(k_1, ..., k_n) is given by the following product form [Ro]:
    P(k_1, ..., k_n) = K (1/λ)^{k_1 + ... + k_{n-1}} (1/μ)^{k_n}   if k_1 + ... + k_n = n,
and is equal to 0 otherwise, where K is a normalization constant that guarantees that the sum
of all the above probabilities is equal to 1. Thus, the probability of having l messages on the
incoming edge to processor n is proportional to ρ^l times the number of ways of distributing the
remaining n - l messages among the other n - 1 edges, which is C(n-l+n-2, n-2). Hence,
    Pr{X_n = l} = K' ρ^l C(2n-l-2, n-2),
where K' is the normalization factor determined by the condition Σ_{l=0}^{n} Pr{X_n = l} = 1.
Finally,
    R = μ Pr{X_n ≥ 1} = μ Σ_{l=1}^{n} K' ρ^l C(2n-l-2, n-2).
Now, observe that the rate is simply μ · Pr{X_n ≥ 1}, as the rate of the processor is μ
while there are messages in the incoming edge to processor n.
Several conclusions can be derived from Theorem 4.9. First, observe that the rate of the
cycle cannot exceed μn; thus the slow processor bounds the rate of the network. Moreover,
for a fixed n and a very slow processor (μ → 0 or ρ → ∞), the rate of the network is
approximately μn[1 - C(2n-2, n-2) ρ^{-n}]; namely, as ρ increases, the rate approaches its upper bound μn.
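A short sketch evaluating the product-form expression above (under that reconstruction; the function name is ours) shows how quickly the bottleneck dominates:

from math import comb

def bottleneck_cycle_rate(n, lam, mu):
    # Per-processor rate in CB_n: mu * Pr{X_n >= 1} under the product form,
    # with rho = lam / mu.
    rho = lam / mu
    weights = [rho ** l * comb(2 * n - l - 2, n - 2) for l in range(n + 1)]
    return mu * sum(weights[1:]) / sum(weights)

for mu in (1.0, 0.5, 0.1):
    r = bottleneck_cycle_rate(10, lam=1.0, mu=mu)
    print("mu =", mu, " per-processor rate ~", round(r, 4), " network rate ~", round(10 * r, 3))
# With mu = lam the value coincides with n/(2n-1); for small mu the network rate
# is essentially mu * n, i.e., the slow processor sets the pace of the whole cycle.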
Next, we consider the case where the graph is a complete graph KB_n. We continue to
assume that the rate of n - 1 processors is λ and the n-th processor is slower, operating at
rate μ < λ. We shall show that, for fixed λ and μ, as the number of processors n grows to infinity,
the influence of the slow processor diminishes, and in the limit, the rate of the network is the
same as that of a network with all processors running with the same rate λ.
Theorem 4.10 The rate of a processor in KB_n is at least [μ^{-1} + λ^{-1}(1 + log n)]^{-1}.
Proof: Suppose that the network is in state s_0 at a given time, and after some time T_1 it returns
to that state; then after some time T_2 it returns again, and so on. Then, {T_i, i ≥ 1}
is a sequence of non-negative independent random variables with a common distribution F,
and expected value E[T_i].
Denote by N(t) the number of events (returning to s_0) by time t. The counting process
{N(t), t ≥ 0} is a renewal process. Therefore, with probability 1,
    lim_{t → ∞} N(t)/t = 1/E[T_i]
(see, for example [Ro83]). Moreover, since each time the process returns to s_0, each processor
of the network has completed exactly one computational step, it follows that the rate of a processor in the
network is 1/E[T_i]. We proceed to bound E[T_i].
The expected time E[T_i] that it takes to return to s_0 is of the form
    α_j = Σ_{m=1}^{j} 1/((n-m)λ + μ) + Σ_{m=j+1}^{n} 1/((n-m+1)λ),
for some 1 ≤ j ≤ n, depending on when the slow processor completes a computational step. If
the system leaves s_0 because the slow processor completed a computational step, E[T_i] is α_1.
In general, if the j-th (1 ≤ j ≤ n) processor to complete a computational step, after leaving
s_0, is the slow processor, then E[T_i] is α_j.
The probability of E[T_i] being equal to α_j is not necessarily the same for every j, but for
the case μ ≤ λ, it holds that α_j < α_{j+1}. Thus, α_n
gives an upper bound on the time E[T] that it takes to return to s_0, and 1/α_n is a lower bound
on the rate of a processor in the network.
We have
    α_n = 1/μ + Σ_{i=1}^{n-1} 1/(iλ + μ) ≤ μ^{-1} + λ^{-1} Σ_{i=1}^{n-1} 1/i ≤ μ^{-1} + λ^{-1}(1 + log n).
We see that E[T_i] is at most μ^{-1} + λ^{-1}(1 + log n), and thus R(KB_n) is at least [μ^{-1} + λ^{-1}(1 + log n)]^{-1}.
For fixed μ, R_v is Θ(λ/log n), but observe that the rate of a processor cannot exceed
μ. However, when the number of processors n increases to infinity, the rate of a processor
decreases in proportion to 1/log n, as if the slower processor were not in the network.
We briefly discuss the case of non-negligible transmission delays. In this model the processing
times are random, as before, but the transmission delays are also random. Denote the transmission
delay of message M_k, k ≥ 0, along edge (u, v) by d_k(u, v).
It follows that the behavior of the system is described by the recursions
    t_{k+1}(v) = τ_{k+1}(v) + max_{u ∈ IN(v)} [ t_k(u) + d_k(u, v) ],   k ≥ 0.
Note that this system is not equal to the one of [BT89], in which the processing times are
negligible, and the delays non-negligible, with a self-loop in each processor (to model its
processing delay).
Let P_k = (v_0, v_1, ..., v_k = v) be a path of length k. It is easy to see how to modify
the definition of T(P_k) so as to include the transmission delays along the path.
Thus a theorem similar to Theorem 2.1 holds, and the corresponding results for general distributions hold as well.
Consider the case in which the processing times, as well as the transmission delays are
exponentially distributed, with the same mean, say 1. It is easy to see that Lemma 4.1 still
holds, and that Lemma 4.2 holds up to a factor of 2. Namely, by Theorem 3.9, a regular
network with non-negligible delays runs at the same rate as the same network with negligible
delays, up to a constant factor, provided that the delays are less than or equal (in the convex order) to the
processing times. In [ER1] we show that for a network with negligible delays and deterministic
processing times equal to 1, the rate of any network is equal to 1. Thus, in this case, random
processing times degrade the rate by at most a logarithmic factor in the maximum degree of
a processor.
Now, consider the case in which all processing times have mean 1, but the delays are
exponentially distributed. The rate in the corresponding deterministic case is
computed in [ER2], and thus, by Theorem 3.3, in our case the rate is at most that deterministic rate. One can prove
(using Proposition D.2) that also in the case of non-negligible delays, the rate is degraded
by at most a logarithmic factor in the maximum degree of a processor, with respect to the
(optimal) deterministic case, for any NBUE distribution.
As for the exact computations for networks with processing times and delays exponentially
distributed with mean 1, the rate of a simple cycle can be computed using the
same tools of queueing theory that we used in the case of negligible delays. To compute the
rate of a complete network K n things are not as straightforward; the structure of the Markov
process is more complicated, but by the arguments above, we have that the rate is between
1/(8 log n) and 1/ log n. However, using the ideas of embedding, let us show that the rate of K n
is at least 1/(4 log n). Let K' n be a complete network with negligible delays. Construct G from
K' n by inserting one vertex in each of its edges. By Theorem 3.1 (or also 3.9), the rate of any
processor in G is at least 1/(2 log n).
One can show that the rate of any processor v in K n is greater than or equal to half the rate of
the corresponding processor in G, using the fact that there is an embedding of K n into
G of dilation 2. Therefore, the rate of v in K n is at least 1/(4 log n).
6 CONCLUSIONS
In this paper we have studied the behavior of synchronizers in networks with random transmission
delays and processing times. We attempted to present a self-contained, general study
of the synchronizer performance, from the viewpoint of distributed algorithms, rather than
providing a deep mathematical study of the underlying stochastic process. In particular, we
were interested in comparing the behavior of synchronizers with random delays as opposed
to the usual approach of analyzing distributed algorithms with bounded delays. Our main
conclusion is that if the delays belong to the natural class of NBUE distributions, the rate of
the network is only degraded by a small, local (vertex degree) factor.
We presented several properties of the behavior of the synchronizer for general probability
distributions, and described techniques useful to compare the rate of the synchronizer running
in networks with different topologies.
For exponential distributions we showed that the expected duration of a round of computation
depends on the logarithm of a vertex degree, and hence, the rate of computation does
not diminish with the number of processors in the network. We presented techniques to
prove upper and lower bounds on the rate, and to obtain exact computations. We hope the
combinatorial approach of these techniques, which was applied to rings, complete networks
and regular degree networks, will be used in the future to obtain results for other topologies
as well.
7 APPENDIX
The following proposition (similar to pp. 672 in [BT89]) is used to prove the lower bounds on
the rate of a network.
Proposition 7.1 (D.2) Let X i be a sequence of independent exponential random variables
with mean 1/λ. For every positive integer k and any c > 4 log 2,
Pr[ X 1 + X 2 + ... + X k ≥ ck/λ ] ≤ 2^(-k).
Proof: Fix β in (0, λ), and let γ be a positive scalar. A direct calculation yields
E[ e^(β(X i - γ)) ] = ∫_0^∞ e^(β(x-γ)) λ e^(-λx) dx = (λ/(λ-β)) e^(-βγ).
In particular, we can choose γ sufficiently large so that E[ e^(β(X i - γ)) ] ≤ 1; the value
γ = (1/β) log(λ/(λ-β)) satisfies the last equation with equality. Using the independence of the random variables X i ,
we obtain
E[ e^(β Σ_{i=1..k} (X i - γ)) ] ≤ 1.
Using the Markov inequality, we obtain, for any a > 0,
Pr[ Σ_{i=1..k} (X i - γ) ≥ a ] ≤ e^(-βa).
This in turn implies that
Pr[ Σ_{i=1..k} X i ≥ ck/λ ] ≤ e^(-β(ck/λ - kγ)).
Set β = λ/2; for this choice of β we have γ = (2 log 2)/λ, so the bound equals 2^k e^(-ck/2),
which is at most 2^(-k) whenever c ≥ 4 log 2.
Acknowledgments
We would like to thank Gurdip Singh and Gil Sideman for helpful comments.
--R
"Complexity of Network Synchronization,"
"Reducing Complexities of Distributed Max-Flow and Breadth-First- Search Algorithms by Means of Network Synchronization,"
"Network Synchronization with Polylogarithmic Over- head,"
"Sojourn Times in Cyclic Queues - the Influence of the Slowest Server,"
"Ergodic Theory of Stochastic Petri Networks,"
"Concurrency in Heavily Loaded Neighborhood- Constrained Systems,"
"Estimates of Cycle Times in Stochastic Petri Nets,"
"Comparison Properties of Stochastic Decision Free Petri Nets,"
"Queueing Models for Systems with Synchronization Constraints,"
"The Fork-Join Queue and Related Systems with Synchronization Constraints: Stochastic Ordering and Computable Bounds,"
"Acyclic Fork-Join Queueing Networks,"
"Investigations of Fault-Tolerant Networks of Computers,"
Parallel and Distributed Computation
Markov Chains With Stationary Transition Probabilities
"Marked Directed Graphs,"
"Distributed Snapshots: Determining Global States of Distributed Systems,"
Order Statistics
"Lack of Global Clock Does Not Slow Down the Computation in Distributed Networks,"
"The Use of a Synchronizer Yields Maximum Rate in Distributed Networks,"
"Tentative and Definite Distributed Computations: An Optimistic Approach to Network Synchronization,"
"Subadditive Ergodic Theory,"
"Stochastic Petri Nets: An Elementary Introduction,"
"Analysis of a Distributed Scheduler for Communication Networks,"
"Performance Analysis Using Stochastic Petri Nets,"
"Fast Bounds for Stochastic Petri Nets,"
"Generating a Global Clock in a Distributed System,"
"Graph Spanners,"
"An Optimal Synchronizer for the Hypercube,"
"Stochastic Bounds on Execution Times of Task Graphs,"
"Shuffle-Oriented Interconnection Networks,"
"Stochastic Marked Graphs,"
"Analysis of Distributed Algorithms based on Recurrence Relations,"
Comparison Methods for Queues and Other Stochastic Models
"The effect of Increasing Service Rates in a Closed Queueing Network,"
--TR
Complexity of network synchronization
Parallel and distributed computation: numerical methods
Investigations of fault-tolerant networks of computers
Acyclic fork-join queuing networks
Concurrency in heavily loaded neighborhood-constrained systems
An optimal synchronizer for the hypercube
Stochastic Petri nets: an elementary introduction
Unison in distributed networks
The use of a synchronizer yields maximum computation rate in distributed networks
Upper and lower bounds for stochastic marked graphs
Distributed snapshots
Fast Bounds for Stochastic Petri Nets
Analysis of Distributed Algorithms based on Recurrence Relations (Preliminary Version)
Tentative and Definite Distributed Computations
Analysis of a Distributed Scheduler for Communication Networks
Shuffle-Oriented Interconnection Networks
--CTR
Julia Lipman , Quentin F. Stout, A performance analysis of local synchronization, Proceedings of the eighteenth annual ACM symposium on Parallelism in algorithms and architectures, July 30-August 02, 2006, Cambridge, Massachusetts, USA
Omar Bakr , Idit Keidar, Evaluating the running time of a communication round over the internet, Proceedings of the twenty-first annual symposium on Principles of distributed computing, July 21-24, 2002, Monterey, California
Jeremy Gunawardena, From max-plus algebra to nonexpansive mappings: a nonlinear theory for discrete event systems, Theoretical Computer Science, v.293 n.1, p.141-167, 3 February | message passing;distributed computing;exponential distribution;synchronisation;graph theory;probabilitydistributions;parallel programming;marked graphs;compiler;asynchronous network;transmission delays;bottleneck processor;index termsprogram compilers;processing times;performance measure;synchronized programs;distributed networks;computational steps;random variables;performance evaluation;message transmissiondelays;synchronous network;randomprocessing times;synchronizer |
629351 | Asynchronous Problems on SIMD Parallel Computers. | AbstractOne of the essential problems in parallel computing is: Can SIMD machines handle asynchronous problems? This is a difficult, unsolved problem because of the mismatch between asynchronous problems and SIMD architectures. We propose a solution to let SIMD machines handle general asynchronous problems. Our approach is to implement a runtime support system which can run MIMD-like software on SIMD hardware. The runtime support system, named P kernel, is thread-based. There are two major advantages of the thread-based model. First, for application problems with irregular and/or unpredictable features, automatic scheduling can move some threads from overloaded processors to underloaded processors. Second, and more importantly, the granularity of threads can be controlled to reduce system overhead. The P kernel is also able to handle bookkeeping and message management, as well as to make these low-level tasks transparent to users. Substantial performance has been obtained on Maspar MP-1. | Introduction
1.1. Can SIMD Machines Handle Asynchronous Problems?
The current parallel supercomputers have been developed as two major architectures: the
SIMD (Single Instruction Multiple Data) architecture and the MIMD (Multiple Instruction Multiple
Data) architecture. The SIMD architecture consists of a central control unit and many
processing units. Only one instruction can be executed at a time and every processor executes
the same instruction. Advantages of a SIMD machine include its simple architecture, which makes
the machine potentially inexpensive, and its synchronous control structure, which makes programming
easy [4, 34] and communication overhead low [26]. The designers of SIMD architectures have
been motivated by the fact that an important, though limited, class of problems fit the SIMD
architecture extremely well [35]. The MIMD architecture is based on the duplication of control
units for each individual processor. Different processors can execute different instructions at the
same time [15]. It is more flexible for different problem structures and can be applied to general
applications. However, the complex control structure of MIMD architecture makes the machine
expensive and the system overhead large.
Application problems can be classified into three categories: synchronous, loosely synchronous,
and asynchronous. Table I shows a few application problems in each of the three categories [13].
- The synchronous problems have a uniform problem structure. In each time step, every
processor executes the same operation over different data, resulting in a naturally balanced
load.
- The loosely synchronous problems can be structured iteratively with two phases: the computation
phase and the synchronization phase. In the synchronization phase, processors
exchange information and synchronize with each other. The computation load can also
be redistributed in this phase. In the computation phase, different processors can operate
independently.
- The asynchronous problems have no synchronous structure. Processors may communicate
with each other at any time. The computation structure can be very irregular and the load
imbalanced.
Table I: Classification of Problem Structures
Synchronous: Matrix algebra, Finite difference, QCD
Loosely synchronous: Molecular Dynamics, Irregular finite elements, Unstructured mesh
Asynchronous: N-queen problem, Region growing, Event-driven simulation
The synchronous problems can be naturally implemented on a SIMD machine and the loosely
synchronous problems on an MIMD machine. Implementation of the loosely synchronous problems
on SIMD machines is not easy; computation load must be balanced and the load balance activity
is essentially irregular. As an example, the simple O(n^2) algorithm for N-body simulation is
synchronous and easy to implement on a SIMD machine [12]. But the Barnes-Hut algorithm
(O(n log n)) for N-body simulation is loosely synchronous and difficult to implement on a SIMD
machine [3].
Solving the asynchronous problems is more difficult. First, a direct implementation on MIMD
machines is nontrivial. The user must handle the synchronization and load balance issues at
the same time, which could be extremely difficult for some application problems. In general, a
runtime support system, such as LINDA [2, 5], reactive kernel [27, 33], or chare kernel [30], is
necessary for solving asynchronous problems. Implementation of the asynchronous problems on
SIMD machines is even more difficult because it needs a runtime support system, and the support
system itself is asynchronous. In particular, the support system must arrange the code in such
a way that all processors execute the same instruction at the same time. Taking the N-queen
problem as an example, since it is not known prior to execution time how many processes will be
generated and how large each computation is, a runtime system is necessary to establish balanced
computation for efficient execution. We summarize the above discussion in Table II.
Various application problems require different programming methodologies. Two essential
Table II: Implementation of Problems on MIMD and SIMD Machines
MIMD: synchronous is easy; loosely synchronous is natural; asynchronous needs runtime support
SIMD: synchronous is natural; loosely synchronous is difficult; asynchronous is difficult and needs runtime support
programming methodologies are array-based and thread-based. The problem domain of most
synchronous applications can be naturally mapped onto an array, resulting in the array-based
programming methodology. Some other problems do not lend themselves to efficient programming
in an array-based methodology because of mismatch between the model and the problem structure.
The solution to asynchronous problems cannot be easily organized into aggregate operations on a
data domain that is uniformly structured. It naturally demands the thread-based programming
methodology, in which threads are individually executed and where information exchange can
happen at any time. For the loosely synchronous problems, either the array-based or the thread-based
programming methodology can be applied.
1.2. Let SIMD Machines Handle Asynchronous Problems
To make a SIMD machine serve as a general purpose machine, we must be able to solve
asynchronous problems in addition to solving synchronous and loosely synchronous problems.
The major difficulties in executing asynchronous applications on SIMD machines are:
- the gap between the synchronous machines and asynchronous applications; and
- the gap between the array processors and thread-based programming.
One solution, called the application-oriented approach, lets the user fill the gap between application
problems and architectures. With this approach, the user must study each problem and look
for a specific method to solve it [7, 36, 37, 39]. An alternative to the application-oriented approach
is the system-oriented approach, which provides a system support to run MIMD-like software on
SIMD hardware. The system-oriented approach is superior to the application-oriented approach
for three reasons:
- the system is usually more knowledgeable about the architecture details, as well as its
dynamic states;
- it is more efficient to develop a sophisticated solution in the system, instead of writing similar
code repeatedly in the user programs; and
- it is a general approach, and enhances the portability and readability of application programs.
The system-oriented approach can be carried out in two levels: instruction-level and thread-
level. Both of them share the same underlying idea: if one were to treat a program as data, and
a SIMD machine could interpret the data, just like a machine-language instruction interpreted
by the machine's instruction cycle, then an MIMD-like program could efficiently execute on the
SIMD machine [17]. The instruction-level approach implements this idea directly. That is, the
instructions are interpreted in parallel across all of the processors by control signals emanating
from the central control unit [41]. The major constraint of this approach is that the central
control unit has to cycle through almost the entire instruction set for each instruction execution
because each processor may execute different instructions. Furthermore, this approach must insert
proper synchronization to ensure correct execution sequence for programs with communication.
The synchronization could suspend a large number of processors. Finally, this approach is unable
to balance load between processors and unlikely to produce good performance for general
applications.
We propose a thread-based model for a runtime system which can support loosely synchronous
and asynchronous problems on SIMD machines. The thread-level implementation offers great
flexibility. Compared to the instruction-level approach, it has at least two advantages:
- The execution order of threads can be exchanged to avoid processor suspension. The load
can be balanced for application problems with irregular and dynamic features.
- System overhead will not be overwhelming, since granularity can be controlled with the
thread-based approach.
The runtime support system is named the Process kernel, or P kernel. The P kernel is thread-based
as we assign computation at the thread-level. The P kernel is able to handle the bookkeep-
ing, scheduling, and message management, as well as to make these low-level tasks transparent
to users.
1.3. Related research
Existing work on solving loosely synchronous and asynchronous problems on SIMD machines
was mostly application oriented [7, 36, 37, 39]. The region growing algorithm is an asynchronous,
irregular problem and difficult to run on SIMD machines [39]. The merge phase, a major part of
the algorithm, performs two to three orders of magnitude worse than its counterpart in MIMD
machines. That result is due to the communication cost and lack of a load balancing mechanism.
The authors concluded that "the behavior of the region growing algorithm, like other asynchronous
problems, is very difficult to characterize and further work is needed in developing a better model."
The two other implementations, the Mandelbrot Set algorithm [36] and the Molecular Dynamics
algorithm [7, 37], contain a parallelizable outer loop, but have an inner loop for which the number
of iterations varies between different iterations of the outer loop. This structure occurs in several
different problems. An iteration advance method was employed to rearrange the iterations for
SIMD execution, which was called SIMLAD (SIMD model with local indirect addressing) in [36],
and loop flattening in [37]. Recent studies have been conducted on the simulation of logic circuits
on a SIMD machine [6, 20]. The major difference between these works and our approach is that
we handle general asynchronous and loosely synchronous problems instead of studying individual
problems.
The instruction-level approach has been studied by a number of researchers [8, 9, 22, 25, 41,
40]. As mentioned above, the major restriction of this approach is that the entire instruction set
must be cycled through to execute one instruction step for every processor. A common method
to reduce the average number of instructions emanated in each execution cycle is to perform
global or's to determine whether the instruction is needed by any processor [9, 41]. It might
be necessary to insert barrier synchronizations at some points to limit the degree of divergence
for certain applications. Having a barrier at the end of each WHERE statement (as well as each
FORALL statement) is a good idea [8]. Other work includes an adaptive algorithm which changes
the order of instructions emanated to maximize the expected number of active processors [25].
Besides this issue, load balancing and processor suspension are also unsolved problems. The
applications implemented in these systems are non-communicating [41] or have a barrier at the
end of the program [9]. Collins has discussed the communication issue and proposed a scheme to
delay execution of communication instructions [8]. A similar technique is used in [9] for expensive
operations (including communication). However, it is not clear whether the method will work.
Many applications have dependences and their loads are imbalanced, inevitably leading to high
communication overhead and processor suspension. In summary, this instruction-level approach
has large interpreter overhead. A technique that literally transforms pure MIMD code into pure
SIMD code to eliminate this overhead has been proposed in [10].
Perhaps the works that are closest to our approach are Graphinators [17], early work on
combinators [21, 16], and the work on the interpretation of Prolog and FLAT GHC [19, 23, 24].
The Graphinators implementation is an MIMD simulator on SIMD machines. It was achieved by
having each SIMD processor repeatedly cycle through the entire set of possible "instructions."
Our work is distinguished from their work in terms of granularity control. The Graphinator
model suffered because of its fine granularity. The authors mentioned that "each communication
requires roughly a millisecond, unacceptably long for our fine-grained application" [17]. In the P
kernel, the user can control the grainsize of the process and the grainsize can be well beyond one
millisecond. Thus, the P kernel implementation can obtain acceptable performance. Besides, our
model is general, instead of dedicating it merely to the functional language, as in the Graphinator
model.
Our model is similar to the Large-Grain Data Flow (LGDF) model [18]. It is a model of
computation that combines sequential programming with dataflow-like program activation. The
LGDF model was implemented on shared memory machines. It can be implemented on MIMD
distributed memory machines as well [42, 30, 11, 14]. Now we show that the model can be
implemented on a SIMD distributed memory machine, too.
2. The P Kernel Approach - Computation Model and Language
The computation model for the P kernel, which originated from the Chare Kernel [30], is a
message-driven, nonpreemptive, thread-based model. Here, a parallel computation will be viewed
as a collection of processes, each of which in turn consists of a set of threads, called atomic
computations. Processes communicate with each other via messages. Each atomic computation
is then the result of processing a message. During its execution, it can create new processes or
generate new messages [1]. A message can trigger an atomic computation, whereas an atomic
computation cannot wait for messages. All atomic computations of the same process share one
common data area. Thus, a process P k consists of a set of atomic computations A k i and one
common data area.
Once a process has been scheduled to a processor, all of its atomic computations are executed on
the same processor. There is no presumed sequence to indicate which atomic computation will
be carried out first. Instead, it depends on the order of arrival of messages. Figure 1 shows the
general organization of processes, atomic computations, and common data areas. In general, the
number of processes is much larger than the number of processors, so that the processes can be
moved around to balance the load.
The P kernel is a runtime support system on a SIMD machine built to manipulate and schedule
processes, as well as messages. A program written in the P kernel language consists mainly of a
Figure 1: Process, atomic computation, and common data area.
collection of process definitions and subroutine definitions. A process definition includes a process
name preceded by the keyword process, and followed by the process body, as shown below.
process ProcName { <Common Data Area Declarations>
  entry Label1: (message msg1) <Code1>
  entry Label2: (message msg2) <Code2>
  ...
}
Here, bold-face letters denote the keywords of the language. The process body, which is enclosed
in braces, consists of declarations of private variables that constitute the common data area
of the process, followed by a group of atomic computation definitions. Each atomic computation
definition starts with a keyword entry and its label, followed by a declaration of the corresponding
message and arbitrary user code. One of the process definitions must be the main process.
The first entry point in the main process is the place the user program starts.
The overall structure of the P kernel language differs from that of the traditional programming
languages mainly in explicit declarations of (a) basic units of allocation - processes, (b) basic
units of indivisible computation - atomic computations, and (c) communication media - mes-
sages. With these fundamental structures available, computation can be carried out in parallel
with the assistance of primitive functions provided by the P kernel, such as OsCreateProc()
and other Os primitives. The user can write a program in the P kernel language, deal with the creation of
processes, and send messages between them. For details of the computation model and language,
refer to [29].
In the following, we will illustrate how to write a program in the P kernel language using
the N-queen problem as an example. The algorithm used here attempts to place queens on
the board one row at a time if the particular position is valid. Once a queen is placed on
the board, the other positions in its row, column, and diagonals, will be marked invalid for
any further queen placement. The program is sketched in Figure 2. The atomic computation
QueenInit in the Main process creates N processes of type SubQueen, each with an empty
board and one candidate queen in a column of the first row. There are two types of atomic
computations in process SubQueen: ParallelQueen and ResponseQueen. A common data area
consists of solutionCount and responseCount. Each atomic computation ParallelQueen receives
a message that represents the current placement of queens and a position for the next queen
to be placed. Following the invalidation processing, it creates new SubQueen or SeqQueen
processes by placing one queen in every valid position in the next row. The atomic computation
ResponseQueen in processes SubQueen and Main counts the total number of successful queen
configurations. It can be triggered any number of times until there is no more response expected
from its child process. The atomic computation SequentialQueen is invoked when the rest of
rows are to be manipulated sequentially. This is how granularity can be controlled. In this
example, there are two process definitions besides that of process Main. Atomic computations
that share the same common data area should be in a single process, such as ParallelQueen and
ResponseQueen. The atomic computation SequentialQueen does not share a common data area
with other atomic computations, and therefore, is in a separate process to preserve good data
encapsulation and to save memory space. In general, only the atomic computations that are
logically coherent and share the same common data area should be in the same process.
3. Design and Implementation
The main loop of the P kernel system is shown in Figure 3. It starts with a system phase,
which includes placing processes, transferring data messages, and selecting atomic computations
to execute. It is followed by a user program phase to execute the selected atomic computation. The
iteration will continue until all the computations are completed. The P kernel software consists
of three major components: computation selection, communication, and memory management.
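As an illustration of this structure, the following small, self-contained C sketch mimics the alternation of system and user phases; the function names and the trivial bookkeeping are invented placeholders, not the actual MPL implementation of the P kernel.

#include <stdio.h>

static int pending = 3;                 /* stand-in for remaining work */

static void place_processes(void)    { printf("  process placement\n"); }
static void transmit_messages(void)  { printf("  message transmission\n"); }
static int  select_computation(void) { printf("  computation selection\n"); return 0; }
static void run_atomic_computations(int type)
{
    printf("  user phase: run atomic computations of type %d\n", type);
    pending--;                          /* pretend one unit of work finished */
}

int main(void)
{
    while (pending > 0) {               /* repeat until all computations are done */
        /* system phase: every processor takes part */
        place_processes();
        transmit_messages();
        int t = select_computation();
        /* user phase: only processors holding the selected type are active */
        run_atomic_computations(t);
    }
    return 0;
}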
3.1. Computation selection
A fundamental difference between the MIMD and SIMD systems is the degree of synchronization
required. In an MIMD system, different processors can execute different threads of code, but
not in a SIMD system. When the P kernel system is implemented on an MIMD machine, which
atomic computation will be executed next is the individual decision of each processor. Whereas,
due to the lock-step synchronization in a SIMD machine, the same issue becomes a global decision.
Let's assume that there are K atomic computation types, corresponding to the atomic computation
definitions, represented by a 0 , a 1 , ..., a K-1 . During the lifetime of execution, the total
number of atomic computations executed is far more than K. These atomic computations are
dynamically distributed among processors. At iteration t, a computation selection function F is
applied to select an atomic computation type a k , where 0 ≤ k < K. In the following
user program phase at the same iteration, a processor will be active if it has at least one atomic
computation of the selected type a k . Let the function num(i, p, t) record the number of atomic
computations of type a i at processor p in iteration t. Let the function act(i, t) count the number
Process Main {
  int solutionCount, responseCount;
  entry QueenInit: (message MSG1()) { int k;
    read N from input;
    solutionCount = 0; responseCount = N;
    for (k = 0; k < N; k++)
      OsCreateProc(SubQueen, ParallelQueen, MSG2(1,k,empty board));
  }
  entry ResponseQueen: (message MSG3(m)) {
    solutionCount += m; responseCount--;
    if (responseCount == 0)
      print "# of solutions =", solutionCount;
  }
}
Process SubQueen {
  int solutionCount, responseCount;
  entry ParallelQueen: (message MSG2(i,j,board)) { int k;
    invalidate row i, column j, and diagonals of (i,j);
    solutionCount = 0; responseCount = 0;
    for (k = 0; k < N; k++)
      if (position (i+1,k) is marked valid) {
        if ((N-i) is larger than the grainsize)
          OsCreateProc(SubQueen, ParallelQueen, MSG2(i+1,k,board));
        else
          OsCreateProc(SeqQueen, SequentialQueen, MSG2(i+1,k,board));
        responseCount++;
      }
    if (responseCount == 0) reply MSG3(0) to the parent process;
  }
  entry ResponseQueen: (message MSG3(m)) {
    solutionCount += m; responseCount--;
    if (responseCount == 0) reply MSG3(solutionCount) to the parent process;
  }
}
Process SeqQueen {
  entry SequentialQueen: (message MSG2(i,j,board)) { int k, count;
    call sequential routine, recursively generating all valid configurations;
    reply MSG3(count) to the parent process;
  }
}
Figure 2: The N-queen program.
Figure 3: Flow chart of the P kernel system (Start, then repeat a system phase of process placement, message transmission, and computation selection, followed by a user program phase of atomic computations).
of active processors at iteration t if the atomic computation type a i is selected:
act(i, t) = |{ p : 0 ≤ p < N, num(i, p, t) > 0 }|, where N is the number of processors.
We present three computation selection algorithms here. The first one, F cyc , is a simple
algorithm.
Algorithm I: Cyclic algorithm. Basically, it repeatedly cycles through all atomic computation
types; however, if act(i, t) is equal to zero, the type i will be skipped, so F cyc (t) is the first type,
scanning cyclically from the previously selected one, whose act is nonzero. Note that
it is not always necessary to carry out K reductions to
compute act(i; t), since as long as the first nonzero act(i; t) is found, the value of function F cyc is
determined.
This algorithm is similar to the method used in the instruction-level approach in which processors
repeatedly cycle through the entire set of possible instructions. The global reduction is
essentially similar to the "global-or" method, which is used to reduce the number of instructions
that are emanated for each execution iteration. However, in the instruction-level approach, each
processor executes exactly one instruction per cycle; while in the thread-level approach, each
processor may execute many threads per cycle. Also, the execution order of a program is fixed in
the instruction-level approach, but the order can be exchanged in the thread-based approach.
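A sequential C sketch of the cyclic selection is given below; it is illustrative only (on the SIMD machine each act(i, t) would be obtained by a global reduction, performed lazily until the first nonzero value is found), and the array contents are made-up example values.

/* Cyclic computation selection (Algorithm I), sequential sketch. */
#include <stdio.h>

#define K 4                         /* number of atomic computation types */

/* Return the next type with act[i] > 0, scanning cyclically from the
   type after the previously selected one; return -1 if all are empty. */
static int f_cyc(const int act[K], int prev)
{
    for (int step = 1; step <= K; step++) {
        int i = (prev + step) % K;
        if (act[i] > 0)             /* first nonzero reduction result wins */
            return i;
    }
    return -1;                      /* no atomic computation left */
}

int main(void)
{
    int act[K] = { 0, 120, 0, 35 }; /* example act(i, t) values */
    int prev = 0;
    printf("selected type: %d\n", f_cyc(act, prev));   /* prints 1 */
    return 0;
}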
To complete the computation in the shortest time, the number of iterations has to be mini-
mized. Maximizing the processor utilization at each iteration is one of the possible heuristics. If
act(k 1 , t) is small while act(k 2 , t) is 900, a computation selection function F(t) selecting k 2 is intuitively
better, leading to immediately good processor utilization. An auction algorithm, F auc , is
proposed based on this observation:
Algorithm II: Auction algorithm. For each atomic computation type i, calculate act(i, t) at
iteration t. Then, the atomic computation type with the maximum value of act(i, t) is chosen to execute
next:
F auc (t) = arg max { act(i, t) : 0 ≤ i < K }.
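A corresponding sequential C sketch of the auction rule is shown below; again the act values stand for the results of global reductions, and ties are broken in favor of the lower-numbered type.

/* Auction computation selection (Algorithm II), sequential sketch. */
#include <stdio.h>

#define K 4                              /* number of atomic computation types */

static int f_auc(const int act[K])
{
    int best = -1, best_count = 0;
    for (int i = 0; i < K; i++)
        if (act[i] > best_count) {       /* keep the type with maximum act(i,t) */
            best_count = act[i];
            best = i;
        }
    return best;                         /* -1 if no work remains */
}

int main(void)
{
    int act[K] = { 10, 900, 40, 0 };
    printf("selected type: %d\n", f_auc(act));   /* prints 1 */
    return 0;
}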
The cyclic algorithm is nonadaptive in the sense that the selection is made almost independent
of the distribution of atomic computations. In this way, it could be the case that a few processors
are executing one atomic computation type while many processors are waiting for execution of the
other atomic computation types. The auction algorithm is runtime adaptive. It will maximize
utilization in most cases. An adaptive algorithm is more sophisticated in general. However,
experimental results show that in most cases, the cyclic algorithm performs better than the
auction algorithm. It has been observed that when an auction algorithm is applied, at the near
end of execution, the parallelism becomes smaller and smaller, and the program takes a long time
to finish. This low parallelism phenomenon degrades performance seriously, which is characterized
as the tailing effect.
We propose an improved adaptive algorithm to overcome the tailing effect. To retain the
advantage of the auction algorithm, we intend to maximize the processor utilization as long as
there is a large pool of atomic computations available. On the other hand, when the available
parallelism falls to a certain degree, we try to exploit large parallelism by assigning priorities
to different atomic computation types. An atomic computation whose execution increases the
parallelism gets a higher priority, and vice versa. The priority can either be assigned by the
programmer or be automatically generated with dependency analysis.
Algorithm III: Priority auction algorithm. For simplicity, we assume that the atomic
computation types a 0 , a 1 , ..., a K-1 are presorted according to their priorities, a 0 with the highest
priority and a K-1 with the lowest. Use m = cN as a gauge of available parallelism, where c is a
constant and N is the number of processors.
F pri (t) = F auc (t) if max 0≤i<K act(i, t) > m; otherwise F pri (t) is the smallest i with act(i, t) > 0.
When max 0≤i<K act(i, t) is larger than m, indicating that the degree of available parallelism
is high, the auction algorithm will be applied. Otherwise, among the atomic computation types
with act(i, t) > 0, the one with the highest priority will be executed next. The constant c is set
to be 0.5. If more than half of the processors are active, the auction algorithm is used to maximize
the processor utilization. Otherwise, the priority is considered in favor of parallelism increase and
tailing effect prevention.
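The priority auction rule combines the two previous sketches; in the illustrative C code below the types are assumed to be indexed in priority order (a 0 highest), c = 0.5 so that m = N/2, and the act values are made up.

/* Priority auction computation selection (Algorithm III), sequential sketch. */
#include <stdio.h>

#define K 4

static int f_pri(const int act[K], int n_processors)
{
    int m = n_processors / 2;            /* m = cN with c = 0.5 */
    int best = -1, best_count = 0;
    for (int i = 0; i < K; i++)
        if (act[i] > best_count) { best_count = act[i]; best = i; }

    if (best_count > m)                  /* enough parallelism: behave like auction */
        return best;

    for (int i = 0; i < K; i++)          /* otherwise pick the highest-priority   */
        if (act[i] > 0)                  /* nonempty type (lowest index)          */
            return i;
    return -1;
}

int main(void)
{
    int act[K] = { 5, 0, 60, 200 };
    printf("selected type: %d\n", f_pri(act, 1024));  /* 200 < 512, so prints 0 */
    return 0;
}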
Table III: Execution Time of the 12-queen Problem
Cyclic Auction Priority Auction
10.4 Seconds 11.1 Seconds 10.1 Seconds
This algorithm consistently provides better performance than that provided by the cyclic
algorithm. Table III shows the performance for different computation selection algorithms with
the 12-queen problem on the 1K-processor MP-1.
3.2. Communication
There are two kinds of messages to be transferred. One is the data message, which is specifically
addressed to the existing process. The other kind is the process message, which represents the
newly generated process. Where these process messages are to be transferred depends on the
scheduling strategy used. The two kinds of messages are handled separately.
Transfer of data messages. Assume each processor initially holds d 0 (p) data messages to
be sent out at the end of the computation phase. Because of the SIMD characteristics, only one
message can be transferred by a processor at a time. Thus, the message transfer step must be repeated
at least max p d 0 (p) times.
The real situation is even more complicated. During each message transfer step, a collision
may occur when two or more messages from different processors have the same destination
processor. Therefore, we need to prevent the message loss due to the collision. Let dest(p) be the
destination of a message from processor p and src(q) be the source from which processor q is going
to receive a message. There is a collision if two processors p 1 and p 2 are sending messages to the
same processor q, so that dest(p 1 ) = dest(p 2 ) = q. The processor q can receive only one of them,
say from p 1 , in which case src(q) = p 1 and only p 1 can successfully deliver its message
to the destination. In general, we perform a parallel assignment operation src(dest(p)) := p for all processors p.
If any collision happens, the send-with-overwrite semantics are applied. When this collision prevention
scheme is applied, the processors p i with src(dest(p i )) ≠ p i must wait for the next time to
compete again. After the first transfer of data messages, there may still be some unsent messages.
Some processor p has more than one message to send (d 0 (p) > 1), and not every processor is able
to send out its message during the first transfer due to collisions. Hence,
d 1 (p) = d 0 (p) - 1 if processor p delivered its message during the first transfer, and d 1 (p) = d 0 (p) otherwise.
Let D k denote the total number of data messages still unsent after k transfers.
As long as D k > 0, we need to continue on with D k , D k+1 , ..., D m , where D m = 0. The message
transfer process can be illustrated in Figure 4. Many optimizations could be applied to reduce the
number of time steps needed to send messages. One of them is to resolve each collision in favor of the
sender with more pending messages, i.e., to let src(q) = p 1 when dest(p 1 ) = dest(p 2 ) = q and d k (p 1 ) ≥ d k (p 2 ).
Figure 4: The message transfer with collision.
Notice that for later transfers, it is most likely that only a few processors are actively sending
messages, resulting in a low utilization in the SIMD system. To avoid such a case, we do not
require all messages to be transferred. Instead, we attempt to send out only a majority of data
messages. The residual messages are buffered and will be sent again during the next iteration.
Because of the atomic execution model, the processors that fail to send messages will not be
stalled. Instead, a processor can continue execution as long as there are some messages in its
queue.
We use Θ k to measure the fraction of data messages that is left after k rounds of data
transfer, Θ k = D k / D 0 .
Thus, a constant θ can be used to control the number of transfers by requiring Θ k ≤ θ. The
algorithm for the data message transfer is summarized in Figure 5. By experiment, the value of
θ has been determined to be around 0.2. Figure 6 shows an example for the 12-queen problem on
MP-1 in which the minimal execution time is reached when θ is about 0.2.
k = 0
while Θ k > θ {
  for each processor p with d k (p) > 0
    assign dest(p) and src(p)
  send data messages
  if processor p delivered a message
    d k+1 (p) = d k (p) - 1
  else
    d k+1 (p) = d k (p)
  assign Θ k+1 ; k = k + 1
}
for any processor p with d k (p) > 0
  buffer the residual data messages
Figure 5: Data message transfer procedure
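The following self-contained C program is a rough sequential simulation of this transfer loop; it is not the P kernel code. In particular, destinations are drawn at random here purely for illustration, whereas in the real system dest(p) is fixed by each message, and the send-with-overwrite step is modeled by letting the last writer of src[q] win.

/* Sequential simulation of the collision-resolving data transfer (sketch). */
#include <stdio.h>
#include <stdlib.h>

#define N 8                 /* number of processors */

int main(void)
{
    int d[N] = { 3, 0, 1, 2, 0, 4, 1, 0 };     /* pending messages d_k(p) */
    double theta = 0.2;
    int total0 = 0;
    for (int p = 0; p < N; p++) total0 += d[p];
    int left = total0;

    srand(1);
    while ((double)left / total0 > theta) {    /* stop when Theta_k <= theta */
        int src[N];                            /* winning sender per receiver */
        for (int q = 0; q < N; q++) src[q] = -1;

        for (int p = 0; p < N; p++)            /* senders compete for receivers */
            if (d[p] > 0)
                src[rand() % N] = p;           /* send-with-overwrite semantics */

        for (int q = 0; q < N; q++)            /* only the winners deliver */
            if (src[q] >= 0) { d[src[q]]--; left--; }
    }
    printf("buffered residual messages: %d of %d\n", left, total0);
    return 0;
}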
Process placement. The handling of process messages is almost the same as that of data
messages, except that we need to assign a destination processor ID to each process message. The
assignment is called process placement. The Random Placement algorithm has been implemented
in the P kernel.
Random Placement algorithm. It is a simple, moderate-performance algorithm:
dest(p) = random() mod N, where N is the number of processors. Once dest(p) has been assigned, we can follow the same
procedure as in the transfer of data messages. However, the two kinds of messages are different
from each other in that the dest(p) of a data message is fixed, whereas that of a process message
can be varied. In taking advantage of this, we are able to reschedule the process message if
the destination processor could not accept the newly generated process because of some resource
Figure 6: The execution time for different values of θ for 12-queen.
constraint or collision. Here, rescheduling is simply a task that assigns another random number
as the destination processor ID. We can repeat this rescheduling until all process messages are
assigned destination processor IDs. However, it is a better choice to offer only one or
two chances for rescheduling, instead of repeating it until every message is placed. Thus, the unsuccessfully
scheduled process messages are buffered, similarly to the residual data messages, and wait for
the next communication phase.
A load balancing strategy called Runtime Incremental Parallel Scheduling has been implemented
for the MIMD version of the P kernel [31]. In this scheduling strategy, the system
scheduling activity alternates with the underlying computation work. Its implementation on a
SIMD machine is expected to deliver a better performance than random placement.
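A minimal C sketch of random placement with a bounded number of rescheduling attempts is shown below; accepts() is a hypothetical stand-in for whatever admission test the destination applies (for example, the memory-state checks of Section 3.3), and the constants are illustrative.

/* Random placement of a newly created process with bounded rescheduling. */
#include <stdio.h>
#include <stdlib.h>
#include <stdbool.h>

#define N 1024            /* number of processors */
#define MAX_TRIES 2       /* one or two rescheduling chances, as in the text */

static bool accepts(int proc)      /* placeholder admission test */
{
    return (proc % 7) != 0;        /* pretend some processors are full */
}

/* Returns the destination processor, or -1 if the process message must be
   buffered and retried in a later communication phase. */
static int place_process(void)
{
    for (int attempt = 0; attempt < MAX_TRIES; attempt++) {
        int dest = rand() % N;     /* dest(p) = random() mod N */
        if (accepts(dest))
            return dest;
    }
    return -1;                     /* buffer the residual process message */
}

int main(void)
{
    srand(42);
    int dest = place_process();
    if (dest >= 0) printf("process placed on processor %d\n", dest);
    else           printf("process message buffered for the next phase\n");
    return 0;
}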
3.3. Memory Management
Most SIMD machines are massively parallel processors. In such a system, any one of the
thousands of processors could easily run out of memory, resulting in a system failure. It is
certainly an undesired situation. Ideally, a system failure can be avoided unless the memory
space on all processors is exhausted. Memory management provides features to improve the
system robustness. When the available memory space on a specific processor becomes tight,
we should restrict the new resource consumption, or release memory space by moving out some
unprocessed processes to other processors.
In the P kernel system implemented on SIMD machines, we use two marks, m1 and m2, to
identify the state of memory space usage. The function j(p) is used to measure the current usage
of memory space at processor p:
j(p) = (the allocated memory space at processor p) / (the total memory space at processor p).
A processor p is said to be in its normal state when 0 ≤ j(p) < m1, in its nearly-full state when
m1 ≤ j(p) < m2, in its full state when m2 ≤ j(p) < 1, and in its emergency state when running
out of memory, i.e., when j(p) = 1.
The nearly-full state. We need to limit the new resource consumption since the available
memory space is getting tight. It is accomplished by preventing newly generated process messages
from being scheduled. Thus, when a process message is scheduled to processor p with m1 ≤ j(p),
the rescheduling has to be performed to find out another destination processor.
The full state. In addition to the action taken in the nearly-full state, a more restricted
scheme is applied such that any data messages addressed to the processor p with m2 ≤ j(p)
are deferred. They are buffered at the original processor and wait for change of the destination
processor's state. Although these data messages are eventually sent to their destination processor,
the delay in sending can help the processor relax its memory tightness. Note that these deferred
data messages will be buffered separately and not be counted when calculating Θ k .
The emergency state. If processor p runs out of memory, several actions can be taken
before we declare its failure. One is to clear up all residual data messages and process messages,
if there are any. Another is to redistribute the unprocessed process messages that have been
previously placed in this processor. If any memory space can be released at this time, we will
rescue the processor from its emergency state, and let the system continue on.
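The state classification can be sketched as follows in C; the threshold values M1 and M2 are made-up examples (the paper only names the marks m1 and m2), and the enum is purely illustrative.

/* Memory-state classification used by the memory manager (sketch). */
#include <stdio.h>

enum mem_state { NORMAL, NEARLY_FULL, FULL, EMERGENCY };

static const double M1 = 0.75, M2 = 0.90;   /* illustrative thresholds m1, m2 */

static enum mem_state classify(double allocated, double total)
{
    double j = allocated / total;           /* j(p) = allocated / total      */
    if (j >= 1.0) return EMERGENCY;         /* out of memory                 */
    if (j >= M2)  return FULL;              /* also defer incoming data msgs */
    if (j >= M1)  return NEARLY_FULL;       /* refuse new process messages   */
    return NORMAL;
}

int main(void)
{
    printf("%d %d %d\n",
           classify(10.0, 32.0),            /* NORMAL                 */
           classify(26.0, 32.0),            /* NEARLY_FULL (0.8125)   */
           classify(30.0, 32.0));           /* FULL (0.9375)          */
    return 0;
}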
4. Performance
The P kernel is currently written in MPL, running on a 16K-processor Maspar MP-1 with 32K
bytes memory per processor. MPL is a C-based data parallel programming language. We have
tested the P kernel system using two sample programs: the N-queen problem and the GROMOS
Molecular Dynamics program [38, 37]. GROMOS is a loosely synchronous problem. The test
data is the bovine superoxide dismutase molecule (SOD), which has 6968 atoms [28]. The cutoff
radius is predefined to 8 A, 12 A, and 16 A.
The total execution time of a P kernel program consists of two parts, the time to execute the
system program, T sys , and the time to execute the user program, T usr . The system efficiency is
defined as follows:
ε sys = T usr / T = T usr / (T sys + T usr ),
where T is the total execution time. For the example shown in Figure 7, ε sys is the combined
length of the three user phases divided by the total length of the three iterations.
The system efficiency depends on the ratio of the system overhead to the grainsize of atomic
computations. Table IV shows the system efficiency on the 1K-processor MP-1. The high efficiency
results from the high ratio of granularity to system overhead. The P kernel executes in a loosely
synchronous fashion and can be divided into iterations. Each iteration of execution consists of
a system program phase and a user program phase. The execution time of a system program
varies between 8 and 20 milliseconds on MP-1. The average grainsizes of a user phase in the test
programs are between 55 and 330 milliseconds.
In the system phase, every processor participates in the global actions. On the other hand,
in the user program phase, not all processors are involved in the execution of the selected atomic
Figure 7: Illustration of Efficiencies (three iterations, each consisting of a system phase and a user computation phase, with idle time on the processors that finish early or are not selected).
computation. In each iteration t, the ratio of the number of participating processors (N active (t))
to the total number of processors (N) is defined as the utilization u(t):
u(t) = N active (t) / N.
For example, the utilization of iteration 1 in Figure 7 is 75%, since three out of four processors
are active.
The utilization efficiency is defined as follows:
ε util = ( Σ t N active (t) T(t) ) / ( N Σ t T(t) ),
Table IV: System Efficiencies (ε sys )
N-queen GROMOS
12-queen 13-queen 14-queen 8 A 12 A 16 A
93.4% 92.2% 90.3% 98.0% 98.6% 98.5%
where T(t) is the execution time of the t-th iteration and T = Σ t T(t). The utilization efficiency
depends on the computation selection strategy and the load balancing scheme. Table V shows the
utilization efficiencies for different problem sizes with the priority auction algorithm and random
placement load balancing.
Table V: Utilization Efficiencies (ε util )
N-queen GROMOS
12-queen 13-queen 14-queen 8 A 12 A 16 A
77.0% 89.2% 96.2% 80.2% 87.3% 89.6%
In irregular problems, the grainsizes of atomic computations may vary substantially. The
grainsize variation heavily depends on how irregular the problem is and how the program is
partitioned. At each iteration, the execution time of the user phase depends on the largest
atomic computation among all active processors. The other processors that execute the atomic
computation of smaller grainsizes will not fully occupy that time period. We use an index, called
fullness, to measure the grainsize variation of atomic computations in each iteration:
fullness(t) = ( Σ k=1..N active (t) T k (t) ) / ( N active (t) · max k T k (t) ),
where T k (t) is the user computation time of the k-th participating processor at iteration t. For
example, the fullness of iteration 1 in Figure 7 is the average user computation time of the three
active processors divided by the longest one, 0.4.
The fullness efficiency is defined as:
ε full = ( Σ t Σ k=1..N active (t) T k (t) ) / ( Σ t N active (t) · max k T k (t) ).
Table VI shows the fullness efficiencies for different problem sizes. The N-queen is an extremely
irregular problem and has a substantial grainsize variation and a low fullness efficiency. The
GROMOS program is loosely synchronous and the grainsize variation is small.
Table VI: Fullness Efficiencies (ε full )
N-queen GROMOS
12-queen 13-queen 14-queen 8 A 12 A 16 A
19.2% 18.0% 17.6% 94.6% 96.1% 98.2%
Now we define the overall efficiency as follows:
ε = ε sys · ε util · ε full .
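The three factors can be computed directly from per-iteration measurements. The C sketch below follows the definitions as reconstructed above (the exact weighting used in the paper may differ slightly); all timing numbers are invented for illustration.

/* Computing the efficiency factors from per-iteration measurements (sketch). */
#include <stdio.h>

#define ITERS 3
#define NPROC 4

int main(void)
{
    double t_sys[ITERS] = { 0.01, 0.01, 0.01 };   /* system phase per iteration */
    /* user time of each processor in each iteration; 0 means the processor
       did not hold an atomic computation of the selected type */
    double t_usr[ITERS][NPROC] = {
        { 0.40, 0.30, 0.35, 0.00 },
        { 0.25, 0.00, 0.30, 0.30 },
        { 0.40, 0.40, 0.40, 0.40 },
    };

    double T = 0, T_user = 0, util_num = 0, util_den = 0, full_num = 0, full_den = 0;
    for (int t = 0; t < ITERS; t++) {
        double max = 0, sum = 0;
        int active = 0;
        for (int p = 0; p < NPROC; p++) {
            if (t_usr[t][p] > 0) { active++; sum += t_usr[t][p]; }
            if (t_usr[t][p] > max) max = t_usr[t][p];
        }
        double T_t = t_sys[t] + max;        /* length of iteration t */
        T        += T_t;
        T_user   += max;                    /* user phase lasts as long as the slowest */
        util_num += (double)active * T_t;
        util_den += (double)NPROC  * T_t;
        full_num += sum;
        full_den += (double)active * max;
    }
    double e_sys = T_user / T, e_util = util_num / util_den, e_full = full_num / full_den;
    printf("e_sys=%.3f e_util=%.3f e_full=%.3f overall=%.3f\n",
           e_sys, e_util, e_full, e_sys * e_util * e_full);
    return 0;
}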
The system efficiency can be increased by reducing system overhead or increasing the grainsize.
It is not realistic to expect a very low system overhead because the system overhead is dominated
by communication overhead. The major technique of increasing system efficiency is the grainsize
control. That is, the average grainsize of atomic computations should be much larger than the
system overhead. The utilization efficiency depends on the computation selection and load balancing
algorithms. The algorithms discussed in this paper deliver satisfactory performance, as
long as the number of processes is much larger than the number of processors. The most difficult
task is to reduce the grainsize variation which determines the fullness efficiency. The grainsize
variation depends on the characteristics of the application problem. However, it is still possible
to reduce the grainsize variation. The first method is to select a proper algorithm. The second
method is to select a good program partition. For a given application, there may be different
algorithms to solve it and different partitioning patterns. Some of them may have small grainsize
variation and some may not. A carefully selected algorithm and partitioning pattern can result
in a higher fullness efficiency.
The overall efficiencies of the N-queen problem and the GROMOS program on the 1K-
processor MP-1 are shown in Table VII. Compared to the N-queen problem, GROMOS has
a much higher efficiency mainly because of its small grainsize variation.
Table VII: The Efficiencies (ε)
N-queen GROMOS
12-queen 13-queen 14-queen 8 A 12 A 16 A
13.8% 15.2% 15.3% 74.4% 82.7% 86.7%
Tables VIII and IX show the execution times and speedups of the N-queen problem and the
GROMOS program, respectively. A high speedup has been obtained from the GROMOS program.
The N-queen is a difficult problem to solve, but a fairly good performance has been achieved.
Table VIII: The N-queen Execution Times and Speedups (MP-1)
Number of 12-queen 13-queen 14-queen
processors Time (sec.) Speedup Time (sec.) Speedup Time (sec.) Speedup
Table IX: The GROMOS Execution Times and Speedups (MP-1)
Number of 8 A 12 A 16 A
processors Time (sec.) Speedup Time (sec.) Speedup Time (sec.) Speedup
1K 13.3 761 34.3 867 68.6 887
8K 2.53 4001 6.90 4209 10.7 5688
5. Concluding Remarks
The motivation of this research is twofold: to prove whether a SIMD machine can handle
asynchronous application problems, serving as a general-purpose machine, and to study the feasibility
of providing a truly portable parallel programming environment between SIMD and MIMD
machines.
The first version of the P kernel was written in CM Fortran, running on a 4K-processor TMC
CM-2 in 1991. The performance was reported in [32]. Fortran is not the best language for system
programs. We used the multiple dimension array with indirect addressing to implement queues.
Hence, not only was the indirect addressing extremely slow, but accessing different addresses in
CM-2 cost much more [9]. In 1993, the P kernel was rewritten in MPL and ported to Maspar
MP-1. The MPL version reduces the system overhead and improves the performance substantially.
Experimental results have shown that the P kernel is able to balance load fairly well on SIMD
machines for nonuniform applications. System overhead can be reduced to a minimum with
granularity control.
Acknowledgments
We are very grateful to Reinhard Hanxleden for providing the GROMOS program, and Terry
Clark for providing the SOD data. We also thank Alan Karp, Guy Steele, Hank Dietz, and Jerry
Roth for their comments. We would like to thank the anonymous reviewers for their constructive
comments.
This research was partially supported by NSF grants CCR-9109114 and CCR-8809615. The
performance data was gathered on the MP-1 at NPAC, Syracuse University.
--R
A Model of Concurrent Computation in Distributed Systems.
Linda and friends.
A hierarchical O(NlogN) force calculation algorithm.
Vector Models for Data-Parallel Computing
Linda in context.
Data parallel simulation using time-warp on the Connection Machine
Evaluating parallel languages for molecular dynamics computations.
Multiple instruction multiple data emulation on the Connection Machine.
A massively parallel MIMD implemented by SIMD hardware.
Scheduling parallel program tasks onto arbitrary target machines.
Solving Problems on Concurrent Processors
The architecture of problems and portable parallel software systems.
A comparison of clustering heuristics for scheduling DAGs on multiprocessors.
A microprocessor-based hypercube supercomputer
Data parallel algorithms.
Graphinators and the duality of SIMD and MIMD.
Babb II and David C.
DAP prolog: A set-oriented approach to prolog
Logic simulation on massively parallel architectures.
Simulating applicative architectures on the connection machine.
An exploration of asynchronous data-parallelism
A flat GHC implementation for supercomputers.
Massively parallel implementation of flat GHC on the Connection Machine.
MIMD execution by SIMD computers.
Array processor supercomputers.
The reactive kernel.
Molecular dynamics simulation of superoxide interacting with superoxide dismutase.
Chare Kernel and its Implementation on Multicomputers.
Chare Kernel - a runtime support system for parallel computations
Runtime Incremental Parallel Scheduling (RIPS) on distributed memory computers.
Solving dynamic and irregular problems on SIMD architectures with runtime support.
A portable multicomputer communication library atop the reactive kernel.
Thinking Machines Corp.
Thinking Machines Corp.
Indirect addressing and load balancing for faster solution to mandelbrot set on SIMD architectures.
Relaxing SIMD control flow constraints using loop transformations.
GROMOS: GROningen MOlecular Simulation software.
Solving nonuniform problems on SIMD computers: Case study on region growing.
Exploiting SIMD computers for general purpose computation.
The concurrent execution of non-communicating programs on SIMD processors
A programming aid for message-passing systems
--TR
--CTR
Nael B. Abu-Ghazaleh , Philip A. Wilsey , Xianzhi Fan , Debra A. Hensgen, Synthesizing Variable Instruction Issue Interpreters for Implementing Functional Parallelism on SIMD Computers, IEEE Transactions on Parallel and Distributed Systems, v.8 n.4, p.412-423, April 1997
Andrea Di Blas , Arun Jagota , Richard Hughey, Optimizing neural networks on SIMD parallel computers, Parallel Computing, v.31 n.1, p.97-115, January 2005
Mattan Erez , Jung Ho Ahn , Jayanth Gummaraju , Mendel Rosenblum , William J. Dally, Executing irregular scientific applications on stream architectures, Proceedings of the 21st annual international conference on Supercomputing, June 17-21, 2007, Seattle, Washington | thread model;scalability;irregular and dynamic applications;load balancing;portable programming environment;SIMD parallel computers |
629361 | Executing Algorithms with Hypercube Topology on Torus Multicomputers. | AbstractMany parallel algorithms use hypercubes as the communication topology among their processes. When such algorithms are executed on hypercube multicomputers the communication cost is kept minimum since processes can be allocated to processors in such a way that only communication between neighbor processors is required. However, the scalability of hypercube multicomputers is constrained by the fact that the interconnection cost-per-node increases with the total number of nodes. From scalability point of view, meshes and toruses are more interesting classes of interconnection topologies. This paper focuses on the execution of algorithms with hypercube communication topology on multicomputers with mesh or torus interconnection topologies. The proposed approach is based on looking at different embeddings of hypercube graphs onto mesh or torus graphs. The paper concentrates on toruses since an already known embedding, which is called standard embedding, is optimal for meshes. In this paper, an embedding of hypercubes onto toruses of any given dimension is proposed. This novel embedding is called xor embedding. The paper presents a set of performance figures for both the standard and the xor embeddings and shows that the latter outperforms the former for any torus. In addition, it is proven that for a one-dimensional torus (a ring) the xor embedding is optimal in the sense that it minimizes the execution time of a class of parallel algorithms with hypercube topology. This class of algorithms is frequently found in real applications, such as FFT and some class of sorting algorithms. | Introduction
A hypercube communication topology is frequently found in real parallel applications. Some
examples include parallel algorithms for FFT, sorts, etc. [3], [11]. These algorithms will be called
hypercube algorithms or d-cube algorithms, where d is the number of dimensions of the
hypercube. A hypercube algorithm of dimension d, or d-cube algorithm, consists of 2^d processes
labeled from 0 to 2^d - 1 such that every process communicates only with its d neighbors, one in
each dimension of the d-cube.
In this paper, the problem of executing d-cube algorithms on multicomputers [1] is
considered. A multicomputer is a distributed memory multiprocessor in which the nodes
(processor local memory) are interconnected through point to point links.
The nodes of a multicomputer are interconnected according to a given pattern or
interconnection topology. If this topology is a hypercube of dimension d (d-cube multicomputer)
then the d-cube algorithm can be executed on the multicomputer in such a way that neighbor
processes are mapped onto adjacent nodes (nodes directly connected through a point to point
link). In this case, it is said that each process of the d-cube algorithm has all its d neighbors at
distance 1 in the multicomputer (i.e., all required communication is between neighbor nodes). In
this way, the cost of the communication component of the d-cube algorithm is kept minimum
when it is executed on a hypercube multicomputer.
An important drawback of hypercube as interconnection topology for multicomputers is that
it is not scalable. In a d-cube multicomputer each of the 2^d nodes is directly connected to d other
nodes through point to point links. Therefore, the cost (and the complexity) of the interconnection
hardware per node increases with the number of nodes. Other interconnection topologies, such
as meshes or toruses are considered more suitable for multicomputers with a large number of
nodes, since the interconnection cost per node does not depend on the total number of nodes [13].
For instance, each node of a two-dimensional torus multicomputer is directly connected to other
4 nodes, it does not matter the number of nodes of the multicomputer.
To execute a d-cube algorithm on a multicomputer with topology other than hypercube, the
first step is to find a mapping function that allocates each process of the parallel program onto a
given processor of the multicomputer. The problem can be formulated as finding an embedding
of the graph that represents the topology of the program (a hypercube) onto the graph that
represents the topology of the multicomputer (a mesh or a torus).
The problem of embedding a given source graph into a destination graph has been
extensively studied in the literature. In particular, embedding any type of graph into a hypercube
is a widely studied topic (see, for instance, [2],[7],[11],[12], just to mention a few recent works).
However, the problem of embedding hypercubes onto a mesh or a torus has not been so
extensively studied. In section 2.3. we review the most relevant works in this subject.
When the topology of the algorithm and the multicomputer are different, it may be
impossible to allocate neighbor processes to neighbor processors. For instance, in a two-dimensional
torus multicomputer, every process of a d-cube algorithm has at most 4 of its d
neighbors at distance 1. It has at least d-4 neighbors at a distance greater than 1. A message to
any of these "far" neighbors is routed through the point to point links and nodes which are found
along the path to the destination node. A good mapping of a parallel algorithm onto a
multicomputer will keep the neighbor processes as close as possible in the multicomputer,
minimizing in this way the communication cost of the execution.
This paper begins by reviewing some related work on embeddings and then, it concentrates
on a particular type of embeddings that is called embeddings with constant distances. It will be
shown that these embeddings are more adequate for our purposes, that is, for executing d-cube
algorithms onto meshes or toruses. A well known embedding of hypercubes onto meshes is the
so called standard embedding [8]. It is an embedding with constant distances and it is optimal for
meshes of any given dimension. In consequence, the contribution of this paper centers on
embeddings of hypercubes onto toruses.
A new embedding, called xor embedding, is proposed. The paper presents a set of
performance figures and shows that this embedding outperforms the standard embedding when
it is used as the mapping function of a d-cube algorithm onto a torus multicomputer. In addition,
it is proven that the xor embedding is optimal for one-dimensional toruses (also called rings).
This paper is organized as follows. In section 2, we introduce some notation and describe
more precisely the contribution of this paper as well as some related work. Section 3 presents
the xor embedding. Section 4 compares the performance of the xor embedding with that of the
standard embedding using a set of different performance metrics. In section 5, it is proven that
the proposed embedding is optimal for rings in the sense that it results in the shortest execution
time of a class of d-cube algorithms. Finally, some concluding remarks are presented.
2. Preliminaries and related work
2.1. Definitions
A d-cube algorithm is a parallel algorithm that consists of 2^d processes such that every process
communicates with exactly d other processes. These d processes are called its neighbors. We also
say that the communication topology of the algorithm is a hypercube. That means that the 2^d
processes can be labeled from 0 to 2^d - 1 in such a way that processes n and m are neighbors (i.e.,
they communicate) if the binary codes for n and m differ in a single bit. If this bit is the i-th bit,
then m is the neighbor of n in dimension i, and n is the neighbor of m in the same dimension.
Then, it is written m = N_i(n) and n = N_i(m).
In this paper, we focus on d-cube algorithms in which every process has the following
structure:
do i = 0, d-1
  compute
  communicate with neighbor in dimension i
end do
In this algorithm every process consists of d stages, each of them composed of a computation
phase followed by a communication phase. In each stage, every process uses a different
dimension to exchange information with one of its neighbors.
The duration of the computation phase and the amount of information to be exchanged is
assumed to be the same for all the stages and all the processes of the d-cube algorithm. A d-cube
algorithm with the above features will be called a compute-and-communicate d-cube algorithm,
or a CC d-cube algorithm for short. This kind of d-cube algorithm is common in real
applications like FFT, some types of sorts, etc. [3], [11].
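To make this structure concrete, the following C sketch (an illustration only, not taken from the paper; MPI is used here just as one possible message-passing interface, and the compute() body and message size are placeholders) shows one process of a CC d-cube algorithm: d stages, each a computation phase followed by an exchange with the neighbor in dimension i.

/* One process of a CC d-cube algorithm (illustrative sketch). */
#include <mpi.h>
#include <string.h>

#define MSG_WORDS 1024

static void compute(double *buf, int words) {
    for (int k = 0; k < words; k++)       /* placeholder computation phase */
        buf[k] = buf[k] * 0.5 + 1.0;
}

int main(int argc, char **argv) {
    int rank, size, d = 0;
    double sendbuf[MSG_WORDS], recvbuf[MSG_WORDS];

    MPI_Init(&argc, &argv);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    while ((1 << d) < size) d++;          /* assume size == 2^d */

    memset(sendbuf, 0, sizeof(sendbuf));
    for (int i = 0; i < d; i++) {         /* d stages */
        compute(sendbuf, MSG_WORDS);      /* computation phase */
        int partner = rank ^ (1 << i);    /* neighbor N_i(rank) in dimension i */
        MPI_Sendrecv(sendbuf, MSG_WORDS, MPI_DOUBLE, partner, i,
                     recvbuf, MSG_WORDS, MPI_DOUBLE, partner, i,
                     MPI_COMM_WORLD, MPI_STATUS_IGNORE);
        memcpy(sendbuf, recvbuf, sizeof(sendbuf));  /* use exchanged data in the next stage */
    }
    MPI_Finalize();
    return 0;
}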
Parallel algorithms can be modelled by graphs. The vertices of the graph represent the
processes of the algorithm and the edges of the graph represent the neighbor relationship among
processes. A multicomputer can also be modelled by a graph. The vertices of the graph represent
the nodes of the multicomputer and the edges of the graph represent the point to point links which
interconnect these nodes. The terms edge and link will be used interchangeably in this paper.
Multicomputers can be classified according to their interconnection topology. The work
presented in this paper focuses on mesh and torus multicomputers, since they have scalable
interconnection topologies.
A (k_1, k_2, ..., k_c) c-dimensional torus is an undirected graph in which the nodes can be labeled
as c-tuples (i_1, i_2, ..., i_c), with 0 <= i_j < k_j for every j in [1, c]. Every node (i_1, i_2, ..., i_c) of the graph has two neighbors in each
dimension of the torus. Its left neighbor in dimension j is (i_1, ..., (i_j - 1) mod k_j, ..., i_c) and its right
neighbor in this dimension is (i_1, ..., (i_j + 1) mod k_j, ..., i_c).
A (k_1, k_2, ..., k_c) c-dimensional mesh is an undirected graph in which the nodes can be labeled
as c-tuples (i_1, i_2, ..., i_c), with 0 <= i_j < k_j for every j in [1, c]. Every node of the graph has two neighbors in each dimension
j of the mesh if 0 < i_j < k_j - 1. Its left neighbor is (i_1, ..., i_j - 1, ..., i_c) and its right neighbor is
(i_1, ..., i_j + 1, ..., i_c). If i_j = 0 the node has only a right neighbor, and if i_j = k_j - 1 then it only has a left
neighbor.
A line is a one-dimensional mesh while a one-dimensional torus is called a ring.
Figure 1 shows some examples and illustrates how their nodes are labeled.
The distance in a graph between two vertices is the minimum number of edges that join those
vertices. In the particular case of the graph that models a d-cube, the distance between two
vertices is known as the Hamming distance (number of different bits in their binary
representations).
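As a small illustration of these definitions (not part of the original text), the following C helpers compute the Hamming distance between two hypercube labels and the two neighbors of a coordinate in one dimension of a torus.

#include <stdio.h>

/* Hamming distance: number of differing bits between two d-cube labels. */
int hamming(unsigned n, unsigned m) {
    unsigned x = n ^ m;
    int bits = 0;
    while (x) { bits += x & 1u; x >>= 1; }
    return bits;
}

/* Left/right neighbors of coordinate ij in a dimension of size kj of a torus. */
int torus_left (int ij, int kj) { return (ij - 1 + kj) % kj; }
int torus_right(int ij, int kj) { return (ij + 1) % kj; }

int main(void) {
    printf("hamming(5, 3) = %d\n", hamming(5, 3));              /* 101 vs 011 -> 2 */
    printf("left of 0 in a ring of 8: %d\n", torus_left(0, 8)); /* wraps to 7 */
    return 0;
}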
An embedding f of graph G into graph H is an injection from the vertices of G to the vertices
of H. In this paper, our attention is restricted to embeddings in which G and H have the same
number of vertices, and therefore the mapping is given by a bijective function.
The problem of executing a CC d-cube algorithm on a multicomputer can be restated as the
embedding of graph G, which represents the CC d-cube algorithm, onto graph H, which
represents the multicomputer.
The dilation of an edge (n,m) of G (edge joining vertices n and m) is the distance in H
between f(n) and f(m).
If G models a CC d-cube algorithm, an edge exists between vertices n and m if m = N_i(n) for
some i in [0, d-1]. The dilation of this edge will be denoted by D_i(n). Obviously, since n = N_i(m),
D_i(n) = D_i(m). When a CC d-cube algorithm is executed on a multicomputer, as defined by a
given embedding f, a communication between processes n and N i (n) (required in iteration i of the
CC d-cube algorithm) is implemented by a message which is routed through D i (n) point to point
links and D i (n)-1 nodes of the multicomputer represented by H, which are found in the shortest
path between nodes f(n) and f(N i (n)). In the following, a store and forward routing strategy is
assumed. Therefore, the cost of sending a message from f(n) to f(N i (n)) is proportional to D i (n).
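For a ring, the dilation of a hypercube edge is simply the ring distance between the images of its endpoints, and under store and forward routing the message cost is proportional to it. A small C helper (illustrative only; the embedding f is given as an array indexed by cube label) is:

#include <stdio.h>

/* Ring distance between positions a and b on a ring with nodes 0..N-1. */
int ring_distance(int a, int b, int N) {
    int diff = a > b ? a - b : b - a;
    return diff < N - diff ? diff : N - diff;
}

/* Dilation D_i(n) of the edge (n, N_i(n)) under an embedding f of a d-cube
 * onto a ring with 2^d nodes. */
int dilation(const int *f, int d, int n, int i) {
    int N = 1 << d;
    return ring_distance(f[n], f[n ^ (1 << i)], N);
}

int main(void) {
    int d = 3, N = 1 << d, f[8];
    for (int n = 0; n < N; n++) f[n] = n;            /* identity (standard) embedding */
    printf("D_2(0) = %d\n", dilation(f, d, 0, 2));   /* prints 4 */
    return 0;
}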
2.2. Contributions
As it was mentioned in the introduction, this paper focuses on executing CC d-cube algorithms
on scalable multicomputers. The function that maps processes onto processors is an embedding
of the graph defined by the communication topology of the algorithm (hypercube) onto the graph
defined by the interconnection topology of the multicomputer. In particular, we are interested in
torus multicomputers since for meshes, an already known embedding, called standard
embedding and described in the next section, is optimal for CC d-cube algorithms.
Figure 1: Different types of multicomputers: a) line, b) ring, c) (4,4) mesh and d) (4,4) torus. The picture also shows how their nodes are labeled.
The work presented in this paper centers on those embeddings in which D_i(n) = D_i for every
i in [0, d-1] and n in [0, 2^d - 1]. This means that every process has its neighbor in dimension i at the
same distance in the target multicomputer. In the following, an embedding with this feature is
called embedding with constant distances and the values of D i (i - [0,d-1]) are called the
distances of the embedding.
Embeddings with constant distances have the property that every process takes the same
time to communicate in any given stage of the CC d-cube algorithm. Because the duration of the
compute phase is also the same for every process, waiting intervals are avoided since neighbor
processes arrive at the same time at the point where they have to communicate. This fact will be
illustrated later through an example.
In this paper, an embedding with constant distances of hypercubes onto toruses of any
arbitrary dimension is proposed. The embedding is called xor embedding. It will be shown that
this embedding outperforms the standard embedding using a set of different performance
metrics. Moreover, we prove that the proposed embedding is optimal for rings (one-dimensional
toruses) in the sense that it minimizes the execution time of CC d-cube algorithms when they are
executed on a ring multicomputer. An additional property of the proposed embeddings is
their simplicity, which means a negligible cost to compute the location of any process in the
multicomputer. Some preliminary results about the xor embedding were presented in [4].
2.3. Related work
The problem of embedding d-cubes onto meshes and toruses has been previously considered by
other authors. Here, a review of the most related work is presented.
Matic presents in [10] a study of the standard embedding (defined below) of d-cubes onto
two-dimensional meshes and toruses. To define the standard embedding (which will be denoted
by f_std) of a d-cube onto a line or a ring, the nodes of the target multicomputer are numbered from
0 to 2^d - 1 (see figures 1.a and 1.b). Then, the standard embedding is defined by f_std(n) = n (see figure 2.a).
In general, the standard embedding of a d-cube onto a (2^{d_1}, 2^{d_2}, ..., 2^{d_c}) c-dimensional mesh or
torus maps node n onto the node (m_1, ..., m_c) whose coordinate m_j in dimension j is given by the
j-th group of d_j consecutive bits of the binary representation of n.
Figure 2.b shows an example in which c = 2 and k_1 = k_2 = 4. Obviously, the standard
embedding is an embedding with constant distances. For the particular case in which k_i = 2^{d/c},
i in [1, c], the distances of the standard embedding are D_i = 2^{i mod (d/c)}, for i in [0, d-1].
It can be shown that the standard embedding is optimal for meshes, in the sense that it
minimizes the average distance [5], which in turn results in the shortest execution time.
However, it is not optimal for toruses, as it will be shown later in this paper.
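For concreteness, the constant distances of the standard embedding on a ring can be checked with a few lines of C (an illustration only; the distance values printed follow directly from f_std(n) = n):

#include <stdio.h>

static int ring_distance(int a, int b, int N) {
    int diff = a > b ? a - b : b - a;
    return diff < N - diff ? diff : N - diff;
}

/* Distances D_i of the standard embedding f_std(n) = n on a ring of 2^d nodes. */
int main(void) {
    int d = 4, N = 1 << d;
    for (int i = 0; i < d; i++)
        printf("D_%d = %d\n", i, ring_distance(0, 1 << i, N));  /* constant over n */
    /* prints D_0 = 1, D_1 = 2, D_2 = 4, D_3 = 8; their sum is 2^d - 1 = 15 */
    return 0;
}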
Harper in [6] and Lai and Sprague in [8] solve the problem of embedding d-cubes onto
meshes to minimize the dilation of the embedding (the maximum dilation of any edge). Both
proposals use the byweight embedding, denoted by f bw , which is not an embedding with constant
distances. Next, this embedding is briefly described.
In the case of a line, the labels of the vertices which represent the processes of the d-cube
algorithm are ordered by their weights. The weight of a label is the number of 1's in its binary
representation. Labels with the same weight are ordered in descending order. Then, the processes
of the d-cube ordered in that way are allocated to the nodes of the line, from left to right. Figure
3.a shows an example. The byweight embedding can be extended to meshes of any dimension.
In particular, Lai and Sprague extend this embedding to two-dimensional meshes in [8]. Figure
3.b shows an example.
The byweight embedding minimizes the dilation of the embedding for lines and it has a
lower dilation than the standard embedding for two-dimensional meshes. This is an interesting
property in some particular applications of embeddings. For instance, Lai and Sprague propose
this embedding to solve the problem of placing the processors of a hypercube on a printed circuit
board or a chip (which can be modelled as a two-dimensional mesh). However, the byweight
embedding is not an embedding with constant distances, which is an important property in the
context of executing CC d-cube algorithms onto multicomputers.

Figure 2: Standard embeddings of: a) a 3-cube onto a line or a ring and b) a 4-cube onto a (4,4) mesh or torus. Each label indicates which vertex of the d-cube is mapped onto each node of the multicomputer. Wrap-around links are not shown for clarity.

Variable distances result in
waiting intervals during the execution of the CC d-cube algorithm. These are due to the fact that
two neighbors that are going to communicate finish their respective previous computation at
different time. The one that finishes earlier must wait for the other to finish. These waiting intervals
contribute to increase the execution time. To illustrate this fact, figure 4 shows an example in
which the execution times of a CC 3-cube algorithm on a line for both the standard embedding
and the byweight embedding are compared. The waiting intervals which contribute to make the
byweight embedding run slower than the standard embedding are also shown.
In [9], E. Ma and L. Tao proposed several embeddings among toruses and meshes of
different dimensions. Their proposals are based on generalizing the concept of Gray code from
the binary numbering system to mixed-radix numbering systems. Since a d-cube can also be seen as a
d-dimensional mesh or torus with two elements in each dimension, their embedding can also be
applied to solve the problem addressed in this paper. However, they focus on minimizing the
dilation (the longest dilation of any link of the d-cube) and therefore the resulting embeddings in
general do not have constant distances, which is a desirable property for our objective. However,
if one starts with a d-cube represented by means of a (2,2,.,2) d-dimensional mesh or torus, then
the resulting embedding onto a ring or a two-dimensional torus has constant distances.
Nevertheless, its average distance and therefore its performance for executing our target
algorithm is worse than the embedding proposed in this paper.
3. The xor embedding
Since the standard embedding is optimal for meshes, we focus just on toruses. The proposed
embedding is called xor embedding and it is denoted by f xor . It belongs to the class of embeddings
with constant distances. In this section, the xor embedding for the case of a one-dimensional torus
(a ring) is first described and then it is generalized for any dimension.
Figure 3: Byweight embeddings of: a) a 3-cube onto a line and b) a 5-cube onto a two-dimensional mesh.
3.1. One-dimensional Torus (Ring)
Given a positive integer x, let x(i) denote the i-th bit of the binary representation of x. The least
significant bit is considered to be the 0th bit. Let G be the graph which represents the CC d-cube
algorithm and R be the graph which represents the ring multicomputer. Assume that the vertices
of R are labeled from 0 to 2^d - 1 clockwise (see figure 1.b). Let (n(d-1), n(d-2), ..., n(1), n(0)) be the
label (in binary code) of vertex n in G. This vertex is mapped onto vertex m = f_xor(n) in R, whose
label in binary code (m(d-1), ..., m(0)) is obtained from that of n by exclusive-or operations on its bits,
where XOR(a, b) denotes the exclusive-or of bits a and b. Figure 5 shows an example for d = 4.
Figure 4: a) Dilations for the standard and byweight embeddings (d=3). Executing a CC 3-cube algorithm on a line using: b) the standard embedding and c) the byweight embedding. The figure distinguishes computation, communication, and waiting intervals.
3.2. General Case
The xor embedding of a d-dimensional hypercube onto a (2^{d_1}, 2^{d_2}, ..., 2^{d_c}) c-dimensional torus
such that d_1 + d_2 + ... + d_c = d is now presented.
Let us first define K_j in the following way: K_1 = 0 and, for every 1 < j <= c+1, K_j = K_{j-1} + d_{j-1}
(that is, K_j = d_1 + ... + d_{j-1}).
Let G be the graph which represents the d-cube and T be the graph which represents the torus.
Then, vertex n of G is mapped onto vertex (m_1, m_2, ..., m_c) = f_xor(n) in T by applying the
one-dimensional xor embedding, within each dimension j of the torus, to the group of d_j bits of n
in positions K_j to K_{j+1} - 1.
Figure 6 shows an example for d = 6. It can be noted that both the standard and the xor
embedding of a d-cube onto a c-dimensional torus can be viewed as multiple embeddings of
smaller hypercubes onto rings. For instance, in figure 6, nodes 8 to 15 of the 6-cube
constitute a 3-cube that is mapped onto the 8 nodes of the second row of the torus, which
constitute a ring. This embedding is again a xor embedding.
Note the simplicity of function f xor (n). This function, which is used very frequently for
routing messages during the execution of the CC d-cube algorithm, consists of simple bit
operations and its computational cost is negligible.
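The exact bit-level formula of f_xor is not legible in this extraction. Purely as an illustration of an embedding with the distance spectrum stated later in Section 4.2.2 (D_i = 2^i for i <= d-2 and D_{d-1} = 2^{d-2}), the sketch below uses one candidate mapping that replaces bit d-2 of the label with the exclusive-or of bits d-2 and d-1; it verifies the constant distances but should not be taken as the authors' published formula.

#include <assert.h>
#include <stdio.h>

static int ring_distance(int a, int b, int N) {
    int diff = a > b ? a - b : b - a;
    return diff < N - diff ? diff : N - diff;
}

/* Candidate xor-style embedding of a d-cube onto a ring of 2^d nodes (d >= 2):
 * bit d-2 of the ring label is n(d-2) XOR n(d-1); all other bits are kept.
 * NOT necessarily the published f_xor, only one mapping with the same
 * constant distances (2^i for i <= d-2, and 2^(d-2) for dimension d-1). */
static int f_xor_candidate(int n, int d) {
    int bit = ((n >> (d - 2)) ^ (n >> (d - 1))) & 1;
    return (n & ~(1 << (d - 2))) | (bit << (d - 2));
}

int main(void) {
    int d = 4, N = 1 << d;
    for (int i = 0; i < d; i++) {
        int D = -1;
        for (int n = 0; n < N; n++) {
            int dist = ring_distance(f_xor_candidate(n, d),
                                     f_xor_candidate(n ^ (1 << i), d), N);
            if (D < 0) D = dist;
            assert(dist == D);             /* constant distance in dimension i */
        }
        printf("D_%d = %d\n", i, D);       /* prints 1, 2, 4, 4 for d = 4 */
    }
    return 0;
}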
4. Performance analysis
In this section, the performance of the xor embedding and the standard embedding are compared
using a set of different performance metrics. Most of these metrics were used by Matic in [10] to
evaluate the standard embedding for two-dimensional meshes and toruses. Here, the
corresponding expressions for both the standard and the xor embeddings of a d-cube onto a
c-dimensional torus are derived.

Figure 5: A xor embedding of a 4-cube onto a ring. The labels indicate which node of the d-cube is mapped onto the corresponding node of the ring.

In some cases where the general expressions are not easy to compare, we derive the expression
corresponding to the particular case of a squared torus. A squared torus is a (2^{d/c}, 2^{d/c}, ..., 2^{d/c})
c-dimensional torus, that is, a torus all of whose dimensions have the same size. The list of metrics is the following:
. The execution time (T_f(d_1, d_2, ..., d_c)). This represents the execution time of a CC
(d_1 + d_2 + ... + d_c)-cube algorithm onto a (2^{d_1}, 2^{d_2}, ..., 2^{d_c}) c-dimensional torus when the
embedding f is used as the mapping function.
. The links dilation spectrum (A_f^{d_1 d_2 ... d_c}(D)). This gives the number of links with
dilation D when a (d_1 + d_2 + ... + d_c)-cube is embedded onto a (2^{d_1}, 2^{d_2}, ..., 2^{d_c})
c-dimensional torus as defined by the mapping function f.
. The longest dilation. This is the maximum dilation of any link of
the hypercube when it is embedded onto a (2^{d_1}, 2^{d_2}, ..., 2^{d_c}) c-dimensional torus as defined
by the embedding f.
. The total dilation. This represents the sum of the dilations of all links
of the hypercube.
. The maximum load (l_max) and minimum load (l_min).
The load of a node due to communication tasks is measured as the number of links of the
hypercube that traverse that particular node (those links that begin or finish at that node
are not considered). These parameters give the maximum and minimum value of the load
of any node as a result of using the embedding f.
. The average load (l_ave). This is the average load of a node due to
communication tasks.

Figure 6: Xor embedding of a 6-cube onto a (8,8) torus. The wrap-around links are not shown, for clarity.
4.1. Execution time
When using an embedding as a mapping function of a parallel algorithm onto a multicomputer,
the most important performance measure of the embedding is the time that the execution of the
algorithm takes as a result of using such a mapping.
Let T a be the duration of the arithmetic computation phase in every stage of the CC d-cube
algorithm, when it is executed on the target multicomputer. Let T c be the cost of sending a
message through a point to point link on the multicomputer.
The time to execute a CC d-cube algorithm on a multicomputer with 2^d nodes, using the
embedding f, can be expressed as
T_f = d T_a + T_cf
where T_cf is the cost of the communication component of the CC d-cube algorithm. T_cf can be
expressed as follows:
T_cf = max_n T_{d-1}(n)                                                          (a)
T_i(n) = max( T_{i-1}(n), T_{i-1}(N_i(n)) ) + D_i(n) T_c,  with T_{-1}(n) = 0    (b)
In the above expressions, T_i(n) is the cost of the communication component for process n
from the beginning of the execution to the end of stage i. Expression (a) indicates that T_cf is equal
to the highest communication component cost of any process at the end of the d stages of the CC
d-cube algorithm. Expression (b) gives the communication component cost for process n at the
end of stage i. In this stage process n must exchange information with its neighbor N_i(n). The cost
of exchanging this information is D_i(n) T_c (since a store and forward routing is assumed).
However, this exchange cannot start until both processes n and N_i(n) are ready to do it. In general,
either process n or process N_i(n) will have to wait for its neighbor to arrive at the point at which
communication can be started. This is why the term "max" appears in expression (b). These idle
intervals were called waiting intervals in figure 4.
Obviously, if the multicomputer has a d-cube interconnection topology then the best
embedding is the one that maps neighbor processes onto adjacent nodes (a dilation-1 embedding).
In this case D_i(n) = 1 for every i and n, and the execution time is d (T_a + T_c).
If the embedding has constant distances then D_i(n) = D_i for every n. In this case, the time to
execute a CC d-cube algorithm onto a multicomputer, as defined by an embedding f, is
T_f(d_1, d_2, ..., d_c) = d T_a + T_c sum_{i=0}^{d-1} D_i = d (T_a + D_a T_c)
where D_a is the average distance of the embedding:
D_a = (1/d) sum_{i=0}^{d-1} D_i
For a (2^{d_1}, ..., 2^{d_c}) c-dimensional torus, the average distance of the standard embedding is
D_a^std = (1/d) sum_{j=1}^{c} (2^{d_j} - 1)
and the average distance corresponding to the xor embedding is
D_a^xor = (1/d) sum_{j=1}^{c} (3 · 2^{d_j - 2} - 1)
Since the execution time of the CC d-cube algorithm is proportional to the average distance
of the embedding, we can conclude that the standard embedding results in about a 33% increase
in the execution time when compared with the xor embedding.
Obviously, the embedding with constant distances which minimizes the execution time of
the CC d-cube algorithm is the one whose average distance D_a is minimum. An embedding with such
a property is said to be optimal. The standard embedding is optimal for meshes of any dimension
but not for toruses, since we have just seen that the xor embedding outperforms it. In addition, it
will be proven in section 5 that the xor embedding is optimal for one-dimensional toruses.
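The ring-case comparison can be checked numerically under this constant-distance model; the sketch below (illustrative only, with arbitrary values for T_a and T_c) evaluates T_f = d (T_a + D_a T_c) for the two distance sets used in this section.

#include <stdio.h>

int main(void) {
    int d = 10;
    double Ta = 50.0, Tc = 1.0;       /* arbitrary per-stage compute and per-hop costs */

    /* Sum of distances on a ring of 2^d nodes:
     * standard: 2^0 + ... + 2^(d-1) = 2^d - 1
     * xor:      2^0 + ... + 2^(d-2) + 2^(d-2) = 3*2^(d-2) - 1 */
    double sum_std = (double)((1 << d) - 1);
    double sum_xor = 3.0 * (1 << (d - 2)) - 1.0;

    double Da_std = sum_std / d, Da_xor = sum_xor / d;
    double T_std = d * (Ta + Da_std * Tc);
    double T_xor = d * (Ta + Da_xor * Tc);

    printf("D_a std = %.2f, D_a xor = %.2f (ratio %.3f)\n",
           Da_std, Da_xor, Da_std / Da_xor);   /* ratio tends to 4/3 */
    printf("T_std = %.1f, T_xor = %.1f\n", T_std, T_xor);
    return 0;
}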
4.2. Links dilation spectrum
4.2.1. Standard embedding on rings
Here, the links dilation spectrum for the standard embedding in the case of a one-dimensional
torus is derived.
Notice that any node of the hypercube has its neighbor in dimension i at distance 2^i in the
torus. Since there are 2^d nodes, we have 2^{d-1} links with dilation 2^i for each i in [0, d-1]. We can
then conclude that the links dilation spectrum is A_std^d(D) = 2^{d-1} if D = 2^i for some i in [0, d-1],
and A_std^d(D) = 0 otherwise.
4.2.2. Xor embedding on rings
In the xor embedding for rings, every node has a neighbor at distance 2^i for each i in [0, d-2] and
two neighbors at distance 2^{d-2}. In consequence, the links dilation spectrum is as follows:
A_xor^d(D) = 2^{d-1} if D = 2^i for some i in [0, d-3], A_xor^d(2^{d-2}) = 2^d, and A_xor^d(D) = 0 for any other D.
4.2.3. General case
The links dilation spectrum for both the standard and the xor embeddings can be computed from the
spectrum of the one-dimensional case using the following expression:
A_f^{d_1 d_2 ... d_c}(D) = sum_{j=1}^{c} 2^{d - d_j} A_f^{d_j}(D)
In the particular case of a squared c-dimensional torus, for the standard embedding we have that
A_std^{d/c ... d/c}(D) = c 2^{d-1} if D = 2^i for some i in [0, d/c - 1], and 0 otherwise,
and, assuming d/c >= 2, for the xor embedding the corresponding expression is
A_xor^{d/c ... d/c}(D) = c 2^{d-1} if D = 2^i for some i in [0, d/c - 3], c 2^d if D = 2^{d/c - 2}, and 0 otherwise.
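The spectrum can also be obtained by direct counting; the sketch below (an illustration only) counts, for a given embedding array f over a ring, how many hypercube links have each dilation.

#include <stdio.h>
#include <string.h>

static int ring_distance(int a, int b, int N) {
    int diff = a > b ? a - b : b - a;
    return diff < N - diff ? diff : N - diff;
}

/* Count links of each dilation for an embedding f of a d-cube onto a 2^d ring. */
static void spectrum(const int *f, int d, int *A /* size 2^(d-1)+1 */) {
    int N = 1 << d;
    memset(A, 0, ((N >> 1) + 1) * sizeof(int));
    for (int n = 0; n < N; n++)
        for (int i = 0; i < d; i++) {
            int m = n ^ (1 << i);
            if (n < m)                       /* count each link once */
                A[ring_distance(f[n], f[m], N)]++;
        }
}

int main(void) {
    enum { D = 4, N = 1 << D };
    int f[N], A[(N >> 1) + 1];
    for (int n = 0; n < N; n++) f[n] = n;    /* standard embedding */
    spectrum(f, D, A);
    for (int dist = 1; dist <= (N >> 1); dist++)
        if (A[dist]) printf("A(%d) = %d\n", dist, A[dist]);
    /* prints A(1) = A(2) = A(4) = A(8) = 8, i.e., 2^(d-1) links at each power of two */
    return 0;
}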
4.3. Longest dilation
The longest dilation can be obtained from the links dilation spectrum functions previously
developed. For the standard embedding we have that the longest dilation is max_{j in [1,c]} 2^{d_j - 1},
and for the xor embedding the corresponding expression is (assuming d_j >= 2 for every j)
max_{j in [1,c]} 2^{d_j - 2}.
It can be seen that the longest dilation of the xor embedding is 50% shorter than that of the
standard embedding.
4.4. Total dilation
This parameter can also be computed using the links dilation spectrum. It is given by the
following expression:
total dilation = sum_D D · A_f^{d_1 d_2 ... d_c}(D)
where the sum extends over all possible dilations D. Next, this expression is further developed for both the standard and the xor
embeddings and for some particular toruses.
In the case of the standard embedding on rings we have that the total dilation is 2^{d-1} (2^d - 1),
whereas for the xor embedding on rings the total dilation is 2^{d-1} (3 · 2^{d-2} - 1).
In the case of a squared c-dimensional torus (with d/c >= 2) we have c 2^{d-1} (2^{d/c} - 1) for the
standard embedding and c 2^{d-1} (3 · 2^{d/c - 2} - 1) for the xor embedding.
Notice that the total dilation of the standard embedding is about 33% higher than that of the
xor embedding in both cases.
4.5. Maximum and minimum load
In this section, the load due to communication tasks of every node is analyzed. The objective is
to determine the value for the most loaded node and the least loaded one.
4.5.1. Standard embedding on rings
Assume that a d-dimensional hypercube is to be embedded onto a one-dimensional torus with 2^d
nodes. Let l_std^i(n) be the load of node n due to the links whose dilation is 2^i. Notice that
l_std^i(n) is a periodic function of n with period 2^{i+1}.
Figure 7 illustrates an example for d = 4: the figure shows the load of every
node due to links whose dilation is 2^2.
Let l_std^{i,i-1}(n) be the load of node n due to links whose dilation is either 2^i or 2^{i-1}, that is,
l_std^{i,i-1}(n) = l_std^i(n) + l_std^{i-1}(n). Again, l_std^{i,i-1}(n) is a periodic function with period 2^{i+1}.
Figure 8 shows graphically how these expressions were obtained.
Figure 7: Load of each node due to links with dilation equal to 4 for the standard embedding of a 4-cube onto a ring.
Now, the total load of a given node, which is denoted by l_std(n), can be computed by adding
the contributions of all the dimensions, grouped in pairs: if d is even, l_std(n) is the sum of the
pair functions l_std^{i,i-1}(n) for i = d-1, d-3, ..., 1, and if d is odd it is the sum of the pair functions
for i = d-1, d-3, ..., 2 plus l_std^0(n), which is null because links with dilation 1 do not traverse any
intermediate node. Obviously, the maximum load is l_std^max(d) = max_n l_std(n).

Figure 8: Computing l_std^{i,i-1}(n) from l_std^i(n) and l_std^{i-1}(n).

Due to the fact that the period of l_std^{i-2,i-3}(n) is four times smaller than the period of
l_std^{i,i-1}(n), there are always two full periods of l_std^{i-2,i-3}(n) inside the range of nodes where
l_std^{i,i-1}(n) is maximum. In consequence, there is always at least one node n at which every pair
function gets its maximum value (see figure 9). Therefore, the maximum load of the standard
embedding on rings, l_std^max(d), is obtained by adding up the maximum values of the individual
pair functions.

Figure 9: This figure illustrates that l_std^{i,i-1}(n) is maximum for every n belonging to a range that contains two periods of l_std^{i-2,i-3}(n).

Regarding the minimum load, it can be seen that nodes 0 and 2^d - 1 have a null load; then,
l_std^min(d) = 0.
4.5.2. Xor embedding on rings
Notice that the load of a node due to links whose dilation is less than 2^{d-2} is the same for both the
standard and the xor embedding, that is, l_xor^i(n) = l_std^i(n) for every i in [0, d-3]. In consequence,
the joint load due to these dimensions behaves exactly like the total load of a standard embedding
of a (d-2)-cube onto a ring, whose maximum is l_std^max(d-2) and whose minimum is 0.
The load due to links whose dilation is 2^{d-2} is equal to 2^{d-2} - 1 for every node. This is
illustrated in figure 10 by means of a particular example. Since this is a constant function, we can
conclude that
l_xor^max(d) = l_std^max(d-2) + 2^{d-2} - 1
Regarding the minimum load, we have that
l_xor^min(d) = 2^{d-2} - 1
since l_std^min(d-2) = 0.
4.5.3. General case
Since both the standard and xor embeddings of a hypercube onto a c-dimensional torus can be
regarded as several embeddings of smaller hypercubes onto rings, there is always at least one
node for which the load is maximum in all the dimensions of the torus, and at least one other node
for which the load is minimum in all the dimensions. Then, it follows that the maximum (minimum)
load of the embedding onto the torus is the sum, over the c dimensions of the torus, of the maximum
(minimum) loads of the corresponding embeddings onto rings.

Figure 10: Load of each node due to links with dilation equal to 4 for the xor embedding of a 4-cube onto a ring.

Table 1 compares the maximum and minimum load of both embeddings onto different
toruses. We can conclude that the xor embedding has a higher minimum load but a lower
maximum load. That is, the load due to communication tasks is more evenly distributed among
the nodes, which is a desirable property.
4.6. Average load
Taking into account that a link with dilation D results in a unitary additional load to D-1 nodes,
the average load of the nodes due to communication tasks can be computed from the links dilation
spectrum using the following expression for both embeddings:
l_ave^{d_1 d_2 ... d_c} = (1 / 2^d) sum_D (D - 1) A_f^{d_1 d_2 ... d_c}(D)
where the sum extends over all possible dilations D. For the particular case of a ring, the average load is
l_ave^std(d) = (2^d - d - 1) / 2     and     l_ave^xor(d) = (3 · 2^{d-2} - d - 1) / 2

Table 1: Maximum and minimum load for both the standard and the xor embeddings.

For a c-dimensional squared torus the average load is
l_ave^std = c (2^{d/c} - d/c - 1) / 2     and     l_ave^xor = c (3 · 2^{d/c - 2} - d/c - 1) / 2
In both cases, the average load of the standard embedding is about 33% higher than that of
the xor embedding. The difference is even higher for small hypercubes. We can then conclude
that for the execution of any parallel algorithm with a hypercube communication topology the
xor embedding will result in considerably fewer communication conflicts. Notice that in the
case of the CC d-cube algorithms analyzed in this paper, due to their particular structure, conflicts
never occur for either embedding.
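The relation "a link with dilation D adds one unit of load to D-1 nodes" can be checked directly; the sketch below (illustrative only) derives the average load of the standard embedding on a ring from its dilation spectrum and compares it with the closed form given above.

#include <stdio.h>

int main(void) {
    int d = 6;
    double N = (double)(1 << d);

    /* Standard embedding on a ring: 2^(d-1) links at each dilation 2^i. */
    double total_extra = 0.0;              /* sum over links of (dilation - 1) */
    for (int i = 0; i < d; i++)
        total_extra += (double)(1 << (d - 1)) * ((1 << i) - 1);

    double l_ave = total_extra / N;
    double closed_form = ((1 << d) - d - 1) / 2.0;
    printf("average load: counted %.3f, closed form %.3f\n", l_ave, closed_form);
    return 0;
}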
5. Proof of optimality of f xor for rings
The average distance as defined in section 4.1, will be used as the main criterion to measure the
goodness of any embedding with constant distances, since minimizing the average distance
implies minimizing the execution time of CC d-cube algorithms. In this section, it is proven that
the xor embedding has the minimum average distance for embeddings with constant distances of
hypercubes onto rings.
To show that the xor embedding is optimal for rings, we will prove that the average distance
of any embedding with constant distances is higher than or equal to the average distance of the
f xor embedding. This is stated by theorem 10. Before this theorem, several lemmas and
corollaries that are needed to prove that result are presented. First, a lower bound for the sum of
any set of d-1distances corresponding to any embedding with constant distances is found. Then,
a lower bound for the highest distance of the embedding is computed. Both together give a lower
bound for the average distance of any embedding with constant distances. This lower bound is
the average distance of the f xor embedding, which proves its optimality.
Given any node n of a hypercube, we define N_D(n), where D is any subset of
dimensions of the hypercube, as the node that is reached by starting at node n and moving through
every dimension in D, one after another, using each dimension exactly once (as we know, the
order in which the dimensions are used does not matter; the result will be the same). For instance,
if D = {1,3}, then N_D(n) = N_1(N_3(n)) = N_3(N_1(n)).
In the following, N_i(N_j(n)) will be written as N_i N_j(n). The parentheses are removed for the
sake of clarity, but the meaning referring to the order in which dimensions are used is preserved.
That is, N_i N_j(n) means that we move from node n first using dimension j and then dimension i.
The first lemma of this section proves that the sum of any subset of d-1 distances
corresponding to d-1 dimensions must be at least 2 d-1 -1.
Lemma 1: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring. Let V be any subset with d-1 of the dimensions of the d-cube, that is, V contains all the
dimensions of the d-cube excepting one. Then, sum_{i in V} D_i >= 2^{d-1} - 1.
Proof: Let H(d,n,V) be the subset of nodes of a d-cube that consists of nodes n and N W (n) for
every W-V. Obviously, the number of elements in H(d,n,V) is two to the power of the number
of elements in V. In particular, if V has d-1 elements, then H(d,n,V) consists of 2 d-1 elements.
Given any set of 2 d-1 nodes of a ring, there will always be two nodes in this set whose distance is
at least 2 d-1 -1. Since it is possible to go from any node in H(d,n,V) to any other node in the same
set, using each dimension in V at most once, the distances corresponding to the dimensions in V
must add up to at least 2 d-1 -1. q
The next lemma states that if two distances are equal when embedding a d-cube onto a ring,
then these distances must be equal to 2^{d-2}.
Lemma 2: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes. If D_i = D_j = K (i != j), then K = 2^{d-2} (in the following, x ≡_n y means that
x ≡ y (mod 2^n); when n = d we will just write x ≡ y).
Proof: Suppose the nodes of the ring are labeled clockwise from 0 to 2^d - 1. Let us take any
node n of the hypercube and let x = f_d(n). Suppose that D_i = D_j = K (i != j). Then, y = f_d(N_i(n)) is
equal to either (x+K) mod 2^d or (x-K) mod 2^d; for short we will write f_d(N_i(n)) = (x ± K) mod 2^d.
We also have that z = f_d(N_j(n)) = (x ± K) mod 2^d. Since y != z, the only possible solution is
either y = (x+K) mod 2^d and z = (x-K) mod 2^d, or y = (x-K) mod 2^d and z = (x+K) mod 2^d. Since both
situations are symmetrical, let us suppose the first one holds without loss of generality.
We also have that N_j N_i(n) is the same node as N_i N_j(n). Let w = f_d(N_j N_i(n)). Then w = (y+K) mod 2^d
(it cannot be equal to (y-K) mod 2^d since (y-K) mod 2^d = x, and w must be different). Since
w is also equal to f_d(N_i N_j(n)), w = (z-K) mod 2^d. Therefore, y + K ≡ z - K, that is, x + 2K ≡ x - 2K. This
means that 4K ≡ 0, which implies that K ≡_{d-2} 0. Then K = k · 2^{d-2} for some integer k > 0 (distances
must be positive integers). However, K cannot be a multiple of 2^{d-1} because this would imply that
y = z. In consequence, K = 2^{d-2}. q
Given two nodes x and y of a ring we say that y is clockwise in relation to x if the
shortest path from x to y is clockwise. Otherwise we say that y is counterclockwise in relation to
x. Obviously, if y is clockwise in relation to x, then x is counterclockwise in relation to y.
Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring. Let us define S = {i in [0, d-1] : D_i < 2^{d-2}}; that is, S (it stands for Short) is the set of
dimensions whose corresponding distances are less than 2^{d-2} when the hypercube is embedded
onto the ring. Given any node n of the hypercube, we define C(n) = {i in S : f_d(N_i(n)) is
counterclockwise in relation to f_d(n)}; the remaining dimensions of S are those whose neighbor
f_d(N_i(n)) is clockwise in relation to f_d(n). Obviously, C(n) ⊆ S.
Given any node, its neighbor in a given dimension of the hypercube is at a fixed distance in
the ring, but it can be clockwise or counterclockwise. The next two lemmas prove that, if we take
into account only those dimensions of the hypercube such that their corresponding D i are less
than 2 d-2 , there is always a node which has all its neighbors in those dimensions clockwise in the
ring (there is another node with all the neighbors in those dimensions being counterclockwise).
Lemma 3: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes. Let n be any node of the hypercube. Then, for any j in C(n), C(N_j(n)) ⊆ C(n) - {j}.
Proof: It is obvious that j ∉ C(N_j(n)), because N_j N_j(n) = n: since N_j(n) is counterclockwise
in relation to n, n = N_j N_j(n) is clockwise in relation to N_j(n).
It only remains to be proved that, for every k ∉ C(n), k ∉ C(N_j(n)). Let us suppose that there is
a k such that k ∉ C(n) and k in C(N_j(n)). Assume that the nodes of the ring are labeled clockwise
from 0 to 2^d - 1 and let x = f_d(n). Since N_j(n) is counterclockwise in relation to n, then f_d(N_j(n))
= (x - D_j) mod 2^d. By hypothesis N_k N_j(n) is counterclockwise in relation to N_j(n), so f_d(N_k N_j(n))
= (x - D_j - D_k) mod 2^d. Since N_k(n) is clockwise in relation to n, then f_d(N_k(n)) = (x + D_k) mod 2^d and
f_d(N_j N_k(n)) = (x + D_k ± D_j) mod 2^d. Because N_k N_j(n) and N_j N_k(n) are the same node, x - D_j - D_k ≡
x + D_k ± D_j. This implies that either D_j + D_k ≡_{d-1} 0 or D_k ≡_{d-1} 0; but none of these can hold since 0
< D_j, D_k < 2^{d-2}. Therefore the hypothesis was wrong and then k ∉ C(N_j(n)). q
Lemma 4: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes. Then, there is a node n of the hypercube such that C(n) = ∅.
Proof: The proof gives us an algorithm to find this node n. We can start from any node m of
the hypercube. If C(m) = ∅, we are done. If not, take any i in C(m) and move to N_i(m). Lemma 3
states that the number of elements in C(N_i(m)) is strictly less than the number of elements in
C(m). Repeating this step we will finally find a node n such that C(n) = ∅, that is, a node all of
whose neighbors in the dimensions of S are clockwise. q
From now on we will refer to the node designated by lemma 4 as node c of the hypercube.
The nodes of the ring can be labeled in the most convenient way for us. From now on, the node
f d (c) will be labeled as node 0, and the rest of the nodes of the ring will be labeled clockwise from
0 to 2^d - 1. By the above lemma, f_d(N_i(c)) = D_i for every i in S. The next lemma states that for any
i in S, the neighbors of N_i(c) in every dimension j in S - {i} are clockwise. Obviously, the
neighbor of N_i(c) in dimension i is counterclockwise, since it is c.
Lemma 5: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes. Then, for any i in S, C(N_i(c)) = {i}. This is equivalent to saying that
N_j N_i(c) is clockwise in relation to N_i(c) for every j in S - {i}.
Proof: Suppose there is some j in S - {i} such that j in C(N_i(c)). That means that
f_d(N_j N_i(c)) = (D_i - D_j) mod 2^d. Since N_j N_i(c) = N_i N_j(c) and f_d(N_j(c)) = D_j, we also have
f_d(N_i N_j(c)) = (D_j ± D_i) mod 2^d. Hence D_i - D_j ≡ D_j ± D_i, which implies that either
. D_i ≡_{d-1} D_j, which is not possible because D_i != D_j (by lemma 2) and D_i, D_j < 2^{d-2}, or
. D_j ≡_{d-1} 0, which cannot hold since 0 < D_j < 2^{d-2}.
In consequence, for every j in S - {i}, j ∉ C(N_i(c)), and then C(N_i(c)) = {i}. q
Next, it is proven that, given any subset of dimensions W ⊆ S, the neighbors of N_W(c) in every
dimension of S - W are clockwise.
Lemma 6: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes. Then, for any W ⊆ S, C(N_W(c)) ∩ (S - W) = ∅.
Proof: The lemma will be proved by induction over the number of elements in W.
If W has just one element the lemma holds (it has been proved in lemma 5, which can be seen
as a particular case of lemma 6).
Assume that the lemma holds for any set with fewer than N elements, and let us suppose that it
does not hold for a set W with N elements. This means that there is a dimension k in S - W such that
f_d(N_k(N_W(c))) is counterclockwise in relation to f_d(N_W(c)).
The fact that the lemma holds for any subset with fewer than N elements implies that
f_d(N_D(c)) = (sum_{j in D} D_j) mod 2^d for any subset D ⊆ S with N elements (every hop is
clockwise). Therefore, since W has N elements, f_d(N_W(c)) = (sum_{j in W} D_j) mod 2^d; call this
value r.
Let V be equal to the set W after taking any element of it and replacing it by k; that is, let
i in W be any element of W, and let V = (W - {i}) ∪ {k}. Since V also has N elements,
f_d(N_V(c)) = (sum_{j in V} D_j) mod 2^d; call this value s.
We know that N_k(N_W(c)) and N_i(N_V(c)) are the same node. Since N_k(N_W(c)) is counterclockwise
in relation to N_W(c), the left part of the equality must be equal to (r - D_k) mod 2^d. The right part is
equal to (s ± D_i) mod 2^d. In consequence, r - D_k ≡ s ± D_i. Substituting r and s by their
corresponding expressions and simplifying, we obtain that this equation can be satisfied in just
two ways: either D_k ≡_{d-1} 0 or D_i ≡_{d-1} D_k. None of them can hold since 0 < D_i, D_k < 2^{d-2}
and, as lemma 2 states, D_i and D_k cannot be equal.
We then conclude that for every k in S - W, f_d(N_k(N_W(c))) is clockwise in relation to f_d(N_W(c))
and therefore C(N_W(c)) ∩ (S - W) = ∅. q
Corollary 7: If we start from node c and we want to move to node N_W(c) for any W ⊆ S, using
each dimension in W exactly once, any time we move through a dimension in W we will be
moving clockwise in the ring, no matter in which order we use the dimensions in W; that is,
f_d(N_W(c)) = (sum_{j in W} D_j) mod 2^d.
Proof: It is a direct implication of lemmas 4 and 6. The former states that node c has all its
neighbors in S clockwise, so the first hop must be necessarily clockwise. Then lemma 6 says that
if we have moved from node c to a node r using a subset W of dimensions of S, making use of
each dimension just once, all the neighbors of node r in any dimension not used yet (i.e.
belonging to S-W) are clockwise, so the next hop must also be clockwise. q
Next, it is proven that, when embedding a hypercube onto a ring with constant distances, if
all distances are lower than 2 d-2 , the sum of all distances must be at least 2 d -1.
Lemma 8: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube onto
a ring with 2^d nodes such that every D_i < 2^{d-2}. If such an embedding exists, then sum_{i=0}^{d-1} D_i >= 2^d - 1.
Proof: Since all distances are less than 2 d-2 , S (set of dimensions whose distance is less than
consists of all dimensions of the d-cube. Then, lemma 4 states that there must be a node c
such that all its neighbors are clockwise. In addition, corollary 7 says that it is possible to go from
node c to any node of the hypercube in at most d hops (each one corresponding to a different
dimension), going always clockwise. In particular, we can go from node c to the node just
next to it counterclockwise. Moving always clockwise, the distance between these two nodes is
2^d - 1, so the sum of all distances must be at least equal to this amount. q
Corollary 9: An embedding with constant distances such that all distances are less than 2 d-2
is not optimal, if it exists. In this context, to be optimal means that it has the lowest average
distance for embeddings with constant distances. In other words, the optimal embedding must
have at least one distance greater than or equal to 2 d-2 .
Proof: Lemma 8 states that, if such an embedding exists, then the sum of its distances is at least
2^d - 1. To prove this corollary it suffices to find an embedding whose distances add up to less than
this amount. Such an embedding can be the xor embedding, the one proposed in section 3. q
We are now ready to prove that the xor embedding is optimal. This is proved in the next
theorem and its corollary.
Theorem 10: Let f_d be an embedding with constant distances (D_i, i in [0, d-1]) of a d-cube
onto a ring with 2^d nodes. Then the average distance of f_d is at least (3 · 2^{d-2} - 1)/d.
Proof: The proof is based on lemma 1 and corollary 9. The lemma says that the sum of any
subset of d-1 distances is at least 2^{d-1} - 1, so in particular, the sum of the d-1 lowest distances of
the embedding must be at least equal to this amount. Corollary 9 states that the optimal
embedding has at least one distance that is higher than or equal to 2^{d-2}, so in particular, the
highest distance of the embedding must be higher than or equal to 2^{d-2}. Both together imply that
the average distance of f_d is at least (2^{d-1} - 1 + 2^{d-2})/d = (3 · 2^{d-2} - 1)/d. q
Corollary 11: The xor embedding of a d-cube onto a ring proposed in section 3 is optimal in
the sense that it has the lowest average distance for embeddings with constant distances.
Proof: The average distance of the xor embedding is equal to the lower bound introduced in
theorem 10. q
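The optimality result can also be checked exhaustively for very small rings. The C sketch below (an illustration only, practical just for tiny d because it enumerates all placements) searches every constant-distance embedding of a 3-cube onto an 8-node ring and reports the minimum average distance, which matches the bound of theorem 10.

#include <stdio.h>

#define D 3
#define N (1 << D)

static int f[N], used[N];
static double best = 1e9;

/* If the current complete assignment f has constant distances, update the
 * minimum average distance found so far. */
static void check(void) {
    int sum = 0;
    for (int i = 0; i < D; i++) {
        int dist0 = -1;
        for (int n = 0; n < N; n++) {
            int a = f[n], b = f[n ^ (1 << i)];
            int diff = a > b ? a - b : b - a;
            int dist = diff < N - diff ? diff : N - diff;
            if (dist0 < 0) dist0 = dist;
            else if (dist != dist0) return;   /* not constant in dimension i */
        }
        sum += dist0;
    }
    double avg = (double)sum / D;
    if (avg < best) best = avg;
}

static void search(int n) {                   /* assign ring positions to cube nodes */
    if (n == N) { check(); return; }
    for (int pos = 0; pos < N; pos++)
        if (!used[pos]) {
            used[pos] = 1; f[n] = pos;
            search(n + 1);
            used[pos] = 0;
        }
}

int main(void) {
    search(0);
    printf("minimum average distance for d=%d: %.4f (bound (3*2^(d-2)-1)/d = %.4f)\n",
           D, best, (3.0 * (1 << (D - 2)) - 1) / D);
    return 0;
}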
6. Conclusions
This paper focuses on the execution of algorithms with a hypercube communication topology
onto multicomputers with a torus interconnection topology. The problem is tackled by means of
graph embeddings. An embedding of hypercubes onto toruses of any arbitrary dimension has
been presented. This embedding, called xor embedding, belongs to a class of embeddings whose
distinguishing property is that all the links of the same dimension of the hypercube have the same
dilation on the torus. This class of embeddings is called embeddings with constant distances.
Many parallel algorithms with hypercube topology have the property that all the processes
perform the same activity with different data. This activity consists of a number of stages (usually
as many as the number of dimensions of the hypercube), and each stage is composed of a computing
phase followed by a communication phase in which data is exchanged with one of the process's
neighbors. This structure is found in parallel algorithms for FFT and sorting, among others. For
this type of algorithms, called CC d-cube algorithms, constant distances may be desirable
because they imply that the communication phase has the same duration for every process,
avoiding waiting intervals which can degrade performance.
The xor embedding has been compared to the standard embedding using a set of different
performance metrics (execution time of a CC d-cube algorithm, links dilation spectrum, longest
dilation, total dilation, maximum and minimum load, and average load). For all of them the
performance of the xor embedding is significantly better than that of the standard embedding.
For CC d-cube algorithms, the embedding with constant distances that results in the shortest
execution time is that whose average distance is minimum. It has been proven that the average
distance of the xor embedding is minimum for rings (one-dimensional torus), and therefore, it
maximizes the performance of the multicomputer for those algorithms.
Another important property of the xor embedding is the simplicity of the function which
determines the location where a node of the d-cube is found in the target multicomputer.
We are currently working on the generalization of this work in two different directions. First,
we are looking at more general hypercube algorithms. Second, we are considering more general
torus multicomputers in which the number of processing elements is not necessarily a power of
two.
Acknowledgments
This work has been supported by the Ministry of Education and Science of Spain (CICYT
TIC-92/880 and TIC-91/1036) and the CEPBA (European Center for Parallelism in Barcelona).
--R
"Multicomputers: Message-Passing Concurrent Computers"
" Embedding Networks with Ring Connections in Hypercube Machines"
Solving Problems on Concurrent Processors
"The Xor Embedding: Embedding Hypercubes onto Rings and Toruses"
"Optimal Assignments of Numbers to Vertices"
"Optimal Numbering and Isoperimetric Problems on Graphs"
" Embedding Three-Dimensional Meshes in Boolean Cubes by Graph Decomposition"
"Placement of the Processors of a Hypercube"
"Embeddings among Meshes and Tori"
"Emulation of Hypercube Architecture on Nearest-Neighbor Mesh-Connected Processing Elements"
" Introduction to Parallel Algorithms and Architectures: Arrays, Trees, Hypercubes"
" Contention-Free 2D-Mesh Cluster Allocation in Hypercubes"
Interconnection Networks for Multiprocessors and Multicomputers: Theory and Practice
--TR
--CTR
Mikhail S. Tarkov, Youngsong Mun, Jaeyoung Choi, Hyung-Il Choi, Mapping adaptive fuzzy Kohonen clustering network onto distributed image processing system, Parallel Computing, v.28 n.9, p.1239-1256, September 2002
Luis Díaz de Cerio, Miguel Valero-García, Antonio González, Hypercube Algorithms on Mesh Connected Multicomputers, IEEE Transactions on Parallel and Distributed Systems, v.13 n.12, p.1247-1260, December 2002
Otto Wohlmuth, Friedrich Mayer-Lindenberg, A method for the embedding of arbitrary communication topologies into configurable parallel computers, Proceedings of the 1998 ACM symposium on Applied Computing, p.569-574, February 27-March 01, 1998, Atlanta, Georgia, United States
Ali Karci, Generalized parallel divide and conquer on 3D mesh and torus, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.5, p.281-295, May 2005 | graph embeddings;torus multicomputers;hypercubes;mapping of parallel algorithms;scalable distributed memory multiprocessors |
629396 | Performance Considerations of Shared Virtual Memory Machines. | AbstractGeneralized speedup is defined as parallel speed over sequential speed. In this paper the generalized speedup and its relation with other existing performance metrics, such as traditional speedup, efficiency, scalability, etc., are carefully studied. In terms of the introduced asymptotic speed, we show that the difference between the generalized speedup and the traditional speedup lies in the definition of the efficiency of uniprocessor processing, which is a very important issue in shared virtual memory machines. A scientific application has been implemented on a KSR-1 parallel computer. Experimental and theoretical results show that the generalized speedup is distinct from the traditional speedup and provides a more reasonable measurement. In the study of different speedups, an interesting relation between fixed-time and memory-bounded speedup is revealed. Various causes of superlinear speedup are also presented. | Introduction
In recent years parallel processing has enjoyed unprecedented attention from researchers, government
agencies, and industries. This attention is mainly due to the fact that, with the current circuit
technology, parallel processing seems to be the only remaining way to achieve higher performance.
However, while various parallel computers and algorithms have been developed, their performance
evaluation is still elusive. In fact, the more advanced the hardware and software, the more difficult
it is to evaluate the parallel performance. In this paper, targeting recent development of shared
virtual memory machines, we study the generalized speedup [1] performance metric, its relation
with other existing performance metrics, and the implementation issues.
Distributed-memory parallel computers dominate today's parallel computing arena. These
machines, such as the Kendall Square KSR-1, Intel Paragon, TMC CM-5, and IBM SP2, have
successfully delivered high performance computing power for solving some of the so-called "grand-
challenge" problems. From the viewpoint of processes, there are two basic process synchronization
and communication models. One is the shared-memory model in which processes communicate
through shared variables. The other is the message-passing model in which processes communicate
through explicit message passing. The shared-memory model provides a sequential-like program
paradigm. Virtual address space separates the user logical memory from physical memory. This
separation allows an extremely large virtual memory to be provided on a sequential machine when
only a small physical memory is available. Shared virtual address combines the private virtual address
spaces distributed over the nodes of a parallel computer into a globally shared virtual memory
[2]. With shared virtual address space, the shared-memory model supports shared virtual memory,
but requires sophisticated hardware and system support. An example of a distributed-memory machine
which supports shared virtual address space is the Kendall Square KSR-1 1 . Shared virtual
memory simplifies the software development and porting process by enabling even extremely large
programs to run on a single processor before being partitioned and distributed across multiple
processors. However, the memory access of the shared virtual memory is non-uniform [2]. The
access time of local memory and remote memory is different. Running a large program on a small
number of processors is possible but could be very inefficient. The inefficient sequential processing
will lead to a misleading high performance in terms of speedup or efficiency.
Generalized speedup, defined as parallel speed over sequential speed, is a newly proposed performance
metric [1]. In this paper, through both theoretical proofs and experimental results, we
show that generalized speedup provides a more reasonable measurement than traditional speedup.
In the process of studying generalized speedup, the relation between the generalized speedup and
many other metrics, such as efficiency, scaled speedup, and scalability, is also studied. The relation
between fixed-time and memory-bounded scaled speedup is analyzed. Various reasons for superlinearity
in different speedups are also discussed. Results show that the main difference between the
traditional speedup and the generalized speedup is how to evaluate the efficiency of the sequential
processing on a single processor.

1 Traditionally, the message-passing model is bounded by the local memory of the processing processors. With
recent technology advancement, the message-passing model has extended its ability to support shared virtual memory.
The paper is organized as follows. In section 2 we study traditional speedup, including the
scaled speedup concept, and introduce some terminology. Analysis shows that the traditional
speedup, fixed-size or scaled size, may achieve superlinearity on shared virtual memory machines.
Furthermore, with the traditional speedup metric, the slower the remote memory access is, the
larger the speedup. Generalized speedup is studied in Section 3. The term asymptotic speed
is introduced for the measurement of generalized speedup. Analysis shows the differences and
the similarities between the generalized speedup and the traditional speedup. Relations between
different performance metrics are also discussed. Experimental results of a production application
on a Kendall Square KSR-1 parallel computer are given in Section 4. Section 5 contains a summary.
2 The Traditional Speedup
One of the most frequently used performance metrics in parallel processing is speedup. It is defined
as sequential execution time over parallel execution time. Parallel algorithms often exploit
parallelism by sacrificing mathematical efficiency. To measure the true parallel processing gain, the
sequential execution time should be based on a commonly used sequential algorithm. To distinguish
it from other interpretations of speedup, the speedup measured with a commonly used sequential
algorithm has been called absolute speedup [3]. Another widely used interpretation is the relative
speedup [3], which uses the uniprocessor execution time of the parallel algorithm as the sequential
time. There are several reasons to use the relative speedup. First, the performance of an algorithm
varies with the number of processors. Relative speedup measures the variation. Second, relative
speedup avoids the difficulty of choosing the practical sequential algorithm, implementing the sequential
algorithm, and matching the implementation/programming skill between the sequential
algorithm and the parallel algorithm. Also, when problem size is fixed, the time ratio of the chosen
sequential algorithm and the uniprocessor execution of the parallel algorithm is fixed. Therefore,
the relative speedup is proportional to the absolute speedup. Relative speedup is the speedup
commonly used in performance study. In this study we will focus on relative speedup and reserve
the terms traditional speedup and speedup for relative speedup. The concepts and results of this
study can be extended to absolute speedup.
From the problem size point of view, speedup can be divided into the fixed-size speedup and
the scaled speedup. Fixed-size speedup emphasizes how much execution time can be reduced with
parallel processing. Amdahl's law [4] is based on the fixed-size speedup. The scaled speedup is
concentrated on exploring the computational power of parallel computers for solving otherwise
intractable large problems. Depending on the scaling restrictions of the problem size, the scaled
speedup can be classified as the fixed-time speedup [5] and the memory-bounded speedup [6]. As the
number of processors increases, fixed-time speedup scales problem size to meet the fixed execution
time. Then the scaled problem is also solved on an uniprocessor to get the speedup. As the number
of processors increases, memory-bounded speedup scales problem size to utilize the associated
memory increase. A detailed study of the memory-bounded speedup can be found in [6].
Definition 1 Let p and S_p be the number of processors and the speedup with p processors.
. Unitary speedup: S_p = p.
. Linear speedup: S_p = c · p, where c is a constant independent of p.
It is debatable if any machine-algorithm pair can achieve "truly" superlinear speedup. Seven
possible causes of superlinear speedup are listed in Fig. 1. The first four causes in Fig. 1 are
patterned from [7].
1. cache size increased in parallel processing
2. overhead reduced in parallel processing
3. latency hidden in parallel processing
4. randomized algorithms
5. mathematical inefficiency of the serial algorithm
6. higher memory access latency in the sequential processing
7. profile shifting
Figure
1. Causes of Superlinear Speedup.
Cause 1 is unlikely applicable for scaled speedup, since when problem size scales up, by memory
or by time constraint, the cache hit ratio is unlikely to increase. Cause 2 in Fig. 1 can be considered
theoretically [8], but there is no measured superlinear speedup ever attributed to it. Cause 3 does not
exist for relative speedup since both the sequential and parallel execution use the same algorithm.
Since parallel algorithms are often mathematically inefficient, cause 5 is a likely source of superlinear
speedup of relative speedup. A good example of superlinear speedup based on cause 5 can be found in
[9]. Cause 7 will be explained at the end of Section 3, after the generalized speedup is introduced.
With the virtual memory and shared virtual memory architecture, cause 6 can lead to an
extremely high speedup, especially for scaled speedup where an extremely large problem has to be
run on a single processor. Figure 5 shows a measured superlinear speedup on a KSR-1 machine.
The measured superlinear speedup is due to the inherent deficiency of the traditional speedup
metric. To analyze the deficiency of the traditional speedup, we need to introduce the following
definition.
Definition 2 The cost of parallelism i is the ratio of the total number of processor cycles consumed
in order to perform one unit operation of work when i processors are active to the machine clock
rate.
The sequential execution time can be written in terms of work:
Sequential execution time = Amount of work × (Processor cycles per unit of work / Machine clock rate)        (1)
The ratio in the right hand side of Eq. (1), processor cycles per unit of work over machine clock
rate, is the cost of sequential processing.
Work can be defined as arithmetic operations, instructions, transitions, or whatever is needed to
complete the application. In scientific computing the number of floating-point operations
is commonly used to measure work. In general, work may be of different types, and units of different
operations may require different numbers of instruction cycles to finish. (For example, the times
consumed by one division and one multiplication may be different depending on the underlying
machine, and the ratio of operations to memory references may differ between computations.)
The influence of work type on the performance is one of the topics studied in [1]. In this paper, we
study the influence of inefficient memory access on the performance. We assume that there is only
one work type and that any increase in the number of processor cycles is due to inefficient memory
access.
In a shared virtual memory environment, the memory available depends on the system size.
Let W i be the amount of work executed when i processors are active (work performed in all steps
that use i processors), and let W = Σ_{i=1}^{p} W_i be the total amount of work. The cost of parallelism i in
a p processor system, denoted as c p (i; W ), is the elapsed time for one unit operation of work when
i processors are active. Then, W_i · c_p(i; W) gives the accumulated elapsed time where i processors
are active. c p (i; W ) contains both computation time and remote memory access time.
The uniprocessor execution time can be represented in terms of the uniprocessor cost,

T(1) = W · c_p(s; W),

where c_p(s; W) is the cost of sequential processing on a parallel system with p processors. It is different from c_p(1; W), which is the cost of the sequential portion of the parallel processing. Parallel execution time can be represented in terms of parallel cost,

T(p) = Σ_{i=1}^{p} W_i · c_p(i; W).

The traditional speedup is defined as

S_p = T(1) / T(p) = (W · c_p(s; W)) / (Σ_{i=1}^{p} W_i · c_p(i; W)).   (2)

Depending on the architecture's memory hierarchy, in general c_p(i; W) may not equal c_p(s; W)/i [10]. If c_p(i; W) = c/i for some constant c, then

S_p = (c_p(s; W) / c) · (W / Σ_{i=1}^{p} (W_i / i)).   (3)
The first ratio of Eq. (3) is the cost ratio, which gives the influence of memory access delay. The
second ratio,

W / Σ_{i=1}^{p} (W_i / i),   (4)
is the simple analytic model based on degree of parallelism [6]. It assumes that memory access
time is constant as problem size and system size vary. The cost ratio distinguishes the different
performance analysis methods with or without consideration of the memory influence. In general,
cost ratio depends on memory miss ratio, page replacement policy, data reference pattern, etc. Let
remote access ratio be the quotient of the number of remote memory accesses and the total number of memory accesses. For a simple case, if we assume there is no remote access in parallel
processing and the remote access ratio of the sequential processing is (p - 1)/p, then

cost ratio = (1/p) · (1 + (p - 1) · (time per remote access / time per local access)).   (5)

For large p, Equation (5) approximately equals the time per remote access over the time per local access.
Since the remote memory access is much slower than the local memory access under the current
technology, the speedup given by Eq. (3) could be considerably larger than the simple analytic
model (4). In fact, the slower the remote access is, the larger the difference. For the KSR-1, the
time ratio of remote and local access is about 7.5 (see Section 4). Therefore, for p = 32, the cost ratio is about 7.3. For any W = W_p (i.e., a fully parallel workload), under the assumed remote access ratio, we will have a superlinear speedup.
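As an illustration of Eqs. (2)-(5), the short Python sketch below evaluates the cost ratio and the resulting traditional speedup for an assumed, fully parallel work profile; the processor count, access times, and work values are illustrative assumptions, not measurements from this paper.

# Illustrative sketch of Eqs. (2)-(5): traditional speedup when sequential
# processing pays a remote-memory penalty.  All numbers are assumptions.
p = 32
t_local, t_remote = 1.0, 7.5                  # relative access times (ratio ~7.5, as on the KSR-1)
cost_ratio = (1 + (p - 1) * t_remote / t_local) / p   # Eq. (5): about 7.3 for p = 32

c = 1e-6                                      # assumed local cost per unit of work (seconds)
W = {p: 1.0e6}                                # work profile: fully parallel, W_p = W

work = sum(W.values())
T_seq = work * cost_ratio * c                           # W * c_p(s, W): remote accesses included
T_par = sum(w * (c / i) for i, w in W.items())          # sum_i W_i * c_p(i, W), with c_p(i, W) = c / i

S_p = T_seq / T_par                                     # Eq. (2)
simple_model = work / sum(w / i for i, w in W.items())  # Eq. (4)
print(f"cost ratio = {cost_ratio:.2f}, simple model = {simple_model:.0f}, S_p = {S_p:.1f}")
# prints: cost ratio = 7.30, simple model = 32, S_p = 233.5 (> p, an apparent superlinear speedup)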
3 The Generalized Speedup
While parallel computers are designed for solving large problems, a single processor of a parallel
computer is not designed to solve a very large problem. A uniprocessor does not have the computing
power that the parallel system has. While solving a small problem is inappropriate on a parallel
system, solving a large problem on a single processor is not appropriate either. To create a useful
comparison, we need a metric that can vary problem sizes for uniprocessor and multiple processors.
Generalized speedup [1] is one such metric.
Generalized Speedup = Parallel Speed / Sequential Speed.   (6)
Speed is defined as the quotient of work and elapsed time. Parallel speed might be based on scaled
parallel work. Sequential speed might be based on the unscaled uniprocessor work. By definition,
generalized speedup measures the speed improvement of parallel processing over sequential pro-
cessing. In contrast, the traditional speedup (2) measures time reduction of parallel processing. If
the problem size (work) for both parallel and sequential processing are the same, the generalized
speedup is the same as the traditional speedup. From this point of view, the traditional speedup is
a special case of the generalized speedup. For this and for historical reasons, we sometimes call the
traditional speedup the speedup, and call the speedup given in Eq. (6) the generalized speedup.
Like the traditional speedup, the generalized speedup can also be further divided into fixed-
size, fixed-time, and memory-bounded speedup. Unlike the traditional speedup, for the generalized
speedup, the scaled problem is solved only on multiple processors. The fixed-time generalized
speedup is sizeup [1]. The fixed-time benchmark SLALOM [11] is based on sizeup.
If memory access time is fixed, one might always assume that the uniprocessor cost c_p(s; W) will be stabilized after some initial decrease (due to initialization, loop overhead, etc.), assuming the memory is large enough. When cache and remote memory access are considered, the cost will increase when a slower memory has to be accessed. Figure 2 depicts the typical cost variation pattern.
From Eq. (1), we can see that uniprocessor speed is the reciprocal of uniprocessor cost. When
the cost reaches its lowest value, the speed reaches its highest value. The uniprocessor speed corresponding to the stabilized main-memory cost is called the asymptotic speed (of the uniprocessor).
Asymptotic speed represents the performance of the sequential processing with efficient memory
access. The asymptotic speed is the appropriate sequential speed for Eq. (6). For memory-bounded
speedup, the appropriate memory bound is the largest problem size which can maintain
the asymptotic speed. After choosing the asymptotic speed as the sequential speed, the corresponding
asymptotic cost has only local access and is independent of the problem size. We use c(s; W_0) to denote the corresponding asymptotic cost, where W_0 is a problem size which achieves the asymptotic speed. If there is no remote access in parallel processing, as assumed in Section 2, then c(s; W_0)/c_p(p; W_0) = p and, by Eq. (3), the corresponding speedup equals the simple speedup (4),
Figure 2. Cost Variation Pattern. (Uniprocessor cost versus problem size, with regions: fits in cache, fits in main memory, fits in remote memory, insufficient memory; the sequential execution time increases with problem size.)
which does not consider the influence of memory access time. In general, parallel work W is not
the same as W_0, and c_p(i; W) may not equal c(s; W_0)/i. So, in general, we have

Generalized Speedup = (W · c(s; W_0)) / (Σ_{i=1}^{p} W_i · c_p(i; W)).   (7)
Equation (7) is another form of the generalized speedup. It is a quotient of sequential and parallel
time as is traditional speedup (2). The difference is that, in Eq. (7), the sequential time is based
on the asymptotic speed. When remote memory is needed for sequential processing, c(s; W 0 ) is
smaller than c p (s; W ). Therefore, the generalized speedup gives a smaller speedup than traditional
speedup.
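A minimal Python sketch contrasting Eq. (2) with Eq. (7) follows; the asymptotic cost, the sequential cost on the scaled problem, and the work profile are hypothetical values chosen only to mirror the discussion above.

# Sketch: traditional speedup (Eq. (2)) versus generalized speedup (Eq. (7)).
# The costs and the work profile are hypothetical.
p = 32
c_asym = 1.0e-6          # c(s, W0): asymptotic (all-local) cost per unit of work
c_seq  = 7.3e-6          # c_p(s, W): sequential cost actually paid on the scaled problem
cp     = lambda i: c_asym / i    # assume no remote access in parallel processing

W  = {p: 1.0e6}                                # scaled, fully parallel work profile
Tp = sum(w * cp(i) for i, w in W.items())      # parallel time: sum_i W_i * c_p(i, W)

traditional = sum(W.values()) * c_seq  / Tp    # Eq. (2): inflated by the cost ratio
generalized = sum(W.values()) * c_asym / Tp    # Eq. (7): sequential time taken at the asymptotic speed
print(traditional, generalized)                # about 233.6 versus 32.0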
Parallel efficiency is defined as

Efficiency = speedup / number of processors.   (8)

The generalized efficiency can be defined similarly as

Generalized Efficiency = generalized speedup / number of processors.   (9)
By definition,

Efficiency = (W · c_p(s; W)) / (p · Σ_{i=1}^{p} W_i · c_p(i; W)),   (10)

and

Generalized Efficiency = (W · c(s; W_0)) / (p · Σ_{i=1}^{p} W_i · c_p(i; W)).   (11)
Equations (10) and (11) show the difference between the two efficiencies. Traditional speedup compares
parallel processing with the measured sequential processing. Generalized speedup compares
parallel processing with the sequential processing based on the asymptotic cost. From this point
of view, generalized speedup is a reform of traditional speedup. The following lemmas are direct
results of Eq.(7).
Lemma 1 If the uniprocessor cost c_p(s; W) is independent of problem size, traditional speedup is the same as generalized speedup.
Lemma 2 If the parallel work, W, achieves the asymptotic speed, that is, if c_p(s; W) = c(s; W_0), then the fixed-size traditional speedup is the same as the fixed-size generalized speedup.
By Lemma 1, if the simple analytic model (4) is used to analyze performance, there is no difference
between the traditional and the generalized speedup. If the problem size W is larger than the
suggested initial problem size W_0, then the single-processor speedup S_1 may not be equal to one. S_1 measures the sequential inefficiency due to the difference in memory access.
The generalized speedup is also closely related to the scalability study. Isospeed scalability
has been proposed recently in [12]. The isospeed scalability measures the ability of an algorithm-machine combination to maintain the average (unit) speed, where the average speed is defined as
the speed over the number of processors. When the system size is increased, the problem size is
scaled up accordingly to maintain the average speed. If the average speed can be maintained, we
say the algorithm-machine combination is scalable and the scalability is

ψ(p, p') = (p' · W) / (p · W'),   (12)

where W' is the amount of work needed to maintain the average speed when the system size has been changed from p to p', and W is the problem size solved when p processors were used. By
definition,

average speed = W / (p · Σ_{i=1}^{p} W_i · c_p(i; W)).
Since the sequential cost is fixed in Eq. (11), fixing average speed is equivalent to fixing generalized
efficiency. Therefore the isospeed scalability can be seen as the iso-generalized-efficiency scalability.
When the memory influence is not considered, i.e., c_p(s; W) is independent of the problem size, the iso-generalized-efficiency will be the same as the iso-traditional-efficiency. In this case, the isospeed
scalability is the same as the isoefficiency scalability proposed by Kumar [13, 2].
Lemma 3 If the sequential cost c p (s; W ) is independent of problem size or if the simple analysis
model (4) is used for speedup, the isoefficiency and isospeed scalability are equivalent to each other.
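The following Python sketch of Eq. (12), with hypothetical work values, shows how the isospeed scalability would be computed once the scaled work W' that restores the average speed on p' processors has been measured.

# Sketch of the isospeed scalability, Eq. (12).  The work values are hypothetical;
# in practice W' is obtained by scaling the problem on p' processors until the
# average (per-processor) speed matches the one measured with p processors.
def isospeed_scalability(p, W, p_prime, W_prime):
    """psi(p, p') = (p' * W) / (p * W')."""
    return (p_prime * W) / (p * W_prime)

# Example: doubling the processors needs 2.5x the work to hold the average speed.
print(isospeed_scalability(p=4, W=1.0e9, p_prime=8, W_prime=2.5e9))   # 0.8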
The following theorem gives the relation between the scalability and the fixed-time speedup.
Theorem 1 Scalability (12) equals one if and only if the fixed-time generalized speedup is unitary.
Proof: Let c(s; W_0) be as defined in Eq. (7). If scalability (12) equals 1, let W' be as defined in Eq. (12) and define W'_i similarly as W_i. We then have

p' · W = p · W'   (13)

for any number of processors p and p'. By definition, generalized speedup
With some arithmetic manipulation, we have
Similarly, we have
By Eq. (13) and the above two equations,
For fixed speed,
By equation (13),
Substituting Eq. (15) into Eq. (14), we have
For
Equation (16) is the corresponding unitary speedup when GS_1 is not equal to one. If the work W achieves the asymptotic speed, then GS_1 = 1 and Eq. (16) reduces to the unitary speedup defined in Definition 1.
If the fixed-time generalized speedup is unitary, then for any number of processors, p and p', and the corresponding problem sizes, W and W', where W' is the scaled problem size under the
fixed-time constraint, we have
and
Therefore,
The average speed is maintained. Also since
we have the equality
The scalability (12) equals one. □
The following theorem gives the relation between memory-bounded speedup and fixed-time
speedup. The theorem is for generalized speedup. However, based on Lemma 1, the result is true
for traditional speedup when uniprocessor cost is fixed or the simple analysis model is used.
Theorem 2 If problem size increases proportionally to the number of processors in memory-bounded
scaleup, then memory-bounded generalized speedup is linear if and only if fixed-time generalized
speedup is linear.
Proof: Let c(s; W_0), c_p(i; W), W, and W_i be as defined in Theorem 1. Let W' and W'' be the scaled problem sizes of fixed-time and memory-bounded scaleup, respectively, and let W'_i and W''_i be defined accordingly.
If memory-bounded speedup is linear, we have
and
for some constant a > 0. Combining the two equations, we have
By assumption, W is proportional to the number of processors available,
Substituting Eq. (18) into Eq. (17), we get the fixed-time equality:
That is, W' = W'', and the fixed-time generalized speedup is linear.
If fixed-time speedup is linear, then, following similar deductions as used for Eq. (17), we have
Applying the fixed-time equality Eq. (19) to Eq. (20), we have the reduced equation
With the assumption Eq. (18), Eq. (21) leads to
and the memory-bounded generalized speedup is linear. □
The assumption of Theorem 2 is that problem size (work) increases proportionally to the number of
processors. The assumption is true for many applications. However, it is not true for dense matrix
computation where the memory requirement is a square function of the order of the matrix and the
computation is a cubic function of the order of the matrix. For this kind of computation-intensive application, in general, memory-bounded speedup will lead to a large speedup. The following
corollaries are direct results of Theorem 1 and Theorem 2.
Corollary 1 If problem size increases proportionally to the number of processors in memory-bounded
scaleup, then memory-bounded generalized speedup is unitary if and only if fixed-time
generalized speedup is unitary.
Corollary 2 If work increases proportionally with the number of processors, then scalability (12)
equals one if and only if the memory-bounded generalized speedup is unitary.
Since uniprocessor cost varies on shared virtual memory machines, the above theoretical results
are not applicable to traditional speedup on shared virtual memory machines.
Finally, to complete our discussion of superlinear speedup, there is a new cause of superlinearity for generalized speedup. The new source of superlinear speedup is called profile shifting [11],
and is due to the problem size difference between sequential and parallel processing (see Figure
1). An application may contain different work types. While problem size increases, some work
types may increase faster than the others. When the work types with lower costs increase faster,
superlinear speedup may occur. A superlinear speedup due to profile shifting was studied in [11].
4 Experimental Results
In this section, we discuss the timing results for solving a scientific application on KSR-1 parallel
computers. We first give a brief description of the architecture and the application, and then
present the timing results and analyses.
4.1 The Machine
The KSR-1 computer discussed here is a representative of parallel computers with shared virtual
memory. Figure 3 shows the architecture of the KSR-1 parallel computer [14]. Each processor
on the KSR-1 has 32 Mbytes of local memory. The CPU is a super-scalar processor with a peak
performance of 40 Mflops in double precision. Processors are organized into different rings. The
local ring (ring:0) can connect up to 32 processors, and a higher level ring of rings (ring:1) can
contain up to 34 local rings with a maximum of 1088 processors.
If a non-local data element is needed, the local search engine (SE:0) will search the processors
in the local ring (ring:0). If the search engine SE:0 can not locate the data element within the local
ring, the request will be passed to the search engine at the next level (SE:1) to locate the data.
This is done automatically by a hierarchy of search engines connected in a fat-tree-like structure
[14, 15]. The memory hierarchy of KSR-1 is shown in Fig. 4.
Each processor has 512 Kbytes of fast subcache which is similar to the normal cache on other
parallel computers. This subcache is divided into two equal parts: an instruction subcache and a
data subcache. The 32 Mbytes of local memory on each processor is called a local cache. A local
ring (ring:0) with up to 32 processors can have 1 Gbytes total of local cache which is called Group:0
cache. Access to the Group:0 cache is provided by Search Engine:0. Finally, a higher level ring
Figure 3. Configuration of KSR-1 parallel computers. (A ring:1 connects up to 34 ring:0's; each ring:0 connects up to 32 processors.)
Figure 4. Memory hierarchy of KSR-1. (Per processor: 512 KB subcache and 32 Mbytes of local memory; Group:0 cache of up to 1 GB reached via Search Engine:0; Group:1 cache of up to 34 GB reached via Search Engine:1.)
of rings (ring:1) connects up to 34 local rings with 34 Gbytes of total local cache which is called
Group:1 cache. Access to the Group:1 cache is provided by Search Engine:1. The entire memory
hierarchy is called ALLCACHE memory by the Kendall Square Research. Access by a processor
to the ALLCACHE memory system is accomplished by going through different Search Engines as
shown in Fig. 4. The latencies for different memory locations [16] are: 2 cycles for subcache, 20
cycles for local cache, 150 cycles for Group:0 cache, and 570 cycles for Group:1 cache.
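To give a feel for how strongly the memory hierarchy penalizes non-local references, the sketch below combines the latencies quoted above with purely hypothetical fractions of references satisfied at each level.

# Back-of-the-envelope sketch: expected memory access latency on the KSR-1,
# using the per-level latencies quoted above.  The hit fractions are purely
# hypothetical and depend on problem size and reference pattern.
latency_cycles = {"subcache": 2, "local cache": 20, "Group:0": 150, "Group:1": 570}

def expected_latency(fractions):
    """fractions: level -> fraction of references satisfied at that level (sums to 1)."""
    assert abs(sum(fractions.values()) - 1.0) < 1e-9
    return sum(f * latency_cycles[level] for level, f in fractions.items())

in_core = {"subcache": 0.90, "local cache": 0.10, "Group:0": 0.00, "Group:1": 0.00}
spilled = {"subcache": 0.80, "local cache": 0.05, "Group:0": 0.15, "Group:1": 0.00}
print(expected_latency(in_core), expected_latency(spilled))   # 3.8 vs. 25.1 cycles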
4.2 The Application
Regularized least squares problems (RLSP) [17] are frequently encountered in scientific and engineering
applications [18]. The major work is to solve the equation

(A^T A + α^2 I) x = A^T b   (22)
by orthogonal factorization schemes (Householder Transformations and Givens rotations). Efficient
Householder algorithms have been discussed in [19] for shared memory supercomputers, and in [20]
for distributed memory parallel computers.
Note that Eq. (22) can also be written as

B^T B x = B^T d,  where  B = [A; αI] (A stacked on top of αI)  and  d = [b; 0],   (23)

or as the equivalent least squares problem

min_x ‖ B x - d ‖_2,   (24)
so that the major task is to carry out the QR factorization for matrix B which is neither a complete
full matrix nor a sparse matrix. The upper part is full and the lower part is sparse (in diagonal
form). Because of the special structure in B, not all elements in the matrix are affected in a
particular step. Only a submatrix of B will be transformed in each step. If the columns of the
active submatrix (rows i through 2n, columns i through n) at step i are denoted by b_j^(i), then the Householder Transformation can be described as:
Householder Transformation
Initialize B^(1) = B
for i = 1, 2, . . . , n
  1:  α_i = -sign(b_{ii}^(i)) · ‖ b_i^(i) ‖_2
  2:  v_i = b_i^(i) - α_i e_1
  3:  β_j = 2 (v_i^T b_j^(i)) / (v_i^T v_i),   j = i, . . . , n
  4:  b_j^(i+1) = b_j^(i) - β_j v_i,   j = i, . . . , n
end for
The calculation of the β_j's and the updating of the b_j^(i)'s can be done in parallel for different indices j.
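For concreteness, a small NumPy sketch of the Householder reduction in the standard textbook formulation is given below; it is not the authors' KSR Fortran code, and the augmented matrix built in the usage line follows the reconstructed Eqs. (23)-(24). The loop over columns j is the part that the paper parallelizes.

import numpy as np

def householder_qr_inplace(B):
    """Reduce B (m x n, m >= n) to upper-triangular form by Householder steps.
    In step i, computing beta_j and updating column j is independent across j,
    which is the column parallelism described above."""
    m, n = B.shape
    for i in range(n):
        x = B[i:, i]
        s = 1.0 if x[0] >= 0 else -1.0
        alpha = -s * np.linalg.norm(x)         # step 1
        v = x.copy()
        v[0] -= alpha                          # step 2: v = x - alpha * e_1
        vtv = v @ v
        if vtv == 0.0:                         # column is already zero below the diagonal
            continue
        for j in range(i, n):                  # parallelizable over j
            beta = 2.0 * (v @ B[i:, j]) / vtv  # step 3
            B[i:, j] -= beta * v               # step 4
    return B

# Usage: QR-factorize an augmented matrix B = [A; alpha*I] built from hypothetical data.
A, alpha = np.random.rand(8, 4), 0.1
R = np.triu(householder_qr_inplace(np.vstack([A, alpha * np.eye(4)]))[:4, :])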
4.3 Timing Results
The numerical experiments reported here were conducted on the KSR-1 parallel computer installed
at the Cornell Theory Center. There are 128 processors altogether on the machine. During the
period when our experiments were performed, however, the computer was configured as two stand-alone
machines with 64 processors each. Therefore, the numerical results were obtained using less
than 64 processors.
Figure 5 shows the traditional fixed-size speedup curves obtained by solving the regularized least squares problem with different matrix sizes n. The matrix is of dimensions 2n × n. We can
see clearly that as the matrix size n increases, the speedup gets better and better. For the case when n = 2048, the speedup is 76 on 56 processors. Although it is well known that on most
parallel computers, the speedup improves as the problem size increases, what is shown in Fig. 5 is
certainly too good to be a reasonable measurement of the real performance of the KSR-1.
The problem with the traditional speedup is that it is defined as the ratio of the sequential
time to the parallel time used for solving the same fixed-size problem. The complex memory
hierarchy on the KSR-1 makes the computational speed of a single processor highly dependent on
the problem size. When the problem is so big that not all data of the matrix can be put in the local
memory (32 Mbytes) of the single computing processor, part of the data must be put in the local
memory of other processors on the system. These data are accessed by the computing processor
through Search Engine:0. As a result, the computational speed on a single processor slows down
significantly due to the high latency of Group:0 cache. The sustained computational speed on a
single processor is 5.5 Mflops, 4.5 Mflops and 2.7 Mflops for problem sizes 1024, 1600 and 2048
respectively. On the other hand, with multiple processors, most of the data needed are in the local
memory of each processor, so the computational speed suffers less from the high Group:0 cache
Figure 5. Fixed-size (Traditional) Speedup on KSR-1 (speedup versus number of processors).
latency. Therefore, the excellent speedups shown in Fig. 5 are the results of significant uniprocessor
performance degradation when a large problem is solved on a single processor.
Figure 6 shows the measured single-processor speed as a function of problem size n. The Householder
Transformation algorithm given before was implemented in KSR Fortran. The algorithm
has a numerical complexity of 26.5n, and the speed is calculated as the number of floating-point operations divided by t, where t is the CPU time used to finish the computation.
As can be seen from Fig. 6, the three segments represent significantly different speeds for
different matrix sizes. When the whole matrix can be fit into the subcache, the performance is
close to 7 Mflops. The speed decreases to around 5.5 Mflops when the matrix can not be fit into
the subcache, but can still be accommodated in the local cache. Note, however, that when the matrix is so big that access to Group:0 cache through Search Engine:0 is needed, the performance degrades
significantly and there is no clear stable performance level as can be observed in the other two
segments. This is largely due to the high Group:0 cache latency and the contention for the Search
Engine which is used by all processors on the machine. Therefore, the access time of Group:0 cache
is less uniform as compared to that of the subcache and local cache.
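As a rough illustration of how the asymptotic speed might be read off such measurements, the sketch below picks the speed of the largest problem that still fits in the 32 Mbytes of local memory; the sample speeds and the memory model (a double-precision 2n x n matrix only) are assumptions, a crude stand-in for reading Fig. 6 by eye.

# Sketch: choosing the asymptotic uniprocessor speed from measured speeds.
# Take the speed of the largest problem that still runs out of local memory.
# The sample points (Mflops) and the memory model are hypothetical.
speed_mflops = {512: 6.9, 1024: 5.5, 1200: 5.5, 1600: 4.5, 2048: 2.7}
fits_in_local_memory = lambda n: 2 * n * n * 8 <= 32 * 2**20   # 2n x n doubles vs. 32 Mbytes

candidates = [n for n in speed_mflops if fits_in_local_memory(n)]
asymptotic = speed_mflops[max(candidates)]
print(asymptotic)   # 5.5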
To take the difference in single-processor speeds for different problem sizes into consideration,
we have to use the generalized speedup to measure the performance of multiple processors on
the KSR-1. As can be seen from the definition of Eq. (6), the generalized speedup is defined
as the ratio of the parallel speed to the asymptotic sequential speed, where the parallel speed is
based on a scaled problem. In our numerical tests, the parallel problem was scaled in a memory-
Figure 6. Speed Variation of Uniprocessor Processing on KSR-1 (uniprocessor speed versus the order of the matrices; regions: subcache, local cache, Group:0 cache).
bounded fashion as the number of processors increases. The initial problem was selected based
on the asymptotic speed (5.5 Mflops from Fig. 6) and then scaled proportionally according to the
number of processors, i.e., with p processors, the problem is scaled to a size that will fill M × p
Mbytes of memory, where M is the memory required by the unscaled problem. Figure 7 shows
the comparisons of the traditional scaled speedup and the generalized speedup. For the traditional
scaled speedup, the scaled problem is solved on both one and p processors, and the value of the
speedup is calculated as the ratio of the time on one processor to that on p processors. For
the generalized speedup, the scaled problem is solved only on multiple processors, not on a single
processor. The value of the speedup is calculated using Eq. (6), where the asymptotic speed is used
for the sequential speed. Figure 7 clearly shows that the generalized speedup gives a much more reasonable performance measurement on the KSR-1 than does the traditional scaled speedup. With
the traditional scaled speedup, the speedup is above 20 with only 10 processors. This excellent
superlinear speedup is a result of the severely degraded single-processor speed, rather than of the
perfect scalability of the machine and the algorithm.
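The two curves in Fig. 7 are obtained from quantities like those in the sketch below; the timings and flop counts shown are hypothetical placeholders, not the measured KSR-1 numbers.

# Sketch of how the two speedup curves are computed.  The timings and flop
# counts are hypothetical placeholders.
ASYMPTOTIC_SPEED = 5.5e6   # flops/s: asymptotic uniprocessor speed from Fig. 6

def traditional_scaled_speedup(t_scaled_on_1, t_scaled_on_p):
    # the scaled problem is timed on one processor and on p processors
    return t_scaled_on_1 / t_scaled_on_p

def generalized_speedup(scaled_flops, t_scaled_on_p):
    # Eq. (6): parallel speed over the asymptotic sequential speed; the scaled
    # problem is never run on a single processor
    return (scaled_flops / t_scaled_on_p) / ASYMPTOTIC_SPEED

print(traditional_scaled_speedup(t_scaled_on_1=400.0, t_scaled_on_p=20.0))   # 20.0
print(generalized_speedup(scaled_flops=1.1e9, t_scaled_on_p=20.0))           # 10.0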
Finally, Table 1 gives the measured isospeed scalability (see Eq. (12)) of solving the regularized
least squares problem on a KSR-1 computer. The speed to be maintained on different number of
processors is 3.25 Mflops, which is 60% of the asymptotic speed of 5.5 Mflops. The size of the 2n × n matrix is increased as the number of processors increases, starting from a single processor and growing with the number of processors used. One may notice from Table 1 that ψ(2, 4) > ψ(1, 2), which means that the machine-algorithm pair scales better from 2 processors to 4 processors than it does
Figure 7. Comparison of Generalized and Traditional Speedup on KSR-1 (speedup versus number of processors; generalized and traditional speedup curves).
from one processor to two processors. This can be explained by the fact that on one processor,
the matrix is small enough that all data can be accommodated in the subcache. Once all the data
is loaded into the subcache, the whole computation process does not need data from local cache
and Group:0 cache. Therefore, the data access time on one processor is significantly shorter than
that on two processors, which involves the subcache, local cache, and Group:0 cache to pass messages.
As a result, significant increase in the work W is necessary in the case of two processors to offset
the extra data access time involving different memory hierarchies. This is the major reason for the
low ψ(1, 2) value. When the number of processors increases from 2 to 4, the data access pattern
is the same for both cases with subcache, local cache and Group:0 cache all involved, so that the
work W does not need to be increased significantly to offset the extra communication cost when
going from 2 processors to 4 processors. It is interesting to notice that, while the scalability of the
RLSP-KSR1 combination is relatively low, the data in Table 1 has a similar decreasing pattern
as the measured and computed scalability of Burg-nCUBE, SLALOM-nCUBE, Burg-MasPar and
SLALOM-MasPar combinations [12]. The scalabilities are all decreasing along columns and have
some irregular behavior at ψ(1, 2) and ψ(2, 4).
Interested readers may wonder how the measured scalability is related to the measured generalized
speedup given in Fig. 7. While Fig. 7 demonstrates a nearly linear generalized speedup, the
corresponding scalability given in Table 1 is far from ideal (the ideal scalability would be unity).
The low scalability is expected. Recall that the scaled speedup given in Fig. 7 is memory-bounded
speedup [6]. That is, when the number of processors is doubled, the usage of memory is also doubled.
Table 1. Measured Scalability of the RLSP-KSR1 combination.
As a result, the number of elements in the matrix is increased by a factor of 2. Corollary 2 shows
that if work W increases linearly with the number of processors, then unitary memory-bounded
speedup will lead to ideal scalability. For the regularized least squares application, however, the
work W is a cubic function of the matrix size n. When the memory usage is doubled, the number
of floating point operations is increased by a factor of eight. If a perfect generalized speedup is
achieved from p to p' = 2p, the average speed at p and p' should be the same. By Eq. (12) we then have

ψ(p, 2p) = (2p · W) / (p · 8W) = 0.25.
With the measured speedup being a little lower than unitary, as shown in Fig. 7, a scalability of less than 0.25 is expected. Table 1 confirms this relation, except at ψ(2, 4) for the reason pointed out
earlier. The scalability in the last column is noticeably lower than in the other columns. This is because
when 56 nodes are involved in computations, communication has to pass through ring:1, which
slows down the communication significantly.
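A one-line check of the 0.25 figure, using Eq. (12) with the factor-of-eight work growth argued above (the processor counts below are arbitrary):

# Expected scalability when doubling the processors doubles the memory and,
# per the argument above, multiplies the flop count by eight (Eq. (12)).
def psi(p, W, p_prime, W_prime):
    return (p_prime * W) / (p * W_prime)

print(psi(p=8, W=1.0, p_prime=16, W_prime=8.0))   # 0.25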
Computation-intensive applications have often been used to achieve high flop rates. The RLSP application is a computation-intensive application. Table 1 shows that isospeed scalability does not give credit for computation intensity alone. Computation-intensive applications
may achieve a high speed on multiple processors, but the initial speed is also high. The isospeed
scalability measures the ability to maintain the speed, rather than to achieve a particular speed.
The implementation was conducted on a KSR-1 shared virtual memory machine. The theoretical
and analytical results given in Section 2 and Section 3, however, are general and can be applied
on different parallel platforms. For instance, for Intel Paragon parallel computers, where virtual
memory is supported to swap data in and out from memory to disk, we expect that inefficient sequential
processing will cause similar superlinear (traditional) speedup as demonstrated on KSR-1.
For distributed-memory machines which do not support virtual memory, such as the CM-5, traditional speedup has another drawback. Due to memory constraints, scaled problems often cannot be
solved on a single processor. Therefore, scaled speedup is unmeasurable. Defining the asymptotic speed as in Section 3, the generalized speedup can be applied to this kind of distributed-memory machine to measure scalable computations. Generalized speedup is defined as parallel
speed over sequential speed. Given a reasonable initial sequential speed, it can be used on any
parallel platforms to measure the performance of scalable computations.
5 Conclusion
Since the scaled-up principle was proposed in 1988 by Gustafson and other researchers at Sandia
National Laboratory [21], the principle has been widely used in performance measurement of parallel
algorithms and architectures. One difficulty of measuring scaled speedup is that very large problems have to be solved on a uniprocessor, which is very inefficient if virtual memory is supported, or
is impossible otherwise. To overcome this shortcoming, generalized speedup was proposed [1].
Generalized speedup is defined as parallel speed over sequential speed and does not require solving
large problems on a uniprocessor. The study [1] emphasized the fixed-time generalized speedup,
sizeup. To meet the need of the emerging shared virtual memory machines, the generalized speedup,
particularly its implementation issues, has been carefully studied in the current research. It has been shown
that traditional speedup is a special case of generalized speedup, and, on the other hand, generalized
speedup is a reform of traditional speedup. The main difference between generalized speedup and
traditional speedup is how the uniprocessor efficiency is defined. When the uniprocessor speed is fixed, these two speedups are the same. Extending these results to the scalability study, we have found that
the difference between isospeed scalability [12] and isoefficiency scalability [13] is also due to the
uniprocessor efficiency. When the uniprocessor speed is independent of the problem size, these
two proposed scalabilities are the same. As part of the performance study, we have shown that
an algorithm-machine combination achieves a perfect scalability if and only if it achieves a perfect
speedup. An interesting relation between fixed-time and memory-bounded speedups is revealed.
Seven causes of superlinear speedup are also listed.
A scientific application has been implemented on a Kendall Square KSR-1 shared virtual memory
machine. Experimental results show that uniprocessor efficiency is an important issue for
virtual memory machines, and that the asymptotic speed provides a reasonable way to define the
uniprocessor efficiency.
The results in this paper on shared virtual memory can be extended to general parallel com-
puters. Since uniprocessor efficiency is directly related to parallel execution time, scalability, and
benchmark evaluations, the range of applicability of the uniprocessor efficiency study is wider than
that of speedups alone. Uniprocessor efficiency might be explored further in a number of contexts.
Acknowledgement
The authors are grateful to the Cornell Theory Center for providing access to its KSR-1 parallel
computer, and to the referees for their helpful comments on the revision of this paper.
--R
"Toward a better parallel performance metric,"
Advanced Computer Architecture: Parallelism
"Solution of partial differential equations on vector and parallel com- puters,"
"Validity of the single-processor approach to achieving large scale computing capabilities,"
"Scalable problems and memory-bounded speedup,"
"Modeling speedup(n) greater than n,"
"Parallel efficiency can be greater than unity,"
"Inflated speedups in parallel simulations via malloc(),"
"Performance prediction of scalable computing: A case study,"
"The design of a scalable, fixed-time computer benchmark,"
"Scalability of parallel algorithm-machine combinations,"
"Isoefficiency: Measuring the scalability of parallel algorithms and architectures,"
"KSR parallel programming."
"Fat-trees: Universal networks for hardware-efficient supercomputing,"
"KSR technical summary."
Solution of Ill-posed Problems
"GPST inversion algorithm for history matching in 3-d 2-phase simulators,"
Solving Linear Systems on Vector and Shared Memory Computers.
"Distributed orthogonal factorization: Givens and Householder algorithms,"
"Development of parallel methods for a 1024- processor hypercube,"
--TR
--CTR
Prasad Jogalekar , Murray Woodside, Evaluating the Scalability of Distributed Systems, IEEE Transactions on Parallel and Distributed Systems, v.11 n.6, p.589-603, June 2000
Xian-He Sun, Scalability versus execution time in scalable systems, Journal of Parallel and Distributed Computing, v.62 n.2, p.173-192, February 2002
Xian-He Sun , Jianping Zhu, Performance Prediction: A Case Study Using a Scalable Shared-Virtual-Memory Machine, IEEE Parallel & Distributed Technology: Systems & Technology, v.4 n.4, p.36-49, December 1996
Xian-He Sun , Wu Zhang, A Parallel Two-Level Hybrid Method for Tridiagonal Systems and Its Application to Fast Poisson Solvers, IEEE Transactions on Parallel and Distributed Systems, v.15 n.2, p.97-106, February 2004
Xian-He Sun , Mario Pantano , Thomas Fahringer, Integrated Range Comparison for Data-Parallel Compilation Systems, IEEE Transactions on Parallel and Distributed Systems, v.10 n.5, p.448-458, May 1999 | shared virtual memory;speedup;scalability;performance evaluation;performance metrics;high performance computing;parallel processing |
629399 | Square Meshes Are Not Optimal for Convex Hull Computation. | AbstractRecently it has been noticed that for semigroup computations and for selection rectangular meshes with multiple broadcasting yield faster algorithms than their square counterparts. The contribution of this paper is to provide yet another example of a fundamental problem for which this phenomenon occurs. Specifically, we show that the problem of computing the convex hull of a set of n sorted points in the plane can be solved in ${\rm O}(n^{{\textstyle{1 \over 8}}}\, {\rm log}^{{\textstyle{3 \over 4}}}\,n)$ time on a rectangular mesh with multiple broadcasting of size$$n^{{\textstyle{3 \over 8}}}\,{\rm log}^{{\textstyle{1 \over 4}}}\,n\times {\textstyle{{n^{{ \textstyle{5 \over 8}}}} \over {{\rm log}^{{\textstyle{1 \over 4}}}\,n}}}.$$The fastest previously-known algorithms on a square mesh of size $\sqrt n\times \sqrt n$ run in ${\rm O}(n^{{\textstyle{1 \over 6}}})$ time in case the n points are pixels in a binary image, and in ${\rm O}(n^{{\textstyle{1 \over 6}}}\,{\rm log}^{{\textstyle{2 \over 3}}}\,n).$ time for sorted points in the plane. | Introduction
One of the fundamental heuristics in pattern recognition, image processing, and robot navigation,
involves approximating real-world objects by convex sets. For obvious reasons, one is typically
interested in the convex hull of a set S of points, defined as the smallest convex set that contains S
[39, 40]. In robotics, for example, the convex hull is central to path planning and collision avoidance
tasks [26, 30]. In pattern recognition and image processing the convex hull appears in clustering,
and computing similarities between sets [4, 16, 20]. In computational geometry, the convex hull is
often a valuable tool in devising efficient algorithms for a number of seemingly unrelated problems
[39, 40]. Being central to so many application areas, the convex hull problem has been extensively
studied in the literature, both sequentially and in parallel [3, 4, 16, 24, 27, 39, 40].
Typical processing needs found today in industrial, medical, and military applications routinely
involve handling extremely large volumes of data. The enormous amount of data involved in these
applications, combined with real-time processing requirements have suggested massively parallel
architectures as the only way to achieve the level of performance required for many time- and
safety-critical tasks.
Amongst the massively parallel architectures, the mesh has emerged as one of the natural
platforms for solving a variety of problems in pattern recognition, image processing, computer
Work supported by NASA grant NAS1-19858, by NSF grant CCR-9407180 and by ONR grant N00014-95-1-0779
Address for Correspondence: Prof. Stephan Olariu, Department of Computer Science, Old Dominion
University, Norfolk, VA 23529-0162, U.S.A. email: [email protected]
y Department of Computer Science, Southern Illinois University, Edwardsville, IL 62026
z Department of Computer Science, Old Dominion University, Norfolk, VA 23529
x Department of Mathematics and Computer Science, Elizabeth City State University, Elizabeth City, NC 27909
vision, path planning, and computational geometry [3, 27, 33, 35]. In addition, due to its simple
and regular interconnection topology, the mesh is well suited for VLSI implementation [6, 42]. One
of the drawbacks of the mesh stems from its large computational diameter which makes the mesh
architecture less attractive in non-spatially organized contexts where the computation involves data
items spread over processing elements far apart [21].
A popular solution to this problem is to enhance the mesh architecture by the addition of
various types of bus systems [5, 14, 18, 24, 28, 32]. Early solutions involving the addition of one or
more global buses, shared by all the processors in the mesh, have been implemented on a number of
massively parallel machines [1, 6, 14]. Yet another popular way of enhancing the mesh architecture
involves endowing every row with its own bus. The resulting architecture is referred to as mesh
with row buses and has received a good deal of attention in the literature [14, 18, 22]. Recently, a
more powerful architecture, referred to as mesh with multiple broadcasting, has been obtained by
adding one bus to every row and to every column in the mesh [24, 38]. The mesh with multiple
broadcasting has proven to be feasible to implement in VLSI, and is used in the DAP family of
computers [38].
Being of theoretical interest as well as commercially available, the mesh with multiple broadcasting
has attracted a great deal of attention. In recent years, efficient algorithms to solve a
number of computational problems on meshes with multiple broadcasting and some of their variants
have been proposed in the literature. These include image processing [25, 29, 38], visibility [7],
computational geometry [10, 12, 11, 15, 24, 36, 37], semigroup computations [5, 13, 18, 24], sorting
[8], multiple-searching [10], optimization [19], and selection [9, 18, 24], among others.
In particular, in [24] it is shown that on such a mesh of size
n \Theta
semigroup operations
can be performed in O(n 1
Bar-Noy and Peleg [5] as well as Chen et al. [18] have
shown that semigroup computations can be computed faster if rectangular meshes with multiple
broadcasting are used instead of square ones. Specifically, they have shown that on a mesh with
multiple broadcasting of size n 3
8 \Theta n 5
8 , semigroup computations can be performed in O(n 1
A similar phenomenon occurs in selection. In [24], it has been shown that the task of selecting
the median of n items on a mesh with multiple broadcasting of size p
n \Theta
takes O(n 1
6 log 2
3 n) time.
Recently Chen et al. [19] have shown that computing the median of n numbers takes O(n 1
8 log n)
time on a mesh with multiple broadcasting of size n 3
8 \Theta n 5
8 . More recently, Bhagavathi et al. [9]
have shown that the problem can be solved even faster: specifically, they exhibited a selection
algorithm running in O(n 1
8 log 3
4 n) time on a mesh with multiple broadcasting of size n 3
Recently, a number of papers have reported efficient convex hull computations on massively
parallel architectures [3, 15, 17, 23, 33, 34, 36]. For example, Miller and Stout [33] proposed convex
hull algorithms for sorted points on pyramids, trees, mesh of trees, and the reconfigurable mesh.
They also provided algorithms for sorted and unsorted points on the hypercube. Chazelle [17]
solved a number of geometric problems including the convex hull computation on a systolic chip.
Miller and Stout [34] and Holey and Ibarra [23] solved the convex hull problem on meshes.
The purpose of this work is to show that just like semigroup computations and selection, the
task of computing the convex hull of a set of n points in the plane sorted by their x coordinates
can be performed much faster on suitably chosen rectangular meshes than on square ones. The
fastest previously-known algorithms solve the problem in O(n 1
in the special case where the
n points are pixels in a binary image, and in O(n 1
6 log 2
3 n) time for n sorted points in the plane [24],
both on a mesh with multiple broadcasting of size p
n \Theta
n.
Our contribution is to exhibit an algorithm that finds the convex hull of a set of n points in the
plane sorted by increasing x-coordinate in O(n 1
8 log 3
4 n) time on a mesh with multiple broadcasting
of size n 3
. Our algorithm offers yet another example of a fundamental computational
problem for which rectangular meshes with multiple broadcasting yield faster algorithms than their
square counterparts, while keeping the number of processors at the same level.
The remainder of the paper is organized as follows: section 2 discusses the computational model;
section 3 reviews basic geometric results that are key ingredients of our convex hull algorithm, along
with their implementation on meshes with multiple broadcasting; section 4 presents the details of
the proposed algorithm; finally, section 5 summarizes our findings and proposes a number of open
questions.
2 The Mesh with Multiple Broadcasting
A mesh with multiple broadcasting of size M \Theta N consists of MN identical processors positioned
on a rectangular array overlaid with a bus system. In every row of the mesh the processors are
connected to a horizontal bus; similarly, in every column the processors are connected to a vertical
bus as illustrated in Figure 1. We note that these buses are static and cannot be dynamically
reconfigured, in response to computational needs, as is the case in reconfigurable architectures
[27, 28, 32].
Figure
1: A mesh with multiple broadcasting of size 4 \Theta 5
The processor P (i; j) is located in row i and column j (1 -
in the north-west corner of the mesh. Every processor is connected to its four neighbors, provided
they exist. Throughout this paper we assume that the mesh with multiple broadcasting operates in
SIMD mode: in each time unit, the same instruction is broadcast to all processors, which execute
it and wait for the next instruction. Each processor is assumed to know its coordinates within the
mesh and to have a constant number of registers of size O(log MN ); in unit time, every processor
performs some arithmetic or boolean operation, communicates with one of its neighbors using a
local link, broadcasts a value on a bus or reads a value from a specified bus. These operations
involve handling at most O(log MN) bits of information.
For practical reasons, only one processor is allowed to broadcast on a given bus at any one
time. By contrast, all the processors on the bus can simultaneously read the value being broadcast.
In accord with other researchers [5, 14, 18, 24, 28, 32, 38], we assume that communications along
buses take O(1) time. Although inexact, recent experiments with the DAP [38], the YUPPIE
multiprocessor array system [32], and the PPA [31] seem to indicate that this is a reasonable
working hypothesis.
3 Preliminaries
The purpose of this section is to review, in the context of meshes with multiple broadcasting, a
number of basic geometric results that are key ingredients in our convex hull algorithm. Through-
out, we assume an arbitrary set S of n points in the plane sorted by increasing x-coordinate. We
assume that the points in S are in general position, with no three collinear and no two having the
same x or y coordinates.
The convex hull of a set of planar points is the smallest convex polygon containing the given set.
Given a convex polygon stand for the points of P with the smallest
and largest x-coordinate, respectively. It is customary (see [39, 40] for an excellent discussion) to
refer to the chain as the upper hull of P and to the chain as the lower
hull as illustrated in Figure 2.Upper Hull
Lower Hull
Figure
2: Illustrating upper and lower hulls
Our strategy involves computing the upper and lower hulls of S. We only describe the computation
of the upper hull, since computing the lower hull is perfectly similar. To be in a position to
make our algorithmic approach precise, we need to develop some terminology and to solve simpler
problems that will be instrumental in the overall solution.
Consider the upper hull P of S. A sample of P is simply a subset of points in P enumerated in
the same order as those in P . In the remainder of this paper we shall distinguish between points
that belong to an upper hull from those that do not. Specifically, the points that are known to
belong to the upper hull will be referred to as vertices. This terminology is consistent with [39] and
a i
a i-1
a
Figure
3: Convexity guarantees that qa i cuts a unique pocket
be an upper hull and let q be an arbitrary point outside P . A line qp i is
said to be the supporting line to P from q if the interior of P lies in one halfplane determined by qp i .
The first problem that needs to be addressed involves determining the supporting line to an upper
hull from a point. For this purpose consider an arbitrary sample
of P . The sample A partitions P into s pockets A 1 , A 2 , such that pocket A i involves the
vertices of P lying between a i\Gamma1 and a i (we assume that a i\Gamma1 belongs to A i and that a i does not).
Let oe be the size of the largest pocket and let t(oe) be the time needed to make information about q
available to all the vertices in the largest pocket in P . For further reference, we state the following
result that holds for meshes with row buses. (It is important to note that a mesh with multiple
broadcasting becomes a mesh with row buses if the column buses are ignored.)
Lemma 3.1. The supporting line to an upper hull from a point can be determined on a mesh with
row buses in time t(oe).
Proof. We assume, without loss of generality, that q lies to the left of P , that is, x(q)
and that no three points in P [ fqg are collinear. To begin, we compute a supporting line from q
to A, and then we extend this solution to obtain the desired supporting line to P . To see how the
computation proceeds, assume that both q and A are stored by the processors in row i of a mesh
with row buses. Now q broadcasts its coordinates on the bus in row i. Exactly one processor in row
detects that both its neighbors in A are to the same side of line emanating from q and passing
through the vertex of A it contains. Referring to Figure 3, assume that the supporting line from q
touches A at a i . In case the line qa i is not a supporting line to P , convexity guarantees that exactly
one of the pockets A i and A i+1 is intersected by the line qa i . One more comparison determines
the pocket intersected by the line qa i . Assume, without loss of generality that pocket A i+1 is the
one intersected. To compute the supporting line to P , we only need compute the supporting line
to A i+1 from q. Once the vertices in A i+1 are informed about the coordinates of q, exactly one
will determine that it achieves the desired supporting line. Therefore, the entire computation can
be performed in the time needed to make information about q available to all the vertices in the
largest pocket in P . By assumption, this is bounded by t(oe).U
Figure
4: Illustrating the supporting line of two upper hulls
The supporting line 1 of two upper hulls U and V is the (unique) line -(U; V ) with the following
properties: (1) -(U; V ) is determined by a pair of vertices of U and V , and (2) all the vertices of
U and V lie in the same halfplane determined by -(U; V ). Refer to Figure 4 for an illustration.
The second problem that is a key ingredient in our convex hull algorithm involves computing
the supporting line of two upper hulls l ), with all the
vertices of U to the left of V . We assume that U and V are stored in row i of a mesh with multiple
broadcasting. We further assume that the vertices of U and V know their rank within the hull of
which they are a part, as well as the coordinates of their left and right neighbors (if any) on that
upper hull.
For further reference we state the following result. Again, this result holds for meshes with row
buses, since no column buses are used in the corresponding algorithm.
Lemma 3.2. The task of computing the supporting line of two separable upper hulls U and V
stored in one row of a mesh with row buses can be performed in O(log minfj U
Proof. To begin, the processor storing vertex u kbroadcasts the coordinates of u khorizontally,
along the bus in row i. Every processor that holds a vertex v j of V checks whether both its left
and right neighbors are below the line determined by u kand v j . A processor
detecting this condition, broadcasts the coordinates of v j back to the sender. In other words, we
Also referred to as a common tangent.
U
Figure
5: Approximately half of the vertices are eliminated as shown
have detected a supporting line to V from u k. If this is a supporting line to U , then we are
done. Otherwise, convexity guarantees that half of the vertices in U are eliminated from further
consideration as illustrated in Figure 5. (In Figure 5 the vertices that are eliminated are hashed.)
This process continues for at most dlog j U je iterations, as claimed.
Next, we address the problem of finding the supporting line of two upper hulls
and separable in the x direction, with P lying to the left of Q. This time, we
assume a mesh with row buses of size y \Theta 2z with the vertices of P stored in column-major order in
the first z columns and with the vertices of Q stored in column-major order in the last z columns
of the platform. Notice that we cannot apply the result of Lemma 3.2 directly: there is simply not
enough bandwidth to do so. Instead, we shall solve the problem in two stages as we are about to
describe.
Consider samples
The two samples determine pockets A 1 , A 2 in P and Q, respectively.
Referring to Figure 6, let the supporting line -(A; B) of A and B be achieved by a i and b j , and let
the supporting line -(P; Q) of P and Q be achieved by p u and q v . The following technical result
has been established in [2].
Proposition 3.3. At least one of the following statements is true:
(c)
Proposition 3.3 suggests a simple algorithm for computing the supporting line of two upper
hulls P and Q with the properties mentioned above. Recall that we assume a mesh with row buses
of size y \Theta 2z with the vertices of P and Q stored in column-major order in the first and last z
columns, respectively, and that the samples A and B will be chosen to contain the topmost hull
a i
a
A i+2
Figure
Illustrating Proposition 3.3
vertex (if any) in every column of the mesh. For later reference we note that this ensures that no
pocket contains more than 2y vertices. We now state the following result that holds for meshes
with row buses.
Lemma 3.4. The task of computing the supporting line of two separable upper hulls stored in
column-major order in a mesh with row buses of size y \Theta 2z takes O(y log z) time.
Proof. By Lemma 3.2, computing the supporting line of A and B takes at most O(log z) time. As
before, let the supporting line -(A; B) of A and B be achieved by a i and b j .
Detecting which of the conditions (a)-(d) in Proposition 3.3 holds is easy. For example, condition
(b) holds only if p u lies to the right of a i and to the left of a i+1 . To check (b), the supporting
lines ffi and ffi 0 from a i and a i+1 to Q are computed in O(y) time using Lemma 3.1. Once these
supporting lines are available, the processor holding a i+1 detects in constant time whether the left
neighbor of a i+1 in P lies above ffi 0 . Similarly, the processor holding a i checks whether the right
neighbor of a i in P lies above ffi. It is easy to confirm that p u belongs to A i+1 if and only if both
these conditions hold. The other conditions are checked similarly.
Suppose, without loss of generality, that (b) holds. Our next task is to compute a supporting
line for A i+1 and Q. This is done in two steps as follows. First, the supporting line between A i+1
and B is computed. The main point to note is that in order to apply Lemma 3.2, the vertices in
pocket A i+1 have to be moved to one row (or column) of the mesh. Our way of defining samples
guarantees that this task takes O(y) time. Second, convexity guarantees that if the supporting line
of A i+1 and B is not a supporting line for P and Q, then in O(1) time we can determine a pocket
s such that the supporting line of A i+1 and B s is the desired supporting line. (In Figure 6, B s
is B j+1 .) Therefore, Lemma 3.1, Lemma 3.2, and Proposition 3.3 combined imply that computing
the supporting line of P and Q takes O(y log z) time, as claimed.
4 The Algorithm
The purpose of this section is to present the details of a general convex hull algorithm. Consider
a mesh R with multiple broadcasting of size M \Theta N with M - N . The input to the algorithm is
an arbitrary set S of n points in the plane sorted by increasing x-coordinate. As usual, we assume
that every point in the set is specified by its cartesian coordinates. The set S is stored in R, one
point per processor, in a way that we are about to explain. For the purpose of our convex hull
algorithm, we view the dimensions M and N of R as functions of n, initially only subject to the
constraint
Our goal is to determine the values of M and N such that the running time of the algorithm is
minimized, over all possible choices of rectangular meshes containing n processing elements.
During the course of the algorithm, the mesh R will be viewed as consisting of submeshes in
a way that suits various computational needs. Occasionally, we shall make use of algorithms that
were developed for meshes with no broadcasting feature. In particular, we make use of an optimal
convex hull algorithm [3], that we state below.
Proposition 4.1. The convex hull of a set on n points in the plane can be computed in \Theta( p
n)
time on a mesh-connected computer of size
n \Theta
n.
The following follows immediately from Proposition 4.1.
Corollary 4.2. The convex hull of a set on n points in the plane can be computed in O(maxfa; bg)
time on a mesh-connected computer of size a \Theta b, with a
Comment: A very simple information transfer argument shows that the task of computing the
convex hull of a set of unsorted points in the plane stored one per processor in a mesh with multiple
broadcasting of size p
n \Theta
must take \Omega\Gamma p
n) time. Thus, for the convex hull problem the mesh
with multiple broadcasting does not do "better" than the unenhanced mesh. This negative results
justifies looking at the problem of computing the convex hull of a sorted set of n points in the plane.
While the diameter of the unenhanced mesh still forces any algorithm to take \Omega\Gamma p
n) time, one can
do better on the mesh with multiple broadcasting. In fact, one does even better if the platform is
skewed. How much better, is just what we set out to explore in this paper.
We start out by setting our overall target running time to O(x), with x to be determined
later, along with M and N . Throughout the algorithm, we view the original mesh R as consisting
as a set of submeshes R j
y ) of size y \Theta N each, with the value of y (y - x) to be
specified later, along with that of the other parameters. We further view each R j as consisting of
a set of submeshes R j;k
x ) of size y \Theta x. It is easy to confirm that the
processors P (r; c) with (j
The input is distributed in block row-major order [18], one point per processor, as described
next: the points in each R j are stored in column-major order, while the points stored in R j occur,
in sorted order, before the points stored in R t whenever t. It is easy to see that, in this setup,
the points in R j;k occur in column-major order.
To avoid tedious details we assume, without loss of generality, that the points in S are in general
position, with no three collinear and no two having the same x or y coordinates. We only describe
the computation of the upper hull, for the task of computing the lower hull is perfectly similar.
Our algorithm is partitioned into three distinct stages that we outline next. Stage 1 is simply a
preprocessing stage in which the upper hulls of several subsets of the input are computed using an
optimal mesh algorithm [33]. A second goal of this stage is to establish two properties referred to
as (H) and (S) that will become basic invariants in our algorithm. Stage 2 involves partitioning the
mesh into a number of submeshes, each containing a suitably chosen number of rows of the original
mesh. The specific goal of this stage is to compute the upper hull in each of these submeshes.
Finally, Stage 3 proceeds to combine the upper hulls produced in Stage 2, to obtain the upper hull
of S. The detailed description of each of these stages follows.
Stage 1. fPreprocessingg At this point we view the original mesh R as consisting of the submeshes
R j;k described above. In each R j;k we compute the upper hull using an optimal convex hull algorithm
for meshes [3]. By virtue of Corollary 4.2, this task takes O(x) time.
In every R j;k , in addition to computing the upper hull, we also choose a sample, that is, a
subset of the vertices on the corresponding upper hull. The sample is chosen to contain the first
hull vertex in every column of R j;k in top-down order. As a technicality, the last sample vertex
coincides with the last hull vertex in R j;k . In addition to computing the upper hull and to selecting
the sample, the following information is computed in Stage 1.
(H) for every hull vertex, its rank on the upper hull, along with the identity and coordinates of its left and right neighbors (if any) on the upper hull computed so far;
(S) for every sample vertex, its rank within the sample, along with the identity and coordinates
of its left and right neighbors (if any) in the sample.
Note that the information specified in (H) and (S) can be computed in time O(x) by using local
communications within every R j;k .
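For reference, the upper-hull computation that Stage 1 performs inside each R_{j,k} can be prototyped sequentially. The sketch below is not the mesh algorithm of [3]; it is only the standard monotone-chain scan for points already sorted by x-coordinate, shown to make the geometric step concrete (function and variable names are ours).

    def cross(o, a, b):
        # signed area of triangle (o, a, b); > 0 means b lies left of the ray o -> a
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def upper_hull(points):
        # upper hull of points sorted by increasing x (general position assumed)
        hull = []
        for p in points:
            # pop vertices that would make a left (counterclockwise) turn
            while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
                hull.pop()
            hull.append(p)
        return hull

    # Example:
    print(upper_hull([(0, 0), (1, 3), (2, 1), (3, 4), (4, 0)]))
    # -> [(0, 0), (1, 3), (3, 4), (4, 0)]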
Stage 2. {Horizontal Stage} During this stage, we view the original mesh as consisting of submeshes R_j (1 ≤ j ≤ M/y) of size y × N each, with R_j involving the submeshes R_{j,1}, R_{j,2}, ..., R_{j,N/x}. The task specific to this stage involves computing the upper hull of the points in every R_j (1 ≤ j ≤ M/y), while maintaining the conditions (H) and (S) invariant. Basically, this stage consists of repeatedly finding the supporting line of two neighboring upper hulls and merging them, until only one upper hull remains. There are two crucial points to note: first, that the sampling strategy remains the one defined in Stage 1 and, second, that our way of choosing the samples, along with the definition of the submeshes R_j, guarantees that no pocket contains more than 2y vertices.
At the beginning of the i-th step of Stage 2, a generic submesh R_j contains the upper hulls U_1, U_2, ..., with U_1 being the upper hull of the points in R_{j,1}, ..., R_{j,2^{i-1}}, U_2 standing for the upper hull of the points in R_{j,2^{i-1}+1}, ..., R_{j,2^i}, and so on. In this step, we merge each of the N/(2^i x) consecutive pairs of upper hulls. For further reference we state the following result.
Lemma 4.3. The i-th step of Stage 2 can be performed in O(y + (N/(2^i xy)) log n) time; moreover, the invariants (H) and (S), assumed to hold at the beginning of the i-th step, continue to hold at the end of the step.
Proof. To make the subsequent analysis of the running time more transparent and easier to
understand, we shall partition the computation specific to the i-th step into four distinct substeps.
Let U_{2r−1} and U_{2r} be two generic upper hulls that get merged in the i-th step. The data movement and computations described for U_{2r−1} and U_{2r} are being performed, in parallel, in all the other pairs of upper hulls that get merged in the i-th step.
Substep 1. The sample vertices in U_{2r−1} and U_{2r} are moved to the prescribed row of R_j using local communications only. This task takes, altogether, O(y) time.
Substep 2. Compute the supporting line of the samples in U_{2r−1} and U_{2r}. Proceeding as in Lemma 3.2, this task takes O(log n) time.
Substep 3. Using Lemma 3.4, compute the supporting line of U_{2r−1} and U_{2r}.
Substep 4. Once the supporting line of U_{2r−1} and U_{2r} is available, eliminate from U_{2r−1} and U_{2r} those vertices that no longer belong to the new upper hull.
The motivation for the data movement in Substep 1 is twofold. On the one hand we wish to
dedicate row buses to the computation of supporting lines and, on the other, we want to perform
all local movements necessary for the subsequent broadcast operations in all the submeshes at the
same time. This will ensure that only O(y) time is spent in local data movement altogether.
A similar trick bounds the time spent in local movement in Substep 4. For definiteness, assume
that the supporting line of U_{2r−1} and U_{2r} is achieved by vertices u of U_{2r−1} and v of U_{2r}. Note that this information is available for every pair of merged upper hulls at the end of Substep 3. At this moment, we mandate the processor holding vertex u to send to the prescribed row of R_j a packet containing the coordinates of u and v, along with the rank of u in U_{2r−1} and the rank of v
in U 2r . This is done using local communications only. Proceeding in parallel in all submeshes, the
time spent on this data movement is bounded by O(y) altogether.
Next, to correctly update the upper hull of the union of U 2r\Gamma1 and U 2r we need to eliminate all
the vertices that are no longer on the upper hull. For this purpose, we use the information that is
available in the prescribed row, by virtue of the previous data movement, in conjunction with broadcasting on the bus in that row. Specifically, the processor in that row
that has received the packet containing information about u and v by local communications, will
broadcast the packet along the row bus. Every processor in this row holding a vertex of U 2r\Gamma1 or U 2r
retains the packet being sent. After all the broadcast operations are done, the processors that have
retained a packet, transmit the packet vertically in their own column, using local communications
only.
Upon receipt of this information, every processor in R j storing a vertex in U 2r\Gamma1 or U 2r can
decide in O(1) time whether the vertex it stores should remain on the upper hull or not. Therefore,
the update is correct and can be performed in O(y) time. We must also show that the invariants
(H) and (S), assumed to hold at the beginning of the i-th step, continue to hold at the end of the
step.
To preserve (H), every vertex on the upper hull of the union of U_{2r−1} and U_{2r} must be able
to compute its rank in the new hull and to identify its left and right neighbors. Clearly, every
vertex in the new upper hull keeps its own neighbors except for u and v, which become each other's
neighbors. To see that every vertex on the new convex hull is in a position to correctly update its
rank, note that all vertices on the hull to the left of u keep their own rank; all vertices to the right
of v update their ranks by first subtracting 1 plus the rank of v in U 2r from their own rank, and
then by adding the rank of u in U_{2r−1} to the result. All the required information was made available
as described previously. Thus, the invariant (H) is preserved.
To see that invariant (S) is also preserved, note that every sample vertex to the left of u is still
a sample vertex in the new hull. Similarly, every sample vertex to the right of v is a sample vertex
in the new hull and v itself becomes a sample vertex as illustrated in Figure 7, where vertices that
are no longer on the convex hull are hashed and samples are represented by dark circles. Therefore,
all sample vertices in the new upper hull can be correctly identified. In addition, all of them keep their old neighbors in the sample set, except for two sample vertices: one is the sample vertex in the column containing u and the other is v. In one more broadcast operation these sample vertices can find their neighbors. In a perfectly similar way, the rank of every sample vertex within the new sample set can be computed. Therefore, the invariant (S) is also preserved.

Figure 7: Preserving (S)
We are now in a position to clarify the reasons behind computing the supporting line of U_{2r−1} and U_{2r} in the prescribed row of R_j. The intention is to dedicate the bus in the first
row to the first pair of upper hulls to be merged in this step, the second bus to the second pair of
upper hulls, and so on. Once the buses have been committed in the way described, the broadcasting
of data involving the first y pairs of hulls can be performed in parallel. Thus, once the relevant
information was moved to the prescribed row of R j as described in Substep 1, the supporting lines
in each group of y pairs of upper hulls in R_j can be computed in O(log n) time. Since there are ⌈N/(2^i xy)⌉ such groups, synchronizing the local movement in all the groups as described above guarantees that the i-th step of Stage 2 takes O(y + (N/(2^i xy)) log n) time.
This completes the proof of
Lemma 4.3.
To argue about the total running time of Stage 2, recall that y ≤ x and note that, by (1), the number of merging steps is at most log n. Summing the bound of Lemma 4.3 over all the steps, elementary manipulations show that, as long as log n ≥ 2, the running time of Stage 2 is in O(y log n + (N log n)/(xy)).
Since we want the overall running time to be restricted to O(x) we write

    (y + N/(xy)) log n = O(x).    (6)
Stage 3. {Vertical Stage} Recall that at the end of Stage 2, every submesh R_j contains the upper hull of the points stored by the processors in R_j. The task specific to Stage 3 involves repeatedly merging pairs of two neighboring groups of R_j's as described below.
At the beginning of the i-th step of Stage 3, the upper hulls in adjacent pairs, each involving 2^{i−1} consecutive submeshes R_j, are being merged. For simplicity, we only show how the pair of upper hulls of the points in R_1, ..., R_{2^{i−1}} and of the points in R_{2^{i−1}+1}, ..., R_{2^i} is updated into a new upper hull of the points in R_1, ..., R_{2^i}. To simplify the notation, we shall refer to the submeshes R_1, ..., R_{2^{i−1}} collectively as G_1, and to R_{2^{i−1}+1}, ..., R_{2^i} as G_2. What distinguishes Stage 3 from Stage 2 is that we no longer need sampling. Indeed, as we shall describe, the availability of buses
within groups makes sampling unnecessary. We shall also prove that in the process the invariant
(H), assumed to hold at the beginning of the i-th step, continues to hold at the end of the step.
Let U 1 and U 2 be the upper hulls of the points stored in G 1 and G 2 , respectively. As a first step,
we wish to compute the supporting line of U 1 and U 2 ; once this supporting line is available, the
two upper hulls will be updated into the new upper hull of all the points in G_1 and G_2. In the
context of Stage 3, the buses in the mesh will be used differently. Specifically, the horizontal buses
within every group will be used to broadcast information, making it unnecessary to move data to
a prescribed row. Without loss of generality, write U_1 = u_1, u_2, ..., u_p, with all the vertices in U_1 to the left of U_2. We assume that (H) holds, that is, the vertices in U_1 and U_2 know their rank within their own upper hull, as well as the coordinates of their left and right
neighbors (if any) in the corresponding upper hull.
To begin, the processor storing the vertex u_⌈p/2⌉ broadcasts on the bus in its own row a packet consisting of the coordinates of u_⌈p/2⌉ and its rank in U_1. In turn, the corresponding processor in the first column of the mesh will broadcast the packet along the bus in the first column. Every processor in the first column of the mesh belonging to G_2 will read the bus and then broadcast the packet horizontally on the bus in its own row. Note that as a result of this data movement, the processors in the group G_2 have enough information to detect whether the vertices they store achieve the supporting line to U_2 from u_⌈p/2⌉. Using the previous data movement in reverse, the unique processor that detects this condition broadcasts a packet consisting of the coordinates and rank of the point it stores back to the processor holding u_⌈p/2⌉.
By checking its neighbors on U_1, this processor detects whether the supporting line to U_2 from u_⌈p/2⌉ is also supporting for U_1. In case it is, we are done. Otherwise, the convexity of U_1 guarantees that half of the vertices in U_1 can be eliminated from further consideration. This process continues for at most ⌈log_2 p⌉ iterations. Consequently, the task of computing the supporting line of U_1 and U_2 runs in O(log n) time. Once the supporting line is known, we need to eliminate from
U 1 and U 2 the points that no longer belong to the new upper hull. As we are about to show, all
this is done while preserving the invariant (H).
Assume, without loss of generality, that some vertex u in U 1 and v in U 2 are the touching
points of the supporting line. To correctly update the upper hull of the union of U 1 and U 2 , we
need to eliminate all the vertices that are no longer on the upper hull. As a first step, the processor
holding vertex u broadcasts on its own row a packet containing the coordinates of u and v, along
with the rank of u in U 1 and the rank of v in U 2 . The corresponding processor in the first column
will broadcast the packet along the column bus. Every processor in the first column belonging to G_1 or G_2 will read the bus and then broadcast the packet horizontally on the bus in its own row. Upon receipt of this information, every processor in G_1 or G_2 storing a vertex in U_1 or U_2 can decide whether the vertex it stores should remain in the upper hull or not. Therefore, once the supporting
line is known, the task of eliminating vertices that no longer belong to the new upper hull can be
performed in O(1) time. To preserve (H), every point on the upper hull of the union of U 1 and U 2
must be able to compute its rank on the new hull and also identify its left and right neighbors.
Clearly, every vertex on the new upper hull keeps its own neighbors except for u and v, which
become each other's neighbors. To see that every vertex on the newly computed convex hull is in
a position to correctly update its rank, note that all vertices on the hull to the left of u keep their
own rank; all vertices to the right of v update their ranks by first subtracting 1 plus the rank of v in
U 2 from their own rank and by adding the rank of u in U 1 . All the required information was made
available in the packet previously broadcast. Thus, the invariant (H) is preserved. To summarize
our discussion we state the following result.
Lemma 4.4. The supporting line of U_1 and U_2 can be computed in O(log n) time, using
vertical broadcasting in the first column of the mesh only. Furthermore, the invariant (H) is
preserved.
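For reference, the supporting-line computation of Lemma 4.4 can be prototyped sequentially. The sketch below ignores the buses and the binary-search scheduling; it simply walks two x-separated upper hulls to their common supporting (tangent) vertices, and is meant only to make explicit what the broadcasts compute (function names are ours).

    def cross(o, a, b):
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    def supporting_line(U1, U2):
        # U1, U2: upper hulls sorted by increasing x, every point of U1
        # strictly to the left of every point of U2 (as in Stages 2 and 3).
        # Returns indices (i, j) so that all hull vertices lie on or below
        # the line through U1[i] and U2[j].
        i, j = len(U1) - 1, 0          # rightmost vertex of U1, leftmost of U2
        moved = True
        while moved:
            moved = False
            # move i left while U1's left neighbor lies above the candidate line
            while i > 0 and cross(U1[i], U2[j], U1[i - 1]) > 0:
                i -= 1
                moved = True
            # move j right while U2's right neighbor lies above the candidate line
            while j < len(U2) - 1 and cross(U1[i], U2[j], U2[j + 1]) > 0:
                j += 1
                moved = True
        return i, j

    # Once the supporting line (u, v) is known, the merged upper hull is
    # U1[:i + 1] + U2[j:], mirroring the vertex elimination of Substep 4.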
Notice that we have computed the supporting line of U 1 and U 2 restricting vertical broadcasting
to the first column of the mesh. The intention was to assign the first column bus to the first pair
of upper hulls, the second bus to the second pair of upper hulls, and so on. Once the buses have
been committed in the way described, the computation involving the first N pairs of hulls can be
performed in parallel. Therefore, by virtue of Lemma 4.4, the supporting lines in each group of N pairs of upper hulls can be computed in O(log n) time. Since there are ⌈M/(2^i yN)⌉ such groups, the i-th step of Stage 3 takes O((M/(2^i yN)) log n) time.
To assess the running time of Stage 3, note that, as before, y ≤ x, that log(yN) ≤ log n, and that the number of iterations is at most log n. Summing the per-step bound over all the iterations, elementary manipulations (equations (7) and (8)) guarantee that the overall running time of Stage 3 is in O((M log n)/(yN)).
Since we want the overall running time to be restricted to O(x) we write

    (M log n)/(yN) = O(x).    (9)
It is straightforward, albeit slightly tedious, to verify that the values of x, y, M, and N that simultaneously satisfy constraints (1), (6), and (9) so as to minimize the value of x yield x = Θ(n^{1/8} log^{3/4} n).
To summarize our findings we state the following result.
Theorem 4.5. The problem of computing the convex hull of a set of n points in the plane sorted by increasing x coordinate can be solved in O(n^{1/8} log^{3/4} n) time on a mesh with multiple broadcasting of size n^{3/8} × n^{5/8}, to within log^{1/4} n factors in each dimension.
5 Conclusions and Open Problems
Due to their large communication diameter, meshes tend to be slow when it comes to handling data
transfer operations over long distances. In an attempt to overcome this problem, mesh-connected
computers have recently been enhanced by the addition of various types of bus systems. Such a
system, referred to as mesh with multiple broadcasting, has been adopted by the DAP family of
computers [38] and involves enhancing the mesh architecture by the addition of row and column
buses.
Recently it has been noted that for semigroup computations and for selection, square meshes
are not optimal in the sense that for a problem of a given size one can devise much faster algorithms
on suitably chosen rectangular meshes than on square meshes.
The contribution of this paper is to show that the same phenomenon is present in the problem
of computing the convex hull of a sorted set of points in the plane. The fastest known convex hull
algorithm to detect the extreme points of the convex hull of a binary image of size √n × √n runs in O(n^{1/6}) time; for n sorted points in the plane the fastest known algorithm [25] runs in O(n^{1/6} log^{2/3} n) time on a mesh with multiple broadcasting of size √n × √n. By contrast, we have shown that the problem can be solved in O(n^{1/8} log^{3/4} n) time on a mesh with multiple broadcasting of size n^{3/8} × n^{5/8}, to within log^{1/4} n factors in each dimension.
A number of problems remain open, however. In particular it would be interesting to know
whether the convex hull algorithm developed in this paper can be applied to other computational
geometry tasks such as triangulating a set of points in the plane. A second question is whether
sampling can be used to solve the convex hull problem in higher dimensions. Finally, it would be
interesting to know whether the sampling used in this paper yields fast convex hull algorithms
for sorted points on other popular massively parallel architectures. In particular, it is not known
whether the same approach works for the reconfigurable mesh, that is, a mesh-connected machine
augmented with a dynamically reconfigurable bus system. To the best of our knowledge no such
results have been reported in the literature.
Acknowledgement: The authors would like to thank Mark Merry and three anonymous referees
for many insightful comments that greatly improved the quality of the presentation. We also thank
Professor Batcher for his professional way of handling our submission.
--R
Optimal bounds for finding maximum on array of processors with k global buses
Parallel algorithms for some functions of two convex polygons
Parallel Computational Geometry
Computer Vision
Square meshes are not always optimal
Design of massively parallel processor
A fast selection algorithm on meshes with multiple broadcasting
Convex polygon problems on meshes with multiple broadcasting
Convexity problems on meshes with multiple broadcasting
A unifying look at semigroup computations on meshes with multiple broadcasting
Finding maximum on an array processor with a global bus
Time and VLSI-optimal convex hull computation on meshes with multiple broadcasting
Segmentation of Cervical Cell Images
Computational geometry on the systolic chip
Designing efficient parallel algorithms on mesh connected computers with multiple broadcasting
Pattern classification and scene analysis
Computer architecture for spatially distributed data
Leftmost one computation on meshes with row broadcasting
Iterative algorithms for planar convex hull on mesh-connected arrays
Array processor with multiple broadcasting
Image computations on meshes with multiple broadcast
Obstacle growing in a non-polygonal world
Introduction to parallel algorithms and architectures: arrays
IEEE Transactions on Computers
An efficient VLSI architecture for digital geometry
a configurational space approach
IEEE Transactions on Parallel and Distributed Systems
Connection autonomy and SIMD computers: a VLSI implementation
Efficient parallel convex hull algorithms
Mesh computer algorithms for computational geometry
Finding connected components and connected ones on a mesh-connected parallel computer
Optimal convex hull algorithms on enhanced meshes
The AMT DAP 500
Computational Geometry - An Introduction
Computational Geometry
Movable separability of sets
Computational aspects of VLSI
--TR
--CTR
Dharmavani Bhagavathi , Himabindu Gurla , Stephan Olariu , Larry Wilson , James L. Schwing , Jingyuan Zhang, Time- and VLSI-Optimal Sorting on Enhanced Meshes, IEEE Transactions on Parallel and Distributed Systems, v.9 n.10, p.929-937, October 1998
R. Lin , S. Olariu , J. L. Schwing , B.-F. Wang, The Mesh with Hybrid Buses: An Efficient Parallel Architecture for Digital Geometry, IEEE Transactions on Parallel and Distributed Systems, v.10 n.3, p.266-280, March 1999
Venkatavasu Bokka , Himabindu Gurla , Stephan Olariu , James L. Schwing, Podality-Based Time-Optimal Computations on Enhanced Meshes, IEEE Transactions on Parallel and Distributed Systems, v.8 n.10, p.1019-1035, October 1997
Venkatavasu Bokka , Himabindu Gurla , Stephan Olariu , James L. Schwing , Larry Wilson, Time-Optimal Domain-Specific Querying on Enhanced Meshes, IEEE Transactions on Parallel and Distributed Systems, v.8 n.1, p.13-24, January 1997 | parallel algorithms;image processing;computational geometry;convex hulls;pattern recognition;meshes with broadcasting |
629409 | Globally Consistent Event Ordering in One-Directional Distributed Environments. | AbstractWe consider communication structures for event ordering algorithms in distributed environments where information flows only in one direction. Example applications are multilevel security and hierarchically decomposed databases. Although the most general one-directional communication structure is a partial order, partial orders do not enjoy the property of being consistently ordered, a formalization of the notion that local ordering decisions are ensured to be globally consistent. Our main result is that the crown-free property is necessary and sufficient for a communication structure to be consistently ordered. We discuss the computational complexity of detecting crowns and sketch typical applications. | Introduction
We consider the ordering of events in a distributed environment. For our
purposes, a distributed environment is one where a set of events compose
some computation, the events occur at a network of sites, and sites communicate
by passing messages. Some applications impose restrictions on the
communication structure, and these restrictions can be exploited to make
more efficient ordering decisions. In this paper we consider restrictions that
force communication to occur in one direction only. Our particular focus is
to determine the largest class of such communication structures that are capable
of guaranteeing consistent ordering decisions without resort to further
global synchronization.
A common task in a distributed environment is to decide the ordering
of any pair of events. A standard approach to this problem is Lamport's
timestamp algorithm [Lam78], which can be used to impose a partial order
on events. The basic idea behind Lamport's algorithm is that each event
at a site is marked with a unique local timestamp. Timestamps are drawn
from some monotonically increasing sequence, such as an integer counter. If
site A sends a message to site B, then A includes the most recent timestamp
at A in the message. Upon receipt of the message, B increases its current
timestamp to the timestamp value in the message, if necessary. The ordering
of any pair of events is determined in part by consulting the corresponding
timestamps and in part by consulting the record of message receipts.
Given events e 1 and e 2 , Lamport's algorithm yields three possible out-
comes, namely that e 1 precedes e 2 , that e 2 precedes e 1 , or that e 1 and e 2 are
concurrent. The interpretation of the last possibility is that it is unknown,
and perhaps unimportant, which of e 1 or e 2 'really' happened first. If all
events must be ordered, concurrent events can be forced into a partial order
according to the corresponding timestamps. Resolving the case of two identical
timestamp values with a static precedence order on sites yields a total
order.
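A minimal sketch of this timestamping scheme, with class and method names of our own choosing, is:

    class Site:
        # a site with an integer event counter, following the scheme described above
        def __init__(self, site_id):
            self.site_id = site_id
            self.clock = 0

        def local_event(self):
            # mark a new local event with a unique local timestamp
            self.clock += 1
            return (self.clock, self.site_id)

        def prepare_message(self):
            # a message carries the most recent timestamp at the sender
            return self.clock

        def receive(self, msg_clock):
            # raise the local timestamp to the value in the message, if necessary
            self.clock = max(self.clock, msg_clock)

    # Concurrent events with equal counters are broken by the static site order,
    # e.g. (3, 'A') precedes (3, 'B') precedes (4, 'A').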
A motivation for this paper is the observation that some of the ordering
decisions made to obtain a total order are inherently artificial, and so,
unsurprisingly, no single algorithm suits all applications. For example, in
a transaction processing application, suppose that a site receives a message
about a transaction that a timestamping algorithm determines to be far in
the past at the receiving site. If the site is to accept the message, then the
site may be obliged to unwind all later transactions, incorporate the effect of
the remote transaction, and then redo the unwound transactions. From the
perspective of the receiving site, it would be more efficient for the ordering
of the event to be determined, at least in part, by the clock at the site that
receives the message rather than by the clock at the site where the event
occurred.
Allowing event ordering to be determined upon receipt of a corresponding
message at a given site can lead to inconsistencies. For example, suppose that
event e 1 occurs at site A and, concurrently, event e 2 occurs at site B. Let
A send a message about e 1 to B and B send a message about e 2 to A. The
ordering choices that involve the least rework are for A to order e 1 prior to
e 2 and for B to order e 2 prior to e 1 . Each local order is consistent, but there
is no global consistent order.
This simple example shows that if ordering of otherwise concurrent events
is to be done by the site receiving a message, the structure by which sites
communicate must be antisymmetric. Otherwise, in the absence of some
other global synchronization mechanism, the type of inconsistency exhibited
above can arise. It is not difficult to extend the above example to show
that the transitive closure of the communication structure must also be an-
tisymmetric, and hence an acyclic communication structure is necessary in
general.
Although a restriction to acyclic communication structures might seem
too strong to be useful, there are applications, such as multilevel security
[BL76, Den82] and hierarchical databases [HC86], that mandate such restrictions
on information flow. For example, in a multilevel security environment,
suppose that site A encapsulates a database at the 'Secret' level and site B
encapsulates a database at the 'Unclassified' level. Communication from A
to B is prohibited to prevent the leakage of classified information. It is satisfactory
for A to make an ordering decision about an event at B upon receipt
of the corresponding message since B has no opportunity to make a conflicting
decision. Indeed, the existence of events at A, as well as any information
that depends on events at A, must be hidden from B to satisfy the security
requirements.
A general acyclic structure is a partial order, but it is known that partial
orders by themselves do not lead to globally consistent orderings, as is reviewed by example later in the paper (cf. Figure 1). In the multilevel security do-
main, a restriction on partial orders to lattices is common [BL76, Den82]. A
lattice is a partial order in which each pair of sites has a unique least upper
bound and a unique greatest lower bound. Unfortunately, lattices by
themselves also do not lead to consistent orderings. In [AJ93b], it was shown
that for a variety of concurrency control algorithms for multilevel, replicated,
secure databases [JK90, Cos92, KK92], lattices do permit inconsistent order-
ings, i.e. unserializable execution histories. Serializability is a major concern
in database concurrency control, and conditions that lead to serialization
problems are correspondingly important. A restriction to planar lattices is
sufficient to avoid the identified serializability problem for the cited concurrency
control algorithms, but planarity is not a necessary condition. In a
related paper, it was shown that for component-based timestamp generation
[AJ93a], a planar lattice is sufficient, but again unnecessary, for consistent
ordering.
In this paper, we develop a formal notion called the consistently-ordered
property to describe communication structures that ensure globally consistent
ordering decisions without resort to further global synchronization. Our
main result is showing that the consistently-ordered property is equivalent to
a standard characterization of partial orders known as the crown-free prop-
erty. In terms of the previous paragraph, we show that crown-freedom is both
a necessary and a sufficient condition to guarantee the consistently-ordered
property. The absence of crowns in partial orders has other useful applica-
tions, for example [DRW82] exploit crown-free partial orders to develop an
efficient scheduling algorithm.
The structure for this paper is as follows. In section 2, we supply a model
for event ordering in a distributed environment with a one-way communication
structure. The model yields a formal definition of the consistently-
ordered property. In section 3, we introduce the crown-free property and
show that it is equivalent to the consistently-ordered property. In section 4,
we discuss the computational complexity of deciding whether a communication
structure is crown-free. In section 5 we make some observations about
our results and sketch applications to multilevel security and hierarchical
databases. The reader unfamiliar with these applications may wish to read
section 5.2 first. In section 6 we conclude the paper.
We begin with a remark on the notation. We have chosen to present our
formalisms with the conventions of the Z notation. For the most part, Z
notation follows typical set theory; where different, we give an explanatory
note.
Let Classes be a set, an element of which we refer to as a class. As an
example, in a multilevel security application Classes might correspond to all
possible security classifications, such as 'Confidential', `Secret-NATO', and
so on. Let P = {P_1, P_2, ...} be some finite subset of Classes, and let < be a relation on P that is antisymmetric and transitive. To extend the multilevel security example above, P corresponds to those security classifications that are actually employed, and < is the dominance relation between security classifications.
S = (P, <) is a partial order. If P_i < P_j we say that P_j dominates P_i, which we also write P_i ≤ P_j to allow the case where P_i = P_j. If neither P_i ≤ P_j nor P_j ≤ P_i holds, we say that P_i and P_j are incomparable. Finally, we
assume that S has a greatest element; we discuss relaxing this assumption
later in the paper.
Let Events be a set, an element of which we refer to as an event. As
an example, in a database application Events might correspond to the set
of all possible transactions. Let E = {e_1, e_2, ...} be some finite subset of
Events. To extend the database example, E typically corresponds to the set
of committed transactions.
We associate every event in E with some class in P. The (partial) function L : Events → Classes captures this mapping. Combining our prior examples, we might associate each transaction with a particular security classification, e.g. L(e_1) = Secret means that transaction e_1 is classified as 'Secret'.
Event e is said to be local to P if L(e) = P. Event e is visible at class P if there exists a P_j ≤ P such that e is local to P_j. We define E_P to be the set of all events local to P: E_P == {e : Events | L(e) = P}.
Informally, this definition reads, 'E P is the set of all e of type Events such
that L(e) is P '.
We require that local events at any given class be totally ordered. At first,
it might appear that it would be more desirable to only require a partial order
on local events. However, such an approach allows remote sites to extend the
partial order inconsistently. For example, in a transaction processing context,
our model requires each local site to produce a total serialization order for
local transactions, even though some pairs of transactions might not conflict.
The reason is that two remote sites might induce different serialization orders
by the scheduling of further transactions, thus precluding a global consistent
order.
To model the total order of local events, let A P be a sequence of the
events in E P . A P is total and injective with respect to E P ; each local event
in E P appears exactly once in A P .
In our model, ordering decisions at class P are constrained only by ordering
decisions at dominated classes, i.e. classes P_j < P. Suppose e_1 and e_2 are events visible at P. If e_1 and e_2 are ordered at some class P_j where P_j < P, class P must respect that ordering. On the other hand, if e_1 and e_2 are not ordered at any class P_j where P_j < P, class P is free to choose either ordering for the two events. Also, for the same reason that
each site must totally order local events, each site must also totally order all
visible events. We introduce some machinery to formally express this idea.
We define the subsequence predicate _ v _ on pairs of event sequences as follows. Let A and B be two sequences of events. Then A v B is true
iff A can be obtained from B by discarding events in B.
We define global(P ) to be an injective sequence of events visible at P
such that:
1. A P v global(P )
2. for all P j : Classes such that P j < P, global(P j ) v global(P )
The first condition on global(P ) states that global(P ) must respect the local
order at P . The second condition on global(P ) states that global(P ) must
respect the total orders chosen at all dominated classes P j . The second
constraint reads, 'for all P j of type Classes where P j is dominated by P , it
is the case that global(P j ) is a subsequence of global(P )'. Note that P is a
free variable in both constraints.
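As an illustration of these two conditions, the sketch below (names ours) tests whether some global(P) exists for given sequences: it accumulates the pairwise orderings imposed by A_P and by each global(P_j) with P_j < P, and checks whether the resulting relation is acyclic, in which case any topological sort of the visible events yields an admissible global(P).

    from itertools import combinations

    def is_subsequence(a, b):
        # the predicate A v B: a can be obtained from b by discarding events
        it = iter(b)
        return all(x in it for x in a)

    def global_order_exists(local_seq, dominated_globals):
        # True iff some global(P) respects local_seq and every dominated global
        constraints = set()
        for seq in [local_seq] + list(dominated_globals):
            for x, y in combinations(seq, 2):      # x occurs before y in seq
                constraints.add((x, y))
        events = {e for pair in constraints for e in pair}
        succ = {e: set() for e in events}
        indeg = {e: 0 for e in events}
        for x, y in constraints:
            if y not in succ[x]:
                succ[x].add(y)
                indeg[y] += 1
        # Kahn-style acyclicity check
        ready = [e for e in events if indeg[e] == 0]
        seen = 0
        while ready:
            e = ready.pop()
            seen += 1
            for f in succ[e]:
                indeg[f] -= 1
                if indeg[f] == 0:
                    ready.append(f)
        return seen == len(events)

    # Example: a class that must respect both <e1, e2> and <e2, e1> has no global order
    print(global_order_exists([], [["e1", "e2"], ["e2", "e1"]]))   # False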
For a given partial order S and set of events E, there may be many
possible values global(P ), i.e. global is a relation, not a function. However,
for some choices of global(P_j) at dominated classes, global(P) may not exist. This possibility is exhibited in table 1 and figure 1.

  class    E_class    A_class    global(P_class)
  P_1      {e_1}      ⟨e_1⟩      ⟨e_1⟩
  P_2      {e_2}      ⟨e_2⟩      ⟨e_2⟩
  P_3      {}         ⟨⟩         ⟨e_1, e_2⟩
  P_4      {}         ⟨⟩         ⟨e_2, e_1⟩
  P_max    {}         ⟨⟩         does not exist

Table 1: Event And Ordering Assignments

Figure 1: Example Partial Order That Is Not Consistently-Ordered

Take the partial order of figure 1, in which P_3 and P_4 each dominate both P_1 and P_2 and P_max dominates P_3 and P_4, and
consider the event and ordering assignments given in table 1. For the given
events and orderings, global(P max ) does not exist.
To provide a more concrete interpretation of the difficulty exhibited in
table 1, suppose that e 1 and e 2 are transactions at classes P 1 and P 2 , re-
spectively. Further suppose that global(P ) represents some equivalent serial
order for the execution history of all transactions visible at P . In the example
shown, the value of global(P_3) indicates that P_3 serializes e_1 before e_2,
perhaps by the scheduling of some (unshown) local transaction. Simi-
larly, the value of global(P 4 ) indicates that P 4 serializes e 2 before e 1 . Since
do not communicate and are unable to detect that a
global serialization anomaly has arisen. However, the inconsistent serialization
is apparent at P max , and is reflected by the fact that global(P max ) does
not exist.
The purpose of this paper is to find the largest class of partial orders that
still guarantees the existence of global(P) for any class P, no matter what choices are made in constructing global(P_j) for dominated classes P_j < P.
Informally, each class in a conforming structure can make arbitrary ordering
decisions about otherwise unordered events and still be guaranteed that the
resultant global ordering is consistent. We formalize the notion of consistent
global ordering as follows:
Definition: The partial order S is consistently-ordered at P iff for all classes P_j < P and all possible sequences of events global(P_j), there
exists at least one global(P ). To extend the notion of being consistently-
ordered to the entire partial order, we say that S is consistently-ordered iff S
is consistently-ordered at the greatest element.
As an example, the partial order in figure 1 is consistently-ordered at P_i, 1 ≤ i ≤ 4, but not at P_max, as shown by the choices exhibited in table
1. Hence the partial order in figure 1 is not consistently-ordered.
3 The Crown-Free Property
A directed graph Q = (V_Q, E_Q) is an induced subgraph of D = (V_D, E_D) if V_Q ⊆ V_D and E_Q = {e : E_D | head and tail of e belong to V_Q}. Informally, an induced subgraph retains as many edges as possible from the parent graph.
A crown in S = (P, <) is a subset {x_1, ..., x_n, y_1, ..., y_n} of P, n ≥ 2, such that x_j < y_j and x_{(j mod n)+1} < y_j for 1 ≤ j ≤ n, and these are the only comparabilities among the 2n elements.
A crown is exhibited in figure 2. (With some imagination, the 'crown' can
be envisioned in three dimensions.) S is crown-free iff S has no crown. The
above definition of a crown is a variation of one in [Bou85] and is similar to
[Riv85, page 531].
Crowns in directed graphs are defined analogously to crowns in partial
orders. Note that a directed acyclic graph D contains a crown iff there exists
an induced subgraph Q of D such that:
1. Q is bipartite.
2. The undirected version of Q is cyclic.
For example, the directed graph shown in figure 1 has a crown since the
desired Q can be obtained by discarding P max . To continue the example, let
S = (P, <) be the partial order whose Hasse diagram is shown in figure 1 and D be the directed graph whose nodes are P and whose edges are given by <. Note that, by definition, < is transitively closed even though the corresponding
Hasse diagram, e.g. figure 1, does not show the transitive edges for purposes
of clarity. S has a crown since if we discard P max from D, we obtain the
same Q as before.
We now give the main result of the paper:
Theorem 1 S is consistently-ordered iff S is crown-free.
Proof:
⇒: Suppose S is consistently-ordered. Then S is crown-free.
For the sake of contradiction, consider an S that has at least one crown.
Let Q be such a crown. Label the minimal n nodes in Q as P_1, ..., P_n and label the maximal n nodes in Q as P'_1, ..., P'_n. Without loss of generality, relabel the 2n nodes in Q as necessary such that P'_i dominates P_i and P_{(i mod n)+1}, as in figure 2. Let e_i be an event local to P_i, 1 ≤ i ≤ n, and let global(P'_i) be ⟨e_{((i−1) mod n)+1}, e_{(i mod n)+1}⟩. Let P_max denote the maximal element in P. Then global(P_max) does not exist, since we are requiring the "sequence" to accommodate the orderings ⟨e_1, e_2⟩, ⟨e_2, e_3⟩, ..., ⟨e_n, e_1⟩,
which is impossible. Hence S is not consistently-ordered, which is a contra-
diction. Therefore, S is crown-free.

Figure 2: A Crown

⇐: Suppose S is crown-free. Then S is consistently-ordered.
For the sake of contradiction, consider a partial order S that is not
consistently-ordered. Consider P , a minimal element in P such that global(P )
does not necessarily exist.
Consider an instance where P is unable to form global(P ). It must be the
case that P is faced with an inconsistent set of event ordering requirements.
Each event ordering is of the form ⟨e_x, e_y⟩, and each requirement to so order the events is imposed by some global(P_j), where P_j < P.
Suppose that T is a minimal sized set of inconsistent event orderings. Let the size of T be n, and relabel the events as needed such that T may be listed as ⟨e_1, e_2⟩, ⟨e_2, e_3⟩, ..., ⟨e_n, e_1⟩.
We make several observations about T . Since T is assumed to be of minimal
size, each event in the list appears exactly twice. Also, for each event e_i, L(e_i) < P. Since T is minimal, we have that L(e_i) and L(e_j) are incomparable for i ≠ j. To see why, suppose that L(e_i) < L(e_j). Then the ordering of e_i and e_j is done exactly once, namely at L(e_j). Hence by substituting all occurrences of e_j for e_i we could reduce the size of T, which is a contradiction. Similar arguments apply if L(e_j) < L(e_i) or L(e_i) = L(e_j).
Thus there are n incomparable classes where the n events giving rise to T
are local. We collect these n classes in the set B low . Note that B low is an
antichain.
For each ⟨e_i, e_{(i mod n)+1}⟩ in T, consider the least upper bound class of L(e_i) and L(e_{(i mod n)+1}). The claim is that there are exactly n distinct least upper bound classes for the pairs of classes in T. To see why, suppose that there are more than n least upper bound classes. Then some pair of
classes in T would have at least two least upper bounds, and from figure 1 it
is clear that S has a crown, which is a contradiction. Now suppose that there
are fewer than n least upper bound classes. Then at least two pairs in T must
share a least upper bound, denoted P j . These two pairs must comprise at
least 3 distinct events, and global(P j ) totally orders these events. But then
T is not of minimal size, which is a contradiction.
We collect the n least upper bound classes into the set B high . Classes in
B high are incomparable, or else we could again argue that T is not minimal.
Note that B high is also an antichain.
Now let Q be the directed graph whose nodes are B_low ∪ B_high and whose edges are defined by <. Q is bipartite and its undirected version is cyclic. The
cycle is exhibited by the structure T . Therefore S has a crown, namely Q,
which is a contradiction. Again, figure 2 provides an illustration. 2
4 Computational Complexity
We now use the results of section 3 to give a polynomial-time algorithm for
determining whether a communication structure S ensures globally consistent
event orderings. By Theorem 1, it suffices to show that S is crown-free. The
naive algorithm, explicit checking of each subset for the bipartite property
and for the existence of a cycle, is obviously exponential. Bouchitté [Bou85]
gives a polynomial-time algorithm, which we summarize below, for the crown
detection problem. The crown detection algorithm is based on deriving a
bipartite graph from the partial order, then using an 'elimination scheme' to
examine this bipartite graph.
The split graph of a partial order (P, <) is the bipartite graph G = (V ∪ V', E) where each x ∈ P is associated with one vertex v ∈ V and one vertex v' ∈ V'; there is an edge (v, w') in E if and only if x < y, where v is associated with x and w' is associated with y. Bouchitté [Bou85] and Trotter [Tro81] establish that crowns in (P, <) give rise to crowns in the
split graph and that any crown in the split graph having more than 4 nodes
comes from a crown in (P, <). Thus, it suffices to check (P, <) for crowns of
size exactly four and then check the split graph, which is bipartite, for the
existence of crowns.
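A sketch of the split-graph construction (names ours): each class x contributes a 'lower' copy and an 'upper' copy, and an edge joins the lower copy of x to the upper copy of y exactly when x < y in the partial order.

    def split_graph(classes, less_than):
        # classes:   iterable of the elements of P
        # less_than: function (x, y) -> True iff x < y in the partial order
        # returns an adjacency-set representation over vertices ('L', x) and ('U', y)
        adj = {('L', x): set() for x in classes}
        adj.update({('U', y): set() for y in classes})
        for x in classes:
            for y in classes:
                if x != y and less_than(x, y):
                    adj[('L', x)].add(('U', y))
                    adj[('U', y)].add(('L', x))
        return adj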
Note that a bipartite graph has a crown of size greater than 4 if and only
if it has a chordless cycle of length greater than 4. A bipartite graph in which
every cycle of length greater than 4 has a chord is called chordal. Bouchitté shows that a bipartite graph G is chordal if and only if the following iterative
procedure results in a graph with no edges:
while not done do
(1) if G contains a vertex with only one neighbor,
then remove such a vertex (and the incident edge)
(2) else if G contains an edge (x,y) such that every
neighbor of x is adjacent to each neighbor of y
and vice-versa then remove such an edge (i.e.,
remove the two vertices and all incident edges).
(3) else done:= true
if G has no edges
then return (CHORDAL)
else return (NOT CHORDAL)
Let G and G 0 denote the graph at the beginning and end, respectively, of
an iteration. The correctness of the algorithm follows from the facts that:
- every chordal graph has an edge of the type described in condition (2);
- G is chordal iff G' is chordal [Bou85];
- no edge can be eliminated from a chordless cycle of length greater than 4.
Let n be the number of nodes and m the number of edges in the graph
arising from (P, <). Checking for crowns of size 4 can be done by brute
force in time O(n 4 ). The split graph has 2n nodes and m edges and can be
constructed in time O(m). The elimination algorithm has at most 2n
iterations. In the worst case, each iteration involves time O(n) to search for
a vertex that can be eliminated plus O(m) checks for whether an edge can be
eliminated, each of which takes time O(n^2). Thus, the total running time is O(n^4 + mn^3), which is polynomial. In practice, it should be possible to substantially improve
the running time by precomputing a list of candidates for elimination, then
updating this list on each iteration of the loop.
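A direct (unoptimized) rendering of the elimination scheme above, on an adjacency-set representation, is sketched below (function names are ours); a separate brute-force check for crowns of size exactly four, as noted earlier, is still required for the full crown test.

    def is_chordal_bipartite(adj):
        # adj maps each vertex to the set of its neighbors (symmetric)
        adj = {v: set(nbrs) for v, nbrs in adj.items()}   # work on a copy

        def remove_vertex(v):
            for w in adj.pop(v):
                adj[w].discard(v)

        def bisimplicial(x, y):
            # every neighbor of x is adjacent to each neighbor of y (and vice versa)
            return all(z in adj[w] for w in adj[x] for z in adj[y])

        while True:
            v = next((u for u in adj if len(adj[u]) == 1), None)     # condition (1)
            if v is not None:
                remove_vertex(v)
                continue
            edge = next(((x, y) for x in adj for y in adj[x]
                         if bisimplicial(x, y)), None)               # condition (2)
            if edge is None:
                break                                                # condition (3)
            x, y = edge
            remove_vertex(x)
            remove_vertex(y)
        return all(not nbrs for nbrs in adj.values())   # CHORDAL iff no edges remain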
We remark that the result of major importance is that the complexity
of crown detection in partial orders is polynomial rather than NP-complete.
We do not address the complexity of crown detection in arbitrary directed
graphs since the issue is not directly relevant to this paper. However, given
the variance in complexity results for standard graph problems dependent
on whether the graph in question corresponds to a partial order [Moh], it is
possible that crown detection in arbitrary directed graphs is NP-complete.
In this section, we make some observations on our results and discuss how our
results can be applied to problems in two areas, namely multilevel security
and hierarchical databases.
5.1 Observations
For the analysis given so far, we have assumed that the partial order
has a greatest element. (Recall that a partial order has a greatest
element iff the partial order has a unique maximal element). The reason
for the assumption is as follows. Suppose S has no greatest element, that S
has at least one crown, and that for every crown Q in S, at least one node
in Q is a maximal element in S. In this scenario, there is clearly still the
possibility for globally inconsistent event ordering, but no node in S is in a
position to observe the inconsistency. This lack of an observer complicates
the discussion, and so, for our analysis, we assumed that S had a greatest
element.
For practical purposes, it is not that important whether S has a greatest
element or not. In either case, if S has a crown, then globally inconsistent
ordering decisions are possible, if not necessarily observable.
Our second set of observations relates the crown-free property to some
other typical classifications of partial orders. An often employed special case
of a partial order is a lattice. In a lattice, each pair of classes has a unique least
upper bound and a unique greatest lower bound. Lattices are not necessarily
crown-free, as can be seen by considering the subset lattice on a set with
three elements. (See [AJ93a] or [AJ93b] for an elaboration of this example).
Another intuitively appealing special case of a partial order is a planar
partial order. A partial order is planar if its Hasse diagram is planar and
each edge in the Hasse diagram is monotone, i.e. edges are prohibited from
'looping around' the outside of the diagram. (See [Riv85, The Diagram] for
a fuller explanation.) By consulting the results of [Riv85], where the types of
structures that all nonplanar partial orders must contain are enumerated, we
note that the structure Q in figure 2 is on the proscribed list [Riv85, figure on
page 121]. Thus we can be sure that all planar partial orders are crown-
free. One can also see directly that the structure in figure 2 is nonplanar.
The existence of other structures on the list in [Riv85] demonstrates that
although the restriction to planarity is a sufficient condition for a partial
order to be consistently-ordered, it is not a necessary condition.
5.2 Applications
A primary concern in multilevel security is information leakage, such as information
in a 'Secret' database leaking to some process executing at the
'Unclassified' level. Leakage can occur in two ways - directly through an
overt operation such as reading a data item or indirectly through a covert
or signaling channel. Direct leakage can be accounted for by following so-called
mandatory access control policies such as the Bell-Lapadula model
[BL76, Den82].
Indirect leakage is more troublesome. In a covert or signaling channel,
information leaks by means of contention over some resource [BL76, Den82,
Lam73]. An example channel is provided by the read and write locks in a
conventional database. In a database with a locking protocol for concurrency
control, a read (or write) of a data item must be preceded by the acquisition
of a read (or write) lock. A request for a write lock, potentially made by a
transaction at the 'Unclassified' level, is delayed if a read lock has already
been granted, perhaps to a transaction at the 'Secret' level. The delay experienced
by the 'Unclassified' transaction can be used to infer activity at
the 'Secret' level. Hence `Secret' information leaks to the 'Unclassified' level
if 'Secret' transactions can obtain standard read locks on `Unclassified' data
items.
The problem of covert or signaling channels has been extensively studied.
One general approach is to physically separate components at one security
level from components at another, thus simplifying the argument that indirect
channels do not arise. A natural outgrowth of this trend is a distributed
implementation with the type of one-directional communication structure
that is the subject of this paper. In particular, multilevel, replicated secure
databases, first identified in [Com83], lend themselves to distributed implementations
Candidate concurrency control algorithms for multilevel, replicated secure
databases appear in [JK90, Cos92, KK92]. For the most part, these
algorithms assume that the communication structure between different security
classes is a lattice for reasons outlined in [Den82]. However, as noted
above, some lattices have crowns, and hence without additional synchronization
information, distributed implementations of multilevel, replicated
databases cannot guarantee serializable execution transaction histories for
lattices. Recognition of this problem in the published concurrency control
algorithms was in fact the beginning of the present paper. Given the interest
in constructing multilevel systems, a precise demarcation of the problematic
structures was clearly required. Solutions to the dilemma are to ensure that
the communication structure is crown-free, to modify the structure to be
crown-free if it is not, or to add additional synchronization measures, such
as a global clock.
Communication structures similar to those in multilevel security appear
in hierarchical databases [HC86]. Hierarchical databases partition a global
database by the access characteristic of transactions. A typical hierarchical
database might have a main database containing 'raw' information, and
derivative databases where transactions read, but do not write the raw data.
The reader is referred to [HC86] and [AJ93a] for more explanation of hierarchical
databases.
Hierarchical databases have no information leakage requirements. How-
ever, it is still undesirable for a derivative database to interfere with the
concurrency control at a main database. Such interference could take the
form of holding read locks on raw data items or forcing transactions at a
main database to abort to preserve serializability in a timestamp-ordering
protocol. The results of this paper can be applied to hierarchical databases
as follows: if the communication structure of a hierarchical database is restricted
to be crown-free, then a distributed implementation that guarantees
globally serializable transaction histories is possible without the introduction
of additional synchronization information.
6 Conclusion
Networks in which information flow is restricted to one direction in a distributed
network figure prominently in applications such as multilevel security
and hierarchical databases. In such networks, the requirements for
event ordering differ substantially from unrestricted networks, and indeed
improvements in ordering decisions are possible.
In this paper, we have defined the consistently-ordered property to describe
communication structures where local ordering decisions are guaranteed
to be globally consistent without the introduction of additional synchro-
nization, such as a centralized clock. For our main result, we employed the
crown-free property of partial orders to prove that a crown-free partial order
is equivalent to a consistently-ordered one. Fortunately, crown detection can
be carried out in polynomial time.
The results in this paper can be applied in any application with one-directional
communication structures, such as multilevel security or hierarchical
databases, to ensure that distributed applications enjoy desirable
properties, such as serializable execution histories.
Future Work
In [AJ93a], a component-based, timestamping algorithm was developed for
planar lattices. The results of this paper indicate that the timestamping algorithm
applies to crown-free partial orders. In [HC86], a proof technique called
the partitioned synchronization rule was presented for demonstrating the correctness
of database concurrency control algorithms. The proof technique
was proven for communication structures that are restricted to semitrees.
The results of this paper indicate that the partitioned-synchronization rule
applies to crown-free partial orders. Items for future work are to verify these
two conjectures.
Acknowledgements
It is a pleasure to acknowledge Ivan Rival for sharing his expertise on partial
orders, John McDermott for discussing the problem of consistent ordering,
and Jeff Salowe for considering the computational aspects of crown detection.
--R
Distributed timestamp generation in planar lattice networks.
Planar lattice security structures for multi-level replicated databases
Secure computer systems: Unified exposition and multics interpretation.
Chordal bipartite graphs and crowns.
Committee on Multilevel Data Management Security
Transaction processing using an untrusted scheduler in a multilevel database with replicated architecture.
Cryptography and Data Security.
Minimizing setups for cycle-free ordered sets
Perfect elimination and chordal bi-partite graphs
Partitioned two-phase locking
Transaction processing in multilevel-secure databases using replicated architecture
On transaction processing for multilevel secure replicated databases.
A note on the confinement problem.
Algorithmic aspects of comparability graphs and interval graphs.
Graphs and Order: The Role of Graphs in the Theory of Ordered Sets.
--TR
--CTR
Transaction Processing in Multilevel Secure Databases with Kernelized Architecture: Challenges and Solutions, IEEE Transactions on Knowledge and Data Engineering, v.9 n.5, p.697-708, September 1997 | distributed networks;partial orders;multilevel databases;graph algorithms;hierarchically decomposed databases;security;crowns |
629420 | An Example of Deriving Performance Properties from a Visual Representation of Program Execution. | AbstractThrough geometry, program visualization can yield performance properties. We derive all possible synchronization sequences and durations of blocking and concurrent execution for two process programs from a visualization mapping processes, synchronization, and program execution to Cartesian graph axes, line segments, and paths, respectively. Relationships to Petri nets are drawn. | Introduction
Systems to visualize the execution history collected from a program can help explain what happens
during program execution (e.g., [4, 10, 13, 15]). Visualization systems can lend insight into why a
Supported in part by National Science Foundation grant NCR-9211342.
performance measure, such as program execution time or mean waiting time for a resource, has a
particular value. The answer to "why" helps identify how to change the program to improve the
measure.
But visualization has a use beyond providing images on a computer monitor: one can formally
deduce properties about program execution from a visualization. This was first demonstrated by
Roman and Cox [19], who deduced correctness properties. Roman and Cox illustrate that safety
properties (properties that hold in all computation states, such as invariants) and progress properties
(properties that hold in a particular program state) can be verified by defining a mapping
of program states to a visual representation, and then observing whether the sequence of visual
images corresponding to an execution sequence satisfies a desired property. They discuss the ability
"to render invariant properties of the program state as stable visual patterns and to render
progress properties as evolving visual patterns."
This paper provides a second example, deducing the following performance properties from a
visualization of a class of concurrent programs:
Q1: the sequence of synchronization points where the program blocks,
Q2: the blocking duration at each synchronization point, and
Q3: the duration of concurrent execution between synchronization points.
The class consists of programs that meet the following assumptions: A program contains two
processes (A1). A process executes on a dedicated processor (A2). A process only blocks at
synchronization operations (A3). A synchronization operation in a process A is defined in terms
of a code segment C in the other process B: when A reaches the synchronization operation it
will block if and only if B is executing C. Each process is represented as a branchless directed
graph in which vertices represent code segments and edges always represent precedence relations
and may represent synchronization operations (A4). A process will optionally block when moving
from one vertex to another if the edge corresponds to a synchronization operation. The execution
time of the code segment corresponding to each vertex is an independent constant, exclusive of
time spent blocked (A5). Each process has an initial vertex in which the process starts execution
and, optionally, a final vertex in which the process terminates (A6). A process without a final
vertex never terminates.
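A data structure capturing assumptions A1-A6 might look as follows; this is only an illustrative sketch with names of our own choosing, not notation from the paper.

    from dataclasses import dataclass, field
    from typing import List, Tuple

    @dataclass
    class Vertex:
        name: str
        duration: float                 # constant execution time, excluding blocking (A5)

    @dataclass
    class Process:
        vertices: List[Vertex]          # branchless chain: initial vertex first (A4, A6)
        terminates: bool = True         # False models a never-terminating process

    @dataclass
    class SyncEdge:
        # the edge leaving `after_vertex` in process `proc` blocks while the *other*
        # process is executing the code segment `peer_vertex` (A3)
        proc: int                       # 0 or 1 (two processes, A1)
        after_vertex: str
        peer_vertex: str

    @dataclass
    class Program:
        processes: Tuple[Process, Process]
        sync_edges: List[SyncEdge] = field(default_factory=list)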
We argue next that the assumptions are reasonable. Consider the two process assumption
(A1). The initial solutions to some classic parallel programming problems - such as shared
memory mutual exclusion algorithms - were initially solved only for two processes. And one
important performance evaluation tool - queueing networks - started only with the ability to
solve just one kind of queue (M/M/1) in isolation. In principle the analysis presented here can be
extended from two to an arbitrary number of processes; see the conclusions (Section 7) for a discussion.
Regarding the constant time assumption (A5), Adve and Vernon [5] conclude based on seven
parallel applications that "it appears reasonable to ignore the variability in execution times when
estimating synchronization delays," and that an exponential task time assumption could actually
lead to more severe errors than a constant-time assumption. As for relaxing assumption A2,
multiprogramming has been modeled in past work that uses the visualization underlying our
work, and can be incorporated into the model presented here. The branchless assumption (A4) is
not as restrictive as it might first appear, because loops whose number of iterations is known can
be unrolled to obtain a branchless graph. Our analysis includes non-terminating programs (A6),
because certain long running programs repeatedly execute the same code, such as simulations and
reactive programs (programs that react to external stimuli on an ongoing basis, such as operating
systems).
Our solution method represents program execution by a Timed Progress Graph (TPG). To our
knowledge, TPGs were originally used in Operations Research to find minimum length schedules
for two jobs that share a set of machines [20, pp. 262-263]. However, a good and not necessarily
minimal schedule was found "by eye" rather than by a formal method. Later, a simplified form
of TPG, called an untimed progress graph (UPG), in which each delay in the timed transition
diagram is unity, was used to analyze deadlocks. (Apparently TPGs were reinvented in the
Computer Science field and attributed to Dijkstra [8].)
multidimensional, Cartesian graph in which the progress of each of a set of concurrent processes
is measured along an independent time axis. Each point in the graph represents a set of process
times." Kung, Lipski, Papadimitriou, Soisalon-Soininen, Yannakakis, and Wood [11, 17, 22, 24]
used UPGs to detect deadlocks in lock-based transaction systems. More recently Carson and
Reynolds used UPGs to prove liveness properties in programs with an arbitrary number of
processes containing P and V operations on semaphores that are unconditionally executed. Our
use of TPGs to analyze program performance properties is novel.
TPGs map the progress of each process to one Cartesian graph axis. Line segments represent
interprocess synchronization. A directed, continuous path that does not cross a segment represents
a particular execution of a program. This path may be found by computational geometric
algorithms to compute line segment intersections and for ray shooting.
The remainder of the paper is organized as follows. First we define TPGs and illustrate their
use on sample programs in §2. The representation of an execution of a program is then formalized
in §3. §4 presents an algorithm to solve the problems listed earlier. We then consider, in §5, a
special class of programs that displays periodic behavior and present an algorithm to solve for the
set of all possible periodic state sequences that can arise during any execution of a program. §6
relates TPGs to Petri Nets. Finally §7 contains conclusions.
2 Timed Progress Graphs
We first illustrate our approach to solving for quantities Q1 to Q3 from §1 on three programs.
process producer                      process consumer
X:
while (TRUE) do                       while (TRUE) do
    read X from disk;                     receive X from producer;
    send X to consumer;                   write X to disk;
od                                    od
Figure 1: Non-terminating producer/consumer program with one buffer.
2.1 Program 1: Non-terminating Producer/Consumer
Non-terminating Producer/Consumer
Figure 1 contains a producer/consumer program that uses one message buffer for interprocess
communication. Thus when the producer sends a message it fills the single buffer and thus must
wait until the consumer executes receive before performing the next send. Figure 2 contains
an equivalent graph representation of the program and satisfies assumptions A1 to A6. The
deterministic execution time of each graph vertex, excluding time spent blocked, is shown in
square braces. Edge conditions B1 and B2 represent blocking that may occur during a send and
receive, respectively. (For simplicity of presentation, we assume that a process blocks at the end
of a send or receive operation; this condition is straightforward to relax.) Recall from A3 that
specifying a synchronization operation requires identifying a code segment C for which a process
may block. Therefore in Fig. 2 conditions B1 and B2 are the following: (B1) The producer never
blocks when it performs the first send; furthermore the producer blocks when it performs the n-th
send (n > 1) if the consumer has not yet executed the (n−1)-st receive. (B2) The consumer
cannot execute the n-th (n ≥ 1) receive until the producer has executed the n-th send.
Figure 2: Graph model corresponding to Fig. 1. Numbers in square brackets refer to time spent in
each code segment. Labels on each edge denote conditions under which blocking occurs. Thick
circles represent initial vertices; there are no final vertices.
Recall that a TPG maps the progress of each process to one Cartesian graph axis; the result for
Fig. 1 is shown in Fig. 3. The sequence of graph vertices that each process passes through is mapped
to a sequence of intervals (denoted by grey lines in Fig. 3(a)) along the corresponding axis. Interval
widths correspond to execution times of vertices. Synchronization between processes is represented
in Fig. 3(b) by horizontal and vertical line segments in the plane placed at appropriate boundaries
of vertex intervals to prohibit a transition into a vertex. The line segments, called constraint
lines, are closed at the end point closest to the origin and open at the other end point. Each line
segment corresponds to a blocking condition in the graph. Consider B1 in Fig. 2. From Fig. 3(a),
the producer makes the first transition from the "read X from disk; send X to consumer" vertex
back to itself at time 4, the second at time 8, and the n-th at time 4n (in the absence of blocking).
Similarly, the consumer completes its first receive at time 1, its second at time 3, its third at time 5,
and its n-th at time 2n − 1. Therefore, by B1, the producer will block at time 4n (for n > 1) if the
consumer has not yet completed its (n−1)-st receive, that is, at points that do not exceed coordinate
2n − 3 on the axis representing the consumer. Thus the producer will block in any point (x, y)
meeting the following two conditions: x = 4n (for n > 1) and y < 2n − 3, corresponding to the vertical
lines in Fig. 3(b). Similarly, by B2, the consumer will block when it tries to complete the n-th
receive (at time 2n − 1) if the producer has not yet completed the n-th send (at time 4n). Thus
the consumer will block in any point (x, y) meeting the following two conditions: y = 2n − 1 and
x < 4n, corresponding to the horizontal lines in Fig. 3(b).
Figure 3: Representation of the one-buffer producer/consumer program (Fig. 2) by a TPG:
(a) mapping states to a TPG; (b) mapping synchronization to a TPG; (c) mapping program
execution to a TPG.
Given an initial point, execution of a program is represented by a point or a directed path in the
plane, called a timed execution trajectory (TET). The initial point is (0,0) if both processes start
simultaneously. A TET is a point only if the initial point represents a deadlocked state. Otherwise
the TET is a path and consists of a possibly infinite sequence of rays that have slope 0, 1, or ∞.
The ray has slope 0 (respectively, ∞) when the process corresponding to the vertical (respectively,
horizontal) axis is blocked, and slope 1 when both processes are running concurrently. Because
constraint lines represent forbidden state transitions, a TET cannot cross a constraint line. A
finite portion of the TET for the one buffer producer/consumer problem is the thick directed path
in Fig. 3(c). The general method of constructing a TET is given below.
TET construction rule: Given a TPG, a TET is constructed recursively as follows. Let upper
case letters with optional superscripts (i.e., G, G′, G^I) denote graph points. Let a point G denote
the ordered pair of coordinates (G_x, G_y). For any two points G′ and G″, [G′, G″) denotes the
line segment that is closed at G′, open at G″, and satisfies G′ < G″.
Rule I: If G lies on some constraint line [G′, G″), then there either (1) will or (2) will not exist
a point G^I distinct from G′ at which [G′, G″) intersects another constraint line instance such that
G < G^I. In case (1), the TET rooted at G is a ray with initial point G and final point G^I. In
case (2), the TET rooted at G is a ray with initial point G and final point G″, followed by a TET
rooted at G″.
Rule II: If G lies off a constraint line, then a slope-one ray rooted at G either (1) will or (2) will
not intersect a constraint line. In case (1), the TET rooted at G is a slope-one ray with initial
point G and final point G′, where G′ is the only point on the ray that lies on a constraint line,
followed by the set of TETs rooted at G′. In case (2), the TET rooted at G is an infinite length,
slope-one ray rooted at G.
Example 1 Consider the TET portion in Fig. 3(c). Point (0, 0) lies off a constraint line,
and follows Rule II, case (1). Thus the TET is ray [(0, 0), (1, 1)) followed by the TET rooted at
(1, 1). Next, (1, 1) lies on horizontal constraint line [(0, 1), (4, 1)). By Rule I case (2), the
second TET ray must be [(1, 1), (4, 1)). Next Rule II case (1) again applies and the third TET
ray must be [G, G′), where G = (4, 1) and G′ = (6, 3). This process is continued forever to yield
an infinite length TET. □
The TET in Fig. 3(c) yields the three performance properties Q1 to Q3 sought in the opening
paragraphs of this paper. The sequence of synchronization points, Q1, corresponds to the sequence
of horizontal and vertical rays arising in a TET. The TET of Fig. 3(c) contains only horizontal
rays, and thus the program never blocks when the producer executes a send. The blocking
duration at each receive, Q2, is the length of each horizontal ray in the TET in Fig. 3(c): 3 time
units for the first receive (because the first horizontal ray is [(1, 1), (4, 1))), and 2 time units for
each subsequent receive. Finally, the duration of concurrent execution between synchronization
points, Q3, is the length of the perpendicular projection of each diagonal ray in the TET on either
axis in Fig. 3(c): 1 time unit from when the program starts until a process first blocks (because
the first diagonal ray is [(0, 0), (1, 1))), and 2 time units after each subsequent receive.
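As an illustration of how Q1 to Q3 fall out of the TET, the following Python sketch (our own, not
part of the original presentation) generates a finite prefix of the constraint lines of Fig. 3(b) and
follows the TET construction rule from (0, 0). For this particular TPG the constraint lines never
intersect one another, so Rule I case (1) never arises and the walk below is a simplification.

    # Hedged sketch: follows the TET construction rule on the TPG of Fig. 3(b).
    # Horizontal constraint lines: y = 2k+1, 0 <= x < 4k+4 (consumer blocks on a receive).
    # Vertical constraint lines:   x = 8+4k, 0 <= y < 2k+1 (producer blocks on a send).
    K = 6  # number of constraint-line instances per family to generate (finite prefix)
    horiz = [(2 * k + 1, 0.0, 4 * k + 4) for k in range(K)]   # (y, x_closed, x_open)
    vert = [(8 + 4 * k, 0.0, 2 * k + 1) for k in range(K)]    # (x, y_closed, y_open)

    def on_horiz(x, y):
        return next(((yl, a, b) for (yl, a, b) in horiz if y == yl and a <= x < b), None)

    def on_vert(x, y):
        return next(((xl, a, b) for (xl, a, b) in vert if x == xl and a <= y < b), None)

    def diag_hit(x, y):
        """Projected length of the slope-one ray from (x, y) to the first constraint hit."""
        best = None
        for (yl, a, b) in horiz:              # hit where y + t == yl and a <= x + t < b
            t = yl - y
            if t > 0 and a <= x + t < b:
                best = t if best is None else min(best, t)
        for (xl, a, b) in vert:               # hit where x + t == xl and a <= y + t < b
            t = xl - x
            if t > 0 and a <= y + t < b:
                best = t if best is None else min(best, t)
        return best

    x, y = 0.0, 0.0
    for _ in range(10):                       # follow a finite prefix of the TET
        h, v = on_horiz(x, y), on_vert(x, y)
        if h:                                 # horizontal ray: consumer (vertical axis) blocked
            print(f"consumer blocks for {h[2] - x} time units at ({x},{y})")
            x = float(h[2])
        elif v:                               # vertical ray: producer (horizontal axis) blocked
            print(f"producer blocks for {v[2] - y} time units at ({x},{y})")
            y = float(v[2])
        else:                                 # slope-one ray: both processes run concurrently
            t = diag_hit(x, y)
            if t is None:
                break
            print(f"both run concurrently for {t} time units")
            x, y = x + t, y + t

Run on this TPG, the sketch reproduces the values above: a concurrent run of 1, a consumer block
of 3, and then alternating concurrent runs of 2 and consumer blocks of 2.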
2.2 Program 2: Non-terminating Mutual Exclusion
Figure 4 contains a different form of synchronization than the last program: two database transactions
update the same record (a serially reusable resource) in mutually exclusive fashion using two
semaphores. (Only one is needed, but we use two to illustrate several concepts.) The equivalent
graph model is shown in Fig. 5. Blocking can only occur on the edges out of a vertex representing
a code segment in which a P semaphore operation is performed. The blocking condition is that a
process cannot complete a P operation until the other process is not in the code segment between
completion of a P operation and a V operation on the same semaphore. Such code segments label
edges in Fig. 5 and correspond to C in A4 (§1).
The corresponding TPG in Fig. 6 shows in heavy lines the set of TETs that could arise from
two initial points. Suppose that process 1 runs for 5.5 time units before process 0 starts; this
corresponds to initial point (0,5.5). The TET consists of a single slope one ray of infinite length
by Rule II, case (2). Therefore the processes forever execute without blocking. Now suppose that
process 0 starts execution 2 time units before process 1; there are two possible TETs, both rooted
at initial point (2,0). Both TETs contain as their first ray ([(2; 0); (3; 1))), representing concurrent
execution by both processes for one time unit. The final point of this ray, (3,1), represents
the program state when both processes simultaneously attempt to perform the P(B) semaphore
operation. There are two possible outcomes, corresponding to which process first completes P(B).
Thus point (3,1) represents a non-deterministic program state. If process 1 first completes P(B),
then process 0 blocks and the second TET ray is vertical: [(3, 1), (3, 3)). The TET has a final
point, which is (3,3), representing a deadlock. The deadlock arises because process 1 then attempts
P(A) and blocks because process 0 already holds semaphore A. The alternate TET with initial
point (2,0) has as its second ray [(3, 1), (6, 1)) (in which process 1 blocks for 3 time units). This is
followed by a third ray, with slope one, initial point (6,1), and infinite length. Thus when process
1 starts 2 time units after process 0, the program either reaches a deadlock or process 1 blocks for
3 time units, after which both processes run forever without blocking. Because UPGs have been
used extensively for analysis of deadlocks [7, 11, 17, 22, 24], deadlocks are not considered further
in this paper.
Figure 4: Non-terminating database transactions using semaphores and a serially reusable
resource (A, B: semaphore = 1; each of process 0 and process 1 loops forever through input, an
update of the shared record bracketed by P and V operations on the two semaphores, and output).
Figure 5: Graph model corresponding to Fig. 4.
Figure 6: Timed progress graph corresponding to Fig. 5.
2.3 Program 3: A Terminating Program
Fig. 7, unlike the previous two programs, contains a terminating program. A producer process
reads a disk file whose first record specifies the number of successive records, and sends the records
to the consumer, which then writes the records to another disk file. (To simplify presentation,
the consumer is non-terminating.) We assume that interprocess communication is implemented
through mutually exclusive access to shared memory using an array with Size elements.
Figure 7: Terminating producer/consumer code with shared memory interprocess communication
(constant Size = 2; shared array A: integer[0..Size]; the producer reads NumRecords from disk
and then reads and sends NumRecords records; the consumer loops forever, copying each record
Y = A[K] out of the shared array and writing Y to disk).
The corresponding graph (Fig. 8) is obtained by unrolling the loops in the two processes.
The corresponding TPG (Fig. 9, for NumRecords=6), unlike the preceding TPGs, is bounded
on four sides, not just two (i.e., the left and bottom by the axes). The two additional bounds,
called the right and top bounding lines, with equations x = 25 and y = 31, correspond to the completion
of the producer and consumer, respectively, at times 25 and 31.
Because Fig. 7 combines the synchronization of the preceding two examples, its TPG (Fig. 9)
combines the two forms of constraint lines in the preceding TPG figures. The TETs shown
represent all possible executions when both producer and consumer start simultaneously. Note
that all TETs are of finite length and have a final point (at point (25,31)), because the producer
terminates after sending six records and the consumer blocks forever at P(Full) after receiving
six records.
Figure 8: Graph model corresponding to Fig. 7. Thick circles denote initial and final vertices.
The blocking conditions are: (BP1) the n-th (for n <= Size) transition never blocks; the
n-th (for Size < n < NumRecords) transition blocks if the consumer has not yet executed (n − Size)
transitions of vertex "V(Empty)". (BC1) the n-th transition blocks if the producer has not yet
executed the n-th transition out of vertex "V(Full)".
We learn from the TPG that the first synchronization point encountered is when the consumer
performs P(Full) (point (1,1)). The next synchronization point occurs when the producer and
consumer simultaneously attempt, respectively, the third and second access to the shared buffer
pool (point (11,7)). At this time a race condition occurs. If the producer obtains semaphore ME
first (and thus the TET contains ray [(11, 7), (12.5, 7))), then for the remainder of execution
the consumer never blocks and the producer will repeatedly synchronize briefly at its P(Empty)
operation. On the other hand, if the consumer obtains semaphore ME first (and thus the TET
contains ray [(11, 7), (11, 8))), then the processes will again encounter a race condition followed
by the same two possible outcomes. All TETs contain as part of their final ray [(25, 25), (25, 31)],
which represents the consumer removing the final buffer after the producer has terminated.
Figure 9: Timed progress graph corresponding to Fig. 8 with NumRecords=6.
2.4 Problem Statement
The preceding examples demonstrate that quantities Q1 to Q3 from §1 can be computed by solving
problem P1 below. P2 is added to illustrate state reachability analysis with TPGs.
P1: Given a TPG, find the set of all possible TETs rooted at some initial point.
P2: Given a TPG, determine if there exist any process starting times that correspond to a point
leading to exactly one TET in which no process ever blocks (e.g., the TET rooted at (0,5.5)
in Fig. 6). Furthermore, if such times exist, then output an example.
3 Construction of a TET
Before presenting algorithms solving P1 and P2, we formally define a TPG and the rule to construct
all possible TETs. Let R and Z denote, respectively, the non-negative reals and integers.
3.1 Formal Definition of TPG
Definition. A TPG for a graph representation of a program is an ordered pair ⟨Λ, G_C⟩, where
Λ is a non-empty set of constraint line segments in the positive R² plane, and G_C is an initial point
in R² representing the earliest instance at which both processes have started execution.
For example, consider the TPG in Fig. 3(b). Let Λ_V denote the vertical lines:
Λ_V = {[(8 + 4k, 0), (8 + 4k, 2k + 1)) : k ∈ Z, k ≥ 0}. The horizontal lines are
Λ_H = {[(0, 2k + 1), (4k + 4, 2k + 1)) : k ∈ Z, k ≥ 0}. The
TPG is ⟨Λ_V ∪ Λ_H, (0, 0)⟩. For terminating programs, Λ also contains the right and top bounding
lines (e.g., Fig. 9).
3.2 Transition Function f
To formalize Rules I and II from §2.1 for a TPG ⟨Λ, G_C⟩, we define below two functions, f_o and
f_d, respectively. (The subscript "o" in f_o denotes a ray that is orthogonal to the axes, and the "d" in
f_d denotes a ray that is diagonal [with slope one].) Defining f_o and f_d requires some notation. For
any continuous path γ and any point G on γ, we write G ∈ γ. For any directed, continuous path
γ, we write γ.i (respectively, γ.f) to denote the initial (final) point. We assume that γ.i ≠ γ.f.
For any two continuous paths γ and γ′ in R², γ ∩ γ′ = {G : G ∈ γ and G ∈ γ′}. The relation
G ∈ L means that G lies on line or ray L. Line or ray [G, (∞, ∞)) has slope one and infinite length.
f_o(G) = Ḡ, where G lies on a constraint line instance L and Ḡ is the smallest of L.f and those points
greater than G at which L intersects another constraint line instance. Formally,
    f_o(G) = min({L.f} ∪ {G′ : G′ ∈ L ∩ L′ for some constraint line instance L′ ≠ L and G′ > G}).   (1)
f_d(G) = Ḡ, where G lies off a constraint line instance and Ḡ is the smallest of (∞, ∞) and the set of
points at which a slope-one ray rooted at G intersects a constraint line instance. Formally,
    f_d(G) = min({(∞, ∞)} ∪ {G′ : G′ ∈ [G, (∞, ∞)) ∩ L for some constraint line instance L}).   (2)
Function f: f maps each point G to a (possibly empty) set of successors f(G) ⊆ R².
A point is nondeterministic iff the transition out of the point is not unique (e.g., (3,1) in Fig. 6),
for example representing states in which both processes simultaneously perform a P operation on
the same semaphore. All other points are deterministic. A point is dead iff there is no transition
out of the point (e.g., (3,3) in Fig. 6), representing states in which both processes are blocked. A
dead point may represent either a program deadlock or program termination.
Definition. A point G is nondeterministic iff ||f(G)|| > 1 and is dead iff ||f(G)|| = 0.
3.3 Constructing a TET
A TET is a point or a directed continuous path consisting of a sequence of orthogonal (horizontal
or vertical) and diagonal rays. There may be multiple TETs rooted at the same initial point. The
following definition of TET also provides a rule to construct a TET that formalizes the recursion
in Rules I and II of §2.
Definition. A timed execution trajectory (TET) of a TPG ⟨Λ, G_0⟩ rooted at any point G_0 ∈ R² is
either (1) a point G_0 or (2) a directed, continuous path rooted at G_0. Case (1) holds iff f(G_0) = ∅.
Case (2) holds iff the path is a ray sequence [G_0, G_1), [G_1, G_2), ..., with G_{i+1} ∈ f(G_i); the
number of rays n may be finite or infinite. In case (2), n is finite iff f(G_n) = ∅.
Example 2 In Fig. 3(c), the first line segment in the TET rooted at initial point G_0 = (0, 0) is
[(0, 0), (1, 1)), because f((0, 0)) = {(1, 1)}. This is because (0,0) does
not lie on a constraint line instance and a slope-one ray rooted at point (0,0) first intersects a
constraint line instance at point (1,1). Next, f((1, 1)) = {(4, 1)}. Thus, the second
TET line segment is [(1, 1), (4, 1)). Continuing in this manner yields [(0, 0), (1, 1)),
[(1, 1), (4, 1)), [(4, 1), (6, 3)), ... as the only possible TET.
As a second example, consider G_0 = (2, 0) in Fig. 6. Then f(G_0) = {(3, 1)}.
However, (3,1) is nondeterministic; thus f(G_1) = f((3, 1)) = {(3, 3), (6, 1)}. Continuing like this yields two
possible TETs: [(2, 0), (3, 1)), [(3, 1), (3, 3)) and [(2, 0), (3, 1)), [(3, 1), (6, 1)), [(6, 1), (∞, ∞)). □
4 Case 1: Terminating Programs
This section contains algorithms to solve P1 and P2 from §2.4 for terminating programs.
4.1 Computing Function f
Essential to solving P1 and P2 is a method of computing the transition function f for a TPG ⟨Λ, G_C⟩.
By definition, computation of f(G) for any point G in R² requires computing either f_o(G), if G
lies on a line in Λ, or f_d(G) otherwise. Computation of f_o and f_d is discussed below.
Computation of f_o(G): Computation of f_o(G) for a point G on a constraint line L is straightforward using
relation (1) of §3.2. (1) requires computation of the set of all points of intersection of line [L.i, L.f)
with other lines in Λ. We precompute {G′ : G′ ∈ L ∩ L′ for some L′ ∈ Λ, L′ ≠ L},
which then reduces each evaluation of f_o(G) to evaluating the minimization
function in (1). Precomputing this set is equivalent to a well-known computational geometric
problem: computing all intersections of a collection of horizontal and vertical lines (e.g., see [21,
Ch. 27]). Thus computation of f_o is not considered further.
Computation of f_d(G): Recall that f_d(G) is the smallest of (∞, ∞) and each point at which
a slope-one ray rooted at G intersects a constraint line instance. The well-known problem of ray
shooting with line segments [6, pp. 234-247] can be used to find f_d(G): Given a point, a direction,
and a finite set of line segments in a plane, find the first line segment intersected by a ray rooted
at the point with the given direction. The typical ray-shooting solution first stores the line
segments in a data structure, so that subsequent queries consisting of a point and a direction can
be answered in sublinear time. Let ShootRay(point P, direction D, set of line segments L) denote
a ray-shooting algorithm whose parameters are a point P; a direction D (either + or −, for a ray
directed away from or toward the axes, respectively); and a set L containing a finite number of
line segments. The return value is the point of intersection or (∞, ∞) if the ray does not intersect
a line segment. Formally, f_d(G) = ShootRay(G, +, Λ).
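For illustration, a brute-force ShootRay over a finite set of axis-parallel segments can be written
directly from this specification. This is our own sketch with hypothetical names, not the paper's
implementation; a real implementation would use a ray-shooting data structure to answer queries
in sublinear time, as noted above.

    INF = (float("inf"), float("inf"))

    def shoot_ray(p, direction, segments):
        """Brute-force ShootRay: first segment hit by a slope-one ray rooted at p.
        direction = +1 shoots away from the axes, -1 toward them.  Each segment is
        ((x1, y1), (x2, y2)) with x1 <= x2 and y1 <= y2, closed at (x1, y1), open at (x2, y2)."""
        px, py = p
        best_t, best_pt = None, INF
        for (x1, y1), (x2, y2) in segments:
            if y1 == y2:                                   # horizontal segment
                t = (y1 - py) * direction
                hit = t > 0 and x1 <= px + t * direction < x2
            else:                                          # vertical segment
                t = (x1 - px) * direction
                hit = t > 0 and y1 <= py + t * direction < y2
            if hit and (best_t is None or t < best_t):
                best_t = t
                best_pt = (px + t * direction, py + t * direction)
        return best_pt

    # f_d(G) is then simply a ray shot away from the axes over all constraint lines:
    def f_d(g, constraint_lines):
        return shoot_ray(g, +1, constraint_lines)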
4.2 Problem P1: Finding All Possible TETs
Problem P1: Given a TPG ⟨Λ, G⟩, output a representation of all TETs rooted at G.
The definition of TET along with the aforementioned methods to compute f_o and f_d solve P1.
However, there is one technical problem: f(G) is a set that contains either one (for deterministic G)
or two (for nondeterministic G) points. If f(G) contains two points, then there are at least
two possible TETs rooted at G. We say "at least two" because if one of the TETs
rooted at G contains a point distinct from G that is nondeterministic, there will be more than
two TETs rooted at G. Therefore a solution to P1 requires calculating a set of TETs.
The proposed algorithm constructs a directed graph. If the set of all possible TETs rooted
at G is simply the point G, then the graph contains one vertex, labeled G. Otherwise the graph
contains one vertex for each ray end point in all possible TETs rooted at G. The graph is colored,
with green representing unexplored vertices and the remaining vertices colored red.
FindAllTETs (for terminating programs): Initialize the graph to contain one vertex representing
G, colored green. The following step is repeated until there are no more green vertices
in the graph: select a green vertex G′ in the graph; color G′ red; for each vertex G″ ∈ f(G′), add
an arc from G′ to G″ and color G″ green.
Example 3 For Fig. 9, the graph consists of a vertex for point (0,0) with an outgoing arc to a
vertex for point (1,1), an arc from (1,1) to a vertex for (5,1), an arc from (5,1) to (11,7), two
outgoing arcs from (11,7) to (12.5,7) and (11,8), and so on, with all graph paths leading to the
vertex for point (25,31). Only the vertex for (25,31) has no outgoing arc. 2
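A minimal sketch of FindAllTETs for the terminating case is given below (our own illustration,
not the paper's code); the transition function f is assumed to be supplied as a Python callable
returning the set of successor points of a point, for example built from the f_o and f_d machinery
of §4.1.

    def find_all_tets(g0, f):
        """Build the graph of all TETs rooted at g0 for a terminating program.
        f(G) must return the (possibly empty) set of successor points of G."""
        graph = {g0: set()}          # vertex -> set of successor vertices
        green = [g0]                 # unexplored (green) vertices
        while green:
            g = green.pop()          # color g red
            for succ in f(g):
                if succ not in graph:
                    graph[succ] = set()
                    green.append(succ)
                graph[g].add(succ)
        return graph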
4.3 Problem P2: Deciding Existence of Non-blocking TETs
We restate P2 using the following definition: a TET is non-blocking if the TET contains no points
representing a state in which a process is blocked. (Geometrically, a diagonal ray rooted at a point
that leads to a non-blocking TET never intersects a constraint line in Λ, and its final point lies on
the top or right bounding line in Λ.) Problem P2 is equivalent to: Given a constraint line set Λ
from a TPG, determine if there exists an initial point on the x or y axis that leads to exactly one
TET such that the TET is non-blocking. If there is, output the TET.
Algorithm FindFreePoints: Let S be the set containing the lines in Λ together with
left := [(0, 0), (0, ∞)) and bottom := [(0, 0), (∞, 0)). For each line L in Λ, set G to
ShootRay(L.f, −, S) and G′ to ShootRay(L.f, +, S). If there exists a line L in Λ such
that G lies on left or bottom and G′ lies on the top or right bounding line, then
return G, G′; otherwise report "no non-blocking TETs exist."
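As an illustration only (not the paper's data structures), FindFreePoints can be rendered
brute-force with the shoot_ray sketch above. Here constraint_lines is assumed to exclude the
bounding lines, whose role is played by the hypothetical parameters x_max and y_max; the function
returns the axis starting point together with the constraint-line end point its diagonal slips past.

    def find_free_points(constraint_lines, x_max, y_max):
        """Sketch of FindFreePoints: look for a start point on an axis whose slope-one
        diagonal passes the open end of some constraint line, touches no constraint
        line, and runs on to the top/right bounding lines (x = x_max, y = y_max)."""
        left = ((0.0, 0.0), (0.0, float(y_max)))
        bottom = ((0.0, 0.0), (float(x_max), 0.0))
        lines = list(constraint_lines)
        for (_, open_end) in lines:                       # open_end plays the role of L.f
            g_axis = shoot_ray(open_end, -1, [left, bottom])
            g_con = shoot_ray(open_end, -1, lines)        # nearest constraint hit going backward
            g_fwd = shoot_ray(open_end, +1, lines)        # constraint hit going forward
            # The axis must be reached before any constraint line (larger x means earlier hit
            # on a backward diagonal), and the forward diagonal must clear all constraints.
            axis_first = g_axis != INF and (g_con == INF or g_axis[0] > g_con[0])
            if axis_first and g_fwd == INF:
                return g_axis, open_end                   # a non-blocking TET starts at g_axis
        return None                                       # no non-blocking TET exists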
Figure 10: TPG containing an infinite number of TETs.
5 Case 2: Non-terminating Programs
The preceding algorithms cannot be used for non-terminating programs for two reasons. First,
a TPG for terminating programs is a rectangle with four finite length sides, while a TPG for
non-terminating programs extends to infinity in the Cartesian quadrant in which both axes are
positive. The unbounded nature of the TPG means that a TET will have infinite length, unless the
execution reaches a deadlock. Second, there may be an infinite set of TETs rooted at a point in
the plane (Fig. 10). This occurs when the program timings are such that in any state represented
by a point on some TET, the program will eventually reach another race condition.
Non-terminating programs whose synchronization is only for mutually exclusive resource access
(e.g., §2.2) permit a simple characterization, because they exhibit periodic behavior. The grey
lines in Fig. 6 partition the plane into a set of equal-size rectangles, in which the location of the
rectangle sides in the plane corresponds to initiation of new infinite loop iterations in a process.
Each rectangle is called a quadrant. Formally defining a quadrant requires the notion of process
cycle time. Let r ∈ {0, 1} denote one of the two processes, and r̄ denote the other process
(i.e., r̄ = 1 if r = 0, and r̄ = 0 if r = 1). The cycle time of process r, denoted φ_r, is the time
required for process r to pass through each vertex in the infinite loop in its graph representation
once, ignoring the time spent blocked. Thus in Fig. 5, φ_0 = φ_1 = 11.
A quadrant is a region {(G_x, G_y) : kφ_0 ≤ G_x ≤ (k+1)φ_0 and lφ_1 ≤ G_y ≤ (l+1)φ_1} for some
non-negative integers k and l. The initial quadrant is the quadrant containing the origin.
Example 4 The initial quadrant in Fig. 6 has opposite vertices (0,0) and (11,11). Each quadrant
has opposite vertices (kφ_0, lφ_1) and ((k+1)φ_0, (l+1)φ_1) for some non-negative integers k and l. □
The significance of quadrants is that constraint lines in all quadrants are congruent:
Definition. Any two points G and G′ are congruent, denoted G ≡ G′, iff G_x mod φ_0 = G′_x mod φ_0
and G_y mod φ_1 = G′_y mod φ_1. Two line segments are congruent if their end points are congruent.
The fact that the placement of constraint lines in all quadrants is congruent has two implications.
First, it is sufficient to analyze just one quadrant in the plane to derive the set of all
possible TETs, and this modification of the algorithms from the preceding section follows. Second,
a TET containing only deterministic points will consist of a transient portion followed by an
infinite number of repetitions of congruent subtrajectories, representing periodic behavior. (See
Theorem 1 in [3].) A repeated subtrajectory is called a limit cycle execution trajectory (LCET).
Example 5 In Fig. 11, φ_0 = 5 and φ_1 = 3. The TET subpath rooted at point (3, 2) in Fig. 11
consists of an infinite number of repetitions of the following LCET: a horizontal ray of length
2 and a diagonal ray whose projected length on either axis is 3. The transient TET portion is
the subpath with initial point (0,0) and final point (3,2). All instances of the LCET in the TET
are congruent. For example, two instances are [(3, 2), (5, 2)), [(5, 2), (8, 5)) and [(13, 8), (15, 8)),
[(15, 8), (18, 11)). The two are congruent because the first and second rays of the first subtrajectory
are congruent to the first and second rays of the second subtrajectory, respectively. □
Figure 11: Timed progress graph of a non-terminating program that uses one semaphore.
LCETs motivate one problem in addition to P1 and P2 (§2.4), solved in this section:
P3: Find the set of all possible LCETs in a TPG that are reachable from some initial point.
Special Case TPG Definition: A TPG for a non-terminating program synchronizing only for
mutually exclusive resource access is an ordered triple ⟨Φ, Λ, G_C⟩ specifying process cycle times,
constraint lines in the initial quadrant only, and a point representing the initial program state:
Φ: an ordered pair of cycle times φ_0 and φ_1 satisfying φ_0, φ_1 > 0.
Λ: a set of constraint line generators, or line segments [W, X) that lie in the initial quadrant, each
corresponding to one edge in the graph model labeled by a non-empty condition. The initial
and final points of generator [W, X) are W and X, respectively. The instances of a generator
are defined to be all lines in the R² plane congruent to the generator; formally, the set of all instances
of generators in Λ is {[W + (kφ_0, lφ_1), X + (kφ_0, lφ_1)) : [W, X) ∈ Λ and k, l ∈ Z, k, l ≥ 0}.
G_C: an initial point satisfying G_C = (x, 0) with 0 ≤ x < φ_0, or G_C = (0, y) with 0 ≤ y < φ_1; i.e., G_C lies on
either the x or y axis, within one cycle time of the origin.
The TET construction rules (I and II) and the definition of transition function f given earlier
in §2.1 and §3.2, respectively, apply unaltered to a TPG ⟨Φ, Λ, G_C⟩.
Example 6 Figure 11 illustrates a finite portion of the TPG ⟨Φ, Λ, G_C⟩ of a non-terminating
program with cycle times φ_0 = 5 and φ_1 = 3 and initial point G_C = (0, 0). □
A practical consideration: We henceforth assume that the execution time of each vertex in
a graph representing a program (e.g., the numbers in square brackets in Figs. 2, 5, and 8) is
an integer rather than a real number. Otherwise the computational geometric algorithms to be
presented will not work correctly with finite precision arithmetic (e.g., computation of the mod
operation is subject to roundoff error). The assumption of integer delays is not unreasonable in
practice for software performance evaluation. For example, in measurements from a computer
with a microsecond period clock, all measured times are rational numbers of the form x × 10^{-6}
seconds for some integer x. Therefore scaling all measurements by the inverse of the clock period
(e.g., by 10^6) yields the integer quantities required by the proposed algorithms.
5.1 Modified Algorithms to Compute f_o and f_d
Computing f_o(G): Recall from §3.2 that for a point G on a constraint line L, f_o(G) is the
smallest point G′ > G in the set containing the final point of L and the points of intersection of
L with other constraint lines. Because Λ in ⟨Φ, Λ, G_C⟩ contains only constraint lines in the initial
quadrant, we map G to a congruent point in the initial quadrant (i.e., mod(G)), then compute
f_o(mod(G)) as described in §4.1, and finally map each G′ ∈ f_o(mod(G)) back to the quadrant
containing G. Formally, we compute
f_o(G) = {G′ + (⌊G_x/φ_0⌋φ_0, ⌊G_y/φ_1⌋φ_1) : G′ ∈ f_o(mod(G))}.
Computing f_d(G): The method to compute f_d(G) given in §4.1 must be modified because a
non-terminating TPG has an infinite number of constraint lines in the plane. Therefore, as we
did with f_o(G), we compute f_d using only the initial quadrant. We use the observation that any
diagonal ray γ in a TET can be partitioned into collinear rays γ_1, γ_2, ..., in
which the initial points of γ_2, γ_3, ... lie on a quadrant boundary. This fact provides an algorithm
to compute f_d: for each ray γ_k, map γ_k.i to the initial quadrant, compute
G′ = ShootRay(mod(γ_k.i), +, Λ ∪ {top and right initial quadrant edges}), and translate G′ back to γ_k's
quadrant to obtain either f_d(G) or γ_{k+1}.i. See [1] for the complete algorithm.
Figure 12: Illustration of using ShootRay, but only within the initial quadrant.
Example 7 In Fig. 11, f_d((5, 2)) = (8, 5). Fig. 12 shows the two ShootRay operations required
to compute f_d(G) because ray [(5, 2), (8, 5)) lies in exactly two quadrants. Here, γ_1 is rooted at
(5, 2) and γ_2 at (6, 3). □
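To make the quadrant-by-quadrant computation concrete, the following Python sketch (our own
illustration, reusing the shoot_ray helper and INF sketched earlier; phi0 and phi1 are the cycle
times of the processes on the x and y axes) follows a slope-one ray across quadrant boundaries
using only the generators of the initial quadrant. It is a simplification of the complete algorithm
in [1] and assumes integer-valued coordinates and cycle times, as assumed in the text.

    def f_d_periodic(g, generators, phi0, phi1, max_quadrants=1000):
        """Sketch of f_d for a non-terminating TPG: ray-shoot only inside the initial
        quadrant, translating the ray between quadrants whenever it crosses an edge."""
        x, y = g
        for _ in range(max_quadrants):
            qx, qy = int(x // phi0), int(y // phi1)          # quadrant indices of the current point
            mx, my = x - qx * phi0, y - qy * phi1            # congruent point in the initial quadrant
            # Quadrant edges, extended slightly past the corner so a corner exit is still detected.
            top = ((0.0, float(phi1)), (phi0 + 1.0, float(phi1)))
            right = ((float(phi0), 0.0), (float(phi0), phi1 + 1.0))
            hit = shoot_ray((mx, my), +1, list(generators) + [top, right])
            if hit == INF:
                return INF                                    # no constraint hit in the explored range
            hx, hy = hit[0] + qx * phi0, hit[1] + qy * phi1   # translate the hit back
            if hit[0] == phi0 or hit[1] == phi1:              # crossed into the next quadrant: continue
                x, y = hx, hy
            else:
                return (hx, hy)                               # hit a generator instance: this is f_d(g)
        return INF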
5.2 P1: Finding All Possible TETs
Recall problem P1: Given a TPG ⟨Φ, Λ, G⟩, output a representation of all TETs rooted at G.
Before stating a solution, two implications of non-terminating programs must be considered: A
TET may have infinite length, and there may be an infinite number of TETs (recall Fig. 10).
For the former case (infinite length TET), the TET must consist of a transient subtrajectory
followed by an infinite number of repetitions of an LCET. Therefore our algorithm will output a
representation of the transient trajectory and the first LCET. For the latter case (infinite number
of TETs), we choose an integer value maxNPaths such that if our algorithm finds more than
maxNPaths possible TETs, it assumes that there are an infinite number of TETs and terminates
without further exploration. We now generalize algorithm FindAllTETs (x4.2) to solve P1 for
non-terminating programs.
Algorithm FindAllTETs (for non-terminating programs):
1. Initialize a graph to contain one green vertex labeled by G. Set nPaths = 1.
2. Set G′ to a green vertex. Color G′ red. Increment nPaths by ||f(G′)|| − 1.
3. For each point G″ in f(G′), create a vertex labeled by G″, and add a directed edge from G′
to G″. If G″ is congruent to some point P labeling a vertex on the graph path from G to
G″, then add an arc from G″ to P and color vertex G″ red; otherwise color it green.
4. If the graph contains a green vertex and nPaths ≤ maxNPaths, then go to step 2. Otherwise
output each graph path rooted at G. Label paths containing a cycle as a transient followed
by an LCET; label the remaining paths as a transient only.
Example 8 The graph constructed by FindAllTETs for the TPG of Fig. 11 consists of an edge
from the graph vertex labeled (0,0) to the vertex labeled (2,2), an edge from (2,2) to (5,2), from
(5,2) to (8,5), from (8,5) to (10,5), and from (10,5) back to (5,2). The fact that the graph contains only one path with
one cycle means that if both processes simultaneously start execution (i.e., the program starts
in the state represented by (0,0)), then it must reach a periodic state sequence in which process 0
blocks for two time units and then both processes run concurrently for three time units. 2
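A possible rendering of FindAllTETs for the non-terminating case is sketched below (our own
illustration, not the paper's code; f(G) is assumed to return the set of successor points, coordinates
and cycle times are assumed to be integers, and max_paths plays the role of maxNPaths).

    def find_all_tets_nonterminating(g0, f, phi0, phi1, max_paths=100):
        """Sketch: depth-first exploration that closes a path as soon as it reaches a
        point congruent (componentwise mod the cycle times) to an earlier point on the
        same path, labeling the result as a transient followed by an LCET."""
        def congruent(a, b):
            return a[0] % phi0 == b[0] % phi0 and a[1] % phi1 == b[1] % phi1

        results, stack = [], [[g0]]
        while stack:
            if len(results) + len(stack) > max_paths:       # too many TETs: give up, as in step 4
                break
            path = stack.pop()
            succs = f(path[-1])
            if not succs:
                results.append(("transient only", path))    # dead point: deadlock or termination
                continue
            for s in succs:
                if any(congruent(s, p) for p in path):
                    results.append(("transient + LCET", path + [s]))
                else:
                    stack.append(path + [s])
        return results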
5.3 Problem P2: Deciding Existence of Non-blocking TETs
Recall problem P2: Given Φ and Λ from a TPG, determine if there exists an initial point on the
x or y axis that leads to exactly one TET such that the TET is non-blocking. If there is, output
the TET.
The absence of program termination makes only one minor difference from the algorithm given earlier
(FindFreePoints in §4.3): change "lies on the top or right bounding line" to "equals (∞, ∞)", and
"return G, G′" to "return G, (∞, ∞)".
5.4 Problem P3: Find All Possible LCETs
We restate P3 by categorizing LCETs as blocking or non-blocking. A blocking LCET contains
a point representing a state in which some process is blocked. Geometrically, a blocking LCET
contains a horizontal or vertical ray, and a non-blocking LCET consists of a single diagonal ray.
Two TETs are homotopic if they are continuously transformable into each other avoiding the constraint lines
(term due to Lipski and Papadimitriou [11] for paths in UPGs). For example, the TETs rooted at
a range of initial points on the y axis around (0, 5.5) in Fig. 6 all consist of one diagonal ray and are homotopic.
Problem P3: Given Φ and Λ from a TPG, output one element of each equivalence class of
blocking LCETs, and one element of each set of homotopic LCETs.
Algorithm FindNonBlockingLCETs (finds all non-blocking LCETs): Initialize S as in Find-
FreePoints (§4.3). For each L ∈ Λ do: if ShootRay(L.f, +, S) does not equal (∞, ∞), then output "no
non-blocking LCETs"; otherwise, for each point G congruent to L.f,
output "[G, G + (φ_0, φ_1)) is a non-blocking LCET."
Consider next finding all blocking LCETs. The key insight is that a set of TETs that contains
rays intersecting a given constraint line instance, denoted L, also contains a common point that
lies on L: either a dead point, in which case the point is the final point of all the TETs, or L.f. In
the latter case, this set must either have the same LCET or reach the same dead state that lies on
another constraint line instance. If they reach an LCET, it is therefore only necessary to consider
the TET rooted at L.f and determine if it contains another point congruent to L.f. Lemma 1
makes this precise.
Lemma 1 The set of all (possibly unreachable) blocking LCETs can be found by finding all L ∈ Λ
and all i ≥ 1 such that f^i(L.f) is congruent to L.f.
Proof: See Theorems 3 and 4 in [2]. □
Let Ψ denote the set of LCETs satisfying Lemma 1. We can determine which LCETs in Ψ are
reachable by calculating, for each line L in an LCET in Ψ, ShootRay(L.f, −, S), where S is the
same line set as in algorithm FindFreePoints. If the return value of ShootRay lies on the bottom
or left edge of the initial quadrant, then the LCET containing L is reachable. Therefore we propose
as a solution to P3 the exhaustive testing of all L and i, evaluating f^i(L.f) as described in §5.1.
6 Relation of TPGs to Petri Nets
Figures 13(a) and 13(b) are Petri net [16] representations of Figs. 1 and 4, respectively. The
producer/consumer program in Figure 13(a) is in the Petri Net class called Deterministic Systems
of Synchronizing Processes (DSSP) [18]; the shaded place denotes a buffer shared by the two
processes. The data-base transaction program (Fig. 13(b)), however, is not in class DSSP, because
it violates the rule of a place representing a buffer being the input to at most one process. In
Fig. 13(b) the places representing semaphores as buffers are inputs to both processes. Fig. 13(b)
is a simple (or asymmetric choice) net (i.e., all arc weights are one and if two places share an
output transition then the set of output transitions of one place is either equal to or a subset of
the output transitions of the other place [16, p. 554]). Therefore the class of programs meeting
assumptions A1 through A6 is a DSSP restricted to two linear processes [23] but generalized to
omit the private-buffer assumption (i.e., Definition 2.7(ii) in [18]).
Figure 13: Petri net representations of Figs. 1 and 4.
Magott [12] gives an O(N) algorithm to compute minimum cycle time (MCT), or the minimum
time required for a consistent Petri net to return to its initial marking, given deterministic firing
times for nets consisting of a set of N cyclic processes that mutually exclusively share a single
resource, and shows that finding MCT in most nets with more complex resource sharing is NP-
hard. Also proved are complexity results for systems of processes with communication by buffers.
Finally, Holliday and Vernon [9] use Petri nets with frequency expressions (i.e., probabilities)
to resolve deterministically which transition fires when a token enables two or more transitions
simultaneously, and analyze a program similar to Fig. 4.
Our TPG solution provides a fourth analysis method of one Petri net class (in addition to
coverability trees, matrix equations, and decomposition techniques) for certain behavioral properties
(i.e., P1 - enumerate all possible transition firing sequences, given an initial marking) and certain
structural properties (i.e., P2 and P3).
7 Conclusions
We have analyzed two building blocks of interprocess synchronization: mutual exclusion and
asynchronous communication with a finite number of buffers. This paper demonstrates that
properties about the set of all possible executions of certain parallel programs can be exactly
analyzed by solving an equivalent computational geometric problem.
The analysis is limited to two processes. Extension to d processes requires ray shooting in a
d-dimensional Cartesian graph with (d − 1)-dimensional hyperplanes in a non-simple arrangement
that are bounded in one dimension. To our knowledge, this is an open computational geometric
problem, whose solution would allow solution of problems P1 and P2 for an arbitrary number of
processes. The closest problem solved in d dimensions is ray shooting with unbounded hyperplanes
that form a simple arrangement (e.g., see [14]). One limitation in d dimensions is that a progress
graph represents contention by multiple processes for a resource as a nondeterministic choice of
which process gets the resource, and does not represent different queueing disciplines.
The broader implication of this work is two open questions: Can other program visualizations
be mapped to geometric problems? Can geometry be used to analyze other Petri net classes?
Acknowledgments
D. Allison, L. Heath, D. Kafura, A. Mathur, and S. Tripathi and anonymous referees made suggestions
that improved the manuscript. C. Shaffer helped locate computational geometric algorithms.
--R
Computational geometric performance analysis of limit cycles in timed transition systems.
Geometric performance analysis of semaphore programs.
Geometric performance analysis of periodic behavior.
Visual analysis of parallel and distributed programs in the time
The influence of random delays on parallel execution times.
Intersection and Decomposition Algorithms for Planar Arrangements.
The geometry of semaphore programs.
System deadlocks.
A generalized timed Petri net model for performance analysis.
Visualizing performance debugging.
A fast algorithm for testing for safety and detecting deadlocks in locked transaction systems.
Performance evaluation of systems of cyclic processes with mutual exclusion using Petri nets.
JED: Just an event display.
On vertical ray shooting in hyperplanes.
Petri nets: Properties
Concurrency control by locking.
Deterministic buffer synchronization of sequential systems.
A declarative approach to visualizing concurrent computations.
Operations Research - Methods and Problems
Algorithms in C.
An optimal algorithm for testing for safety and detecting deadlocks in locked transaction system.
Deterministic systems of sequential processes: Theory and tools.
Locking policies: Safety and freedom from deadlock.
--TR | ray shooting;parallel computation;reachability analysis;computational geometry;visualization;mutual exclusion;performance evaluation;timed progress graphs;petri nets |
629425 | Generalized Algorithm for Parallel Sorting on Product Networks. | AbstractWe generalize the well-known odd-even merge sorting algorithm, originally due to Batcher [2], and show how this generalized algorithm can be applied to sorting on product networks.If G is an arbitrary factor graph with N nodes, its r-dimensional product contains $N^r$ nodes. Our algorithm sorts $N^r$ keys stored in the r-dimensional product of G in $O(r^2F(N))$ time, where F(N) depends on G. We show that, for any factor graph G, F(N) is, at most, O(N), establishing an upper bound of $O(r^2\,N)$ for the time complexity of sorting $N^r$ keys on any product network.For product networks with bounded r (e.g., for grids), this leads to the asymptotic complexity of O(N) to sort $N^r$ keys, which is optimal for several instances of product networks. There are factor graphs for which $F(N)=O({\rm log}^2\,N),$ which leads to the asymptotic running time of $O({\rm log}^2\,N)$ to sort $N^r$ keys. For networks with bounded N (e.g., in the hypercube the asymptotic complexity becomes $O(r^2).$We show how to apply the algorithm to several cases of well-known product networks, as well as others introduced recently. We compare the performance of our algorithm to well-known algorithms developed specifically for these networks, as well as others. The result of these comparisons led us to conjecture that the proposed algorithm is probably the best deterministic algorithm that can be found in terms of the low asymptotic complexity with a small constant. | Introduction
Recently, there has been an increasing interest in product networks in the literature. This is
partly due to the elegant mathematical structure of product networks and partly due to the
fact that several well-known networks, such as hypercubes, grids, and tori, are instances of the
family of product networks. Many other instances of product networks have been proposed
recently, such as products of de Bruijn networks [9, 28], products of Petersen graphs [25],
and mesh-connected trees [8] (which are products of complete binary trees). As a general
class, routing properties of product networks have been studied in [4, 10]. Topological and
embedding properties of product networks have been analyzed in [9].
These papers aside, there has been no general study of algorithms for this important class
of networks. This paper makes an attempt towards filling this gap by presenting a generalized
sorting algorithm for product networks. A first version of the algorithm presented here, as
well as other generalized algorithms for several different problems, has been proposed in [11].
We expect that other researchers will eventually develop a variety of additional algorithms
for product networks.
In [2], Batcher presented two efficient sorting networks. Algorithms derived from these
networks have been presented for a number of different parallel architectures, like the shuffle-
exchange network [30], the grid [22, 31], the cube-connected cycles [27], and the mesh of trees
[24].
One of Batcher's sorting networks has, as main components, subnetworks that sort bitonic
sequences. A bitonic sequence is the concatenation of a non-decreasing sequence of keys with
a non-increasing sequence of keys, or the rotation of such a sequence. Sorting algorithms
based on this method are generally called "bitonic sorters." Several papers have been devoted
to generalizing bitonic sorters [3, 17, 21, 23].
The main components of the other sorting network proposed by Batcher in [2] are sub-networks
that merge two sorted sequences into a single sorted sequence. He called these
"odd-even merging" networks. Several papers generalized this network to merging of k
sorted sequences, where k ? 2. These are generally called k-way merging networks. Examples
are Green [12] who constructed a network based on 4-merge, and Drysdale and Young
Tseng and Lee [32], Parker and Parberry [26], Lisza and Batcher [20],
and Lee and Batcher [16] who constructed networks based on multiway merging.
Similarly, other algorithms based on a multiway-merge concept have been presented the
most commonly known being Leighton's Columnsort algorithm [19]. Initially, the objective
of this algorithm was to show the existence of bounded-degree O(n)-node networks that can sort
n keys in O(log n) time. In this network the permutations at each phase are hard-wired
and the sortings are done with AKS networks, which limits its applicability for practical
purposes. However, Aggarwal and Huang [1] showed that it is possible to use Columnsort
as a basis and apply it recursively. Parker and Parberry's network cited above is also based
on a modification of Columnsort. These algorithms behave nicely when the number of keys
is large compared with the number of processors.
In this paper we develop another multiway-merge algorithm that merges several sorted
sequences into a single sorted sequence of keys. From this multiway-merge operation we
derive a sorting algorithm, and we show how to use this approach to obtain an efficient sorting
algorithm for any homogeneous product network. In its basic spirit, our multiway-merge
algorithm is somehow similar to a recent version of Columnsort [18, page 261] (although both
were developed independently), but ours outperforms Columnsort due to some fundamental
differences in the interpretation of this basic concept. First, our algorithm is based on a
series of merge processes recursively applied, while Columnsort is based on a series of sorting
steps. The only time we use sorting is for N² keys. Columnsort, on the other hand, uses
several recursive calls to itself in order to merge. Second, by observing some fundamental
relationships between the structural properties of product networks and the definition of
sorted order we are able to avoid most of the routing steps required in the Columnsort
algorithm.
Among the main results of this paper, we show that the time complexity of sorting
N^r keys for any N^r-node r-dimensional product graph is bounded above as O(r²N). We
also illustrate special cases of product networks with running times of O(r²), O(N), and
O(log²N) to sort N^r keys.
On the grid and the mesh-connected trees [8, 9] with bounded number of dimensions
the algorithm runs in asymptotically optimal O(N) time. On the r-dimensional hypercube
the algorithm has asymptotic complexity O(r²), which is the same as that of the Batcher
odd-even merge sorting algorithm on the hypercube [2]. Although there are asymptotically
faster sorting algorithms for the hypercube [6], they are not practically useful for a reasonable
number (less than 2^20) of keys [18].
which perform better on hypercubic networks than the Batcher algorithm in practice [5].
Adaptation of such approaches for product networks appears to be an interesting problem
for future research.
For products of de Bruijn networks [9, 28], our approach yields the asymptotic complexity
of O(r² log²N) time to sort N^r keys, which reduces to O(log²N) time when the number of
dimensions is fixed. The same running time can be obtained for products of shuffle-exchange
networks also, because products of shuffle-exchange networks are equivalent in computational
power (i.e., in asymptotic complexity of algorithms) to products of de Bruijn networks [9].
This running time is the same as the asymptotic complexity of sorting N^r keys on the N^r-node
de Bruijn or shuffle-exchange network by the Batcher algorithm.
Finally, we can summarize the main contributions of this paper as follows:
• Effectively implement a sorting algorithm for homogeneous product networks,
• Obtain generalized upper bounds on the running time required for sorting on any
homogeneous product network, regardless of the topology of the factor network used
to build it,
• Show that, for several important instances of homogeneous product networks, the upper
bound derived matches the running time of the most popular algorithms developed
specifically for these networks.
This paper is organized as follows. In Section 2 we present the basic definitions and the
notation used in this paper. In Section 3 we present our multiway-merge algorithm and
show how to use it for sorting. In Section 4 we show how to implement the multiway-merge
sorting algorithm on any homogeneous product network and analyze its time complexity. In
Section 5 we apply the algorithm to several homogeneous product networks and we obtain
the corresponding time complexity. The conclusions of this paper are given in Section 6.
Figure 1: Recursive construction of multi-dimensional product networks: (a) the factor graph G;
(b) two-dimensional product; (c) three-dimensional product.
2 Definitions and Notation
2.1 Definitions and Notation Relating to Product Networks
Let G be an N-node connected graph; we define its r-dimensional homogeneous product as
follows.
Definition 1 Given a graph G with vertex set V_G = {0, 1, ..., N−1} and arbitrary edge set
E_G, the r-dimensional homogeneous product of G, denoted PG_r, is the graph whose vertex
set is V_{PG_r} = {x_r x_{r−1} ··· x_1 : x_i ∈ V_G for i = 1, ..., r} and whose edge set is E_{PG_r},
defined as follows: two vertices x = x_r ··· x_1 and y = y_r ··· y_1 are adjacent in PG_r if and only
if both of the following conditions are true:
1. x and y differ in exactly one symbol position,
2. if i is the differing symbol index, then (x_i, y_i) ∈ E_G.
In this paper we assume that the r-tuple label for a node of PG_r is indexed as 1 ··· r, with
1 referring to the rightmost position index and r referring to the leftmost position index.
At a more intuitive level, the construction of PG_r from PG_{r−1} and G can be
described by referring to Figure 1. Let x be a node of PG_{r−1}, and let [u]PG_{r−1} be the graph
obtained by prefixing every vertex x in PG_{r−1} by u, so that a vertex x becomes ux. First,
place the vertices of PG_{r−1} along a straight line as shown in Figure 1. Then, draw N copies
of PG_{r−1} such that the vertices with identical labels fall in the same column. Next extend
the vertex labels to obtain [u]PG_{r−1}, for u = 0, 1, ..., N−1. Finally, connect the columns
in the interconnection pattern of the factor graph G, such that ux is connected to u′x if and
only if (u, u′) ∈ E_G.
In this construction, we use [u]PG_{r−1} to refer to the uth copy of PG_{r−1}. What is not
explicitly stated here is the fact that [u]PG_{r−1} is the uth copy at dimension r. We can further
extend this notation to allow us to add a new symbol at any position of the vertex labels.
For this purpose, we use [u]PG^i_{r−1} to mean that the vertex labels of PG_{r−1} are extended by
inserting the value u at position i. As a result, the symbol at position j moves to position
j + 1 for every j ≥ i. This allows us to observe that the construction of the
preceding paragraph could be re-stated for [u]PG^i_{r−1} for any position i, not just the leftmost
position.
Conversely, we can obtain the [u]PG^i_{r−1} subgraphs, for u = 0, 1, ..., N−1, by erasing all
the dimension-i edges in PG_r. This process can be repeated recursively, and described by
a simple extension of our notation: we use [u, v]PG^{i,j}_{r−2} to refer to subgraphs isomorphic to
PG_{r−2} obtained by erasing the connections at dimensions i and j from PG_r. A particular
subgraph so obtained can be distinguished by its unique combination of [u, v] values at
index positions i and j, respectively. The notation is similarly extended for erasing an arbitrary
number of dimensions, and the order of the values in square brackets corresponds to the
order of the superscripts.
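As a small illustration of Definition 1 (our own sketch, not from the paper), the edge set of PG_r
can be generated directly from the factor graph's edge set; the final comment notes, as a check,
that the product of the 2-node complete graph with itself three times is the 3-cube.

    from itertools import product

    def product_graph_edges(n, g_edges, r):
        """Edges of the r-dimensional homogeneous product PG_r of an n-node factor
        graph G given by its edge set g_edges (pairs of labels in 0..n-1).  Two r-tuples
        are adjacent iff they differ in exactly one position and the differing symbols
        are adjacent in G."""
        adj = {frozenset(e) for e in g_edges}
        nodes = list(product(range(n), repeat=r))
        edges = []
        for x in nodes:
            for i in range(r):                      # vary one symbol position at a time
                for u in range(n):
                    if u > x[i] and frozenset((x[i], u)) in adj:
                        y = x[:i] + (u,) + x[i + 1:]
                        edges.append((x, y))
        return edges

    # Example: len(product_graph_edges(2, [(0, 1)], 3)) == 12, the edge count of the 3-cube.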
2.2 Definition and Properties of the Sorted Order
For an arbitrary factor graph G, the vertex labels 0, 1, ..., N−1 define the ascending order
of data when sorted. However, we need to define an order for the nodes of PG_r, which will
determine the final location of the sorted keys. The order defined is known as snake order.
Definition 2 The snake order for the r-dimensional product graph PG_r is defined as follows:
1. If r = 1, the snake order corresponds to the order used for labeling the nodes of G.
2. If r > 1, suppose that the snake order has been already defined for PG_{r−1}. Then,
(a) [u]PG_{r−1} has the same order as PG_{r−1} if u is even, and the reverse order if u is odd,
(b) if u < v, any value in [u]PG_{r−1} precedes any value in [v]PG_{r−1}.
The snake order for product graphs is closely related to Gray-code sequences, which have
the fundamental property that any two consecutive terms in the sequence differ in exactly
one bit. Here we are dealing with N-ary symbols instead of binary symbols. Therefore we
need to use N-ary Gray-code sequences.
First recall the definitions of Hamming distance and Hamming weight. Let s, z be r-tuples
from {0, 1, ..., N−1}^r; then the Hamming distance between s and z is D(s, z) = Σ_{i=1}^{r} d(s_i, z_i),
where d(s_i, z_i) is the absolute value of s_i − z_i. The Hamming weight of an r-tuple s is
W(s) = D(s, 00···0) = Σ_{i=1}^{r} s_i. Here we allow one or more of the elements of the r-tuples to be the special
"all" symbol *, which will be defined later. If any of the symbols in the r-tuple is the "all"
symbol * then its index position is omitted whenever the r-tuple is involved in the computation
of Hamming distances and Hamming weights.
We say that a sequence Q_r is an N-ary Gray-code sequence of order r if its elements
are all the r-tuples in {0, 1, ..., N−1}^r and any two consecutive elements in it have unit
Hamming distance. Consequently, the Hamming weights of two consecutive terms will have
different parity. We use R(Q_r) to denote the sequence obtained by listing the elements of
Q_r in reverse order.
The definition below shows one way to construct N-ary Gray-code sequences of arbitrary
order recursively. Let [u]Q k denote the sequence obtained by prefixing each element of Q k
with the symbol u if u is even, or by prefixing each element of R(Q k ) with u, if u is odd.
Definition 3 An N-ary Gray-code sequence of order r, denoted Q_r, can be obtained as follows:
1. Q_1 = {0, 1, ..., N−1},
2. Q_r = {[0]Q_{r−1}, [1]Q_{r−1}, ..., [N−1]Q_{r−1}}, where the curly brackets denote the concatenation
of the sequences inside the curly brackets.
Note that Q_r is in fact the snake order defined on the vertices of the r-dimensional
product network PG_r.
Example: If N = 3, the 3-ary Gray-code sequences of order r, for r = 1, 2, 3, are:
Q_1 = {0, 1, 2},
Q_2 = {00, 01, 02, 12, 11, 10, 20, 21, 22},
Q_3 = {000, 001, 002, 012, 011, 010, 020, 021, 022, 122, 121, 120, 110, 111,
112, 102, 101, 100, 200, 201, 202, 212, 211, 210, 220, 221, 222}.
Note that Q_r could have been defined by inserting the value u at any position in the
above definition, rather than the leftmost position. This observation allows us to use [u]Q^i_{r−1}
to denote the subsequence of Q_r that contains the value u in position i. We are especially
interested in the subsequences [u]Q^1_{r−1}, for u = 0, 1, ..., N−1. For given u, the elements of
this subsequence are in positions u, 2N−1−u, 2N+u, 4N−1−u, and so on, in Q_r.
Given this observation, and the identity relationship between Q_r and the snake order for
the nodes of PG_r, it follows that if PG_r contains a sequence of keys sorted in snake order,
the keys on the subgraph [u]PG^1_{r−1} are also sorted in snake order, and are in the positions
u, 2N−1−u, 2N+u, 4N−1−u, etc., of the whole sequence.
Consider now dividing Q_r into N^{r-1} subsequences of N consecutive elements each. We
observe from Definition 3 that any two elements within the same subsequence would differ in
their rightmost symbols only. Thus, we can distinguish a particular subsequence of elements
with the common symbols they have at positions 2, 3, ..., r. We use [∗]Q^1_{r-1} to denote the
group sequence obtained from Q_r in this fashion, where ∗ stands for all of 0, 1, ..., N-1.
For example, given Q_3 as above, its group sequence is
[∗]Q^1_2 = {00∗, 01∗, 02∗, 12∗, 11∗, 10∗, 20∗, 21∗, 22∗},
where ∗ stands for all of 0, 1, and 2. Thus, more explicitly, we can write
00∗ = {000, 001, 002}, 01∗ = {012, 011, 010}, 02∗ = {020, 021, 022}, and so on.
Here, an element g of [∗]Q^1_{r-1} stands for the block (g_r ... g_2 0, g_r ... g_2 1, ..., g_r ... g_2 (N-1)) if the Hamming weight
of g is even, or for the reversed block (g_r ... g_2 (N-1), ..., g_r ... g_2 0) if the Hamming weight of g is odd.
Moreover, two successive elements of [∗]Q^1_{r-1} still have unit Hamming distance.
Given the relation between Q_r and PG^r, an element g of [∗]Q^1_{r-1} identifies a dimension-1
G-subgraph of PG^r. This is because in such a G-subgraph of PG^r all node labels have the
same values at symbol positions 2, 3, ..., r. We say that a G-subgraph is even (resp. odd)
if the Hamming weight for its corresponding element of [∗]Q^1_{r-1} is even (resp. odd). We
can extend the above notation and write [∗, ∗]Q^{2,1}_{r-2} to identify the set of PG^2-subgraphs at
dimensions {1, 2}. Two consecutive elements of [∗, ∗]Q^{2,1}_{r-2} will again have unit Hamming
distance, and thus the elements of [∗, ∗]Q^{2,1}_{r-2} will be ordered in Gray-code sequence. Again,
an element g of [∗, ∗]Q^{2,1}_{r-2} can have even or odd Hamming weight, and the corresponding
subgraph can be said to be even or odd.
3 Multiway-Merge Sorting Algorithm
This section develops the basic steps of the proposed sorting algorithm without regard to
any specific network. For this discussion, it does not even matter whether the algorithm is
performed sequentially or in parallel. The subsequent sections will give the implementation
details for product networks.
A sorted sequence is defined as a sequence of keys (a_1, a_2, ..., a_m) such that a_i ≤ a_{i+1}
for 1 ≤ i < m. The multiway-merge algorithm combines N sorted sequences A_i =
(a_{i,1}, a_{i,2}, ..., a_{i,m}), 0 ≤ i ≤ N-1, into a single sorted sequence J of length Nm.
Since this will be the case when we implement the algorithm on product networks, we will
assume m to be some power of N, m = N^{k-1}; hence the resulting sorted
sequence, J, will contain N^k keys.
The heart of the proposed sorting algorithm is the multiway-merge operation. Thus, we
will spend much of our time discussing this merging process. In order to build an intuitive
understanding of the basic idea of the merge operation, we assume that the keys to be
sorted are placed on a two-dimensional block, as shown in Figure 2. This is not to imply
a two-dimensional organization of the data in product networks. When implementing the
algorithm in product networks, each row of data (containing N^{k-1} keys)
will be stored on a (k-1)-dimensional subgraph of the product graph. The two-dimensional
organization in Figure 2 is for the reader's convenience in visualizing what happens to the
data at various steps of the algorithm, so that we can use the terms "row" and "column" in
order to refer to groups of keys that are subjected to the same step of algorithm. Our use of
the terms "row" and "column" should not be interpreted to imply the physical organization
of data in a two dimensional array.
Subject to this clarification, we initially assume that each sorted sequence A_i is in a
different row (see Figure 2). We also assume the existence of an algorithm which can sort
N^2 keys. We make no assumption about the efficiency of this algorithm as yet. In Section 5
we discuss several possible ways to obtain efficient algorithms for this purpose. The purpose
of this assumption is to maintain the generality of the discussions, independent of the factor
network used to build the product network.
To show the correctness of the algorithm we will use the zero-one principle due to Knuth
[13]. The zero-one principle states that if an algorithm based on compare-exchange operations
is able to sort any sequence of zeroes and ones, then it sorts any sequence of arbitrary
keys.

Figure 2: Initial situation before the merge process starts. Each sorted sequence is represented
as a horizontal block (a row).
3.1 Multiway-Merge Algorithm
Here we consider how to merge N sorted sequences, A_0, A_1, ..., A_{N-1}, into a
single large sorted sequence. The initial situation is pictured in Figure 2.
The merge operation consists of the following steps:
Step 1: Distribute the keys of each sorted sequence A_i among N sorted subsequences
B_{i,j}, 0 ≤ j ≤ N-1. The subsequence B_{i,j} consists of the keys of A_i found at positions
j, 2N-1-j, 2N+j, 4N-1-j, and so on (counting positions from 0). This is equivalent to
writing the keys of each A_i on an (m/N) × N array in snake order (as shown in
Figure 3) and then reading the keys column-wise so that column j of the array becomes B_{i,j},
0 ≤ j ≤ N-1. Note that each subsequence B_{i,j} is sorted, since the keys in it are in
the same relative order as they appeared in A_i.
Figure 4 illustrates the situation after the completion of this process. Each of the N
rows contains N sorted subsequences B_{i,j}, where each B_{i,j} box in Figure 4 corresponds to a
column of keys in Figure 3 written horizontally.
Example: If N = 3, m = 9, and for some i, A_i = (1, 2, 3, 4, 5, 6, 7, 8, 9), then writing A_i on a
3 × 3 array in snake order gives the rows (1, 2, 3), (6, 5, 4), (7, 8, 9), so that B_{i,0} = (1, 6, 7),
B_{i,1} = (2, 5, 8), and B_{i,2} = (3, 4, 9).
Step 2: Merge the N subsequences B_{i,j} found in column j of Figure 4 into a single sorted
sequence C_j, for 0 ≤ j ≤ N-1. This is done in parallel for all columns by a recursive call
to the multiway-merge process if the total number of keys in the column, m, is at least N^3.
If the number of keys in a column of Figure 4 is N^2, a sorting algorithm for sequences of
length N^2 is used (we already assumed the existence of such an algorithm above), because a
recursive call to the merge process would not make much progress when m is N^2 (this point
will be clarified at the end of this section). At the end of this step, we write the resulting
subsequences vertically in N columns of length m each. The situation after this step is
illustrated in Figure 5.
Figure 3: Distribution of the keys of A_i among the N subsequences B_{i,j}. The thick line
represents the keys of A_i in snake order.

Figure 4: Situation after step 1: each sequence A_i has been distributed into N subsequences
B_{i,j}. Each of the subsequences contains m/N elements and is still sorted.

Figure 5: Situation after merging the subsequences in each column. The keys are sorted
from top to bottom.
Step 3: Interleave the sequences C_j into a single sequence D = (d_1, d_2, ..., d_{Nm}). The
sequence D is formed simply by reading the m × N array of Figure 5 in row-major order
starting from the top row. The sequence D is re-drawn in Figure 6 from the C j sequences of
Figure
5 with no change in the organization of data. Figure 6 is identical to Figure 5, except
that we regard it as one big sequence to be read in row-major order.
We prove below that D is now "almost" sorted. This situation is shown in Figure 6. If
the keys being sorted can only take values of zero or one, the shaded area represents the
position of zeroes and the white area represents the position of ones. As D is obtained by
reading the values in row-major order, the potential dirty area (window of keys not sorted)
has length no larger than N 2 . This fact will be shown in Lemma 1.
Step 4: Clean the dirty area. To do so we start by dividing the sequence D into m/N
subsequences of N^2 consecutive keys each. We denote these subsequences as E_i, for
1 ≤ i ≤ m/N. The ith subsequence has the form E_i = (d_{(i-1)N^2+1}, ..., d_{iN^2}); that is,
the first N rows of keys in Figure 6 (or equivalently in Figure 5) are concatenated to obtain
E_1, the next N rows are concatenated to obtain E_2, and so on (see Figure 7.(a)).
We then independently sort the subsequences (rows in Figure 7.(a)) in alternate orders by
using the algorithm which we assumed available for sorting N^2 keys. E_i is transformed into
a sequence F_i (see Figure 7.(b)), where F_i contains the keys of E_i sorted in non-decreasing
order if i is even or in non-increasing order if i is odd, for 1 ≤ i ≤ m/N.
Now, we apply two steps of odd-even transposition between the sequences F_i, 1 ≤ i ≤ m/N
(i.e., in the vertical direction of Figure 7.(b)). In the first step of odd-even transposition,
each pair of sequences F_i and F_{i+1}, for i even, are compared element by element.
Two sequences G_i and G_{i+1} are formed (not shown in the figure), where G_i takes the
element-wise minima and G_{i+1} the element-wise maxima of F_i and F_{i+1}. In the second
step of the odd-even transposition, G_i and G_{i+1} for i odd are compared in a similar manner
to form the sequences H_i and H_{i+1}. Figure 7.(c) shows the situation after the two steps of
odd-even transposition.

Figure 6: Sequence D obtained after interleaving. The order goes from top to bottom by
reading successive rows from left to right. The shaded area is filled with zeroes and the white
area is filled with ones. The boundary area has at most N rows, as shown in Lemma 1.
Finally, we sort each sequence H_i in non-decreasing order, generating sequences I_i, for
1 ≤ i ≤ m/N (see Figure 7.(d)). The final sorted sequence J is the concatenation of the
sequences I_i.
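To make Steps 1-4 concrete, the following sequential sketch (our own Python transcription of the description above, not the network implementation; all names are ours) merges N sorted lists of equal length m, a power of N. The built-in sorted() stands in for the algorithm assumed available for N^2 keys.

def multiway_merge(blocks, N):
    # Merge N sorted lists of equal length m (a power of N) into one sorted list.
    m = len(blocks[0])
    if m <= N:                                       # column holds at most N^2 keys:
        return sorted(k for b in blocks for k in b)  # use the assumed N^2-key sorter
    # Step 1: write each A_i on an (m/N) x N array in snake order, read it by columns.
    B = [[[] for _ in range(N)] for _ in range(N)]
    for i, A in enumerate(blocks):
        for row in range(m // N):
            chunk = A[row * N:(row + 1) * N]
            if row % 2 == 1:                         # snake order: reverse odd rows
                chunk = chunk[::-1]
            for j in range(N):
                B[i][j].append(chunk[j])
    # Step 2: merge the N subsequences found in each column (recursive call).
    C = [multiway_merge([B[i][j] for i in range(N)], N) for j in range(N)]
    # Step 3: interleave the C_j row by row to obtain D.
    D = [C[j][row] for row in range(m) for j in range(N)]
    # Step 4: clean the dirty area of length at most N^2.
    E = [D[s:s + N * N] for s in range(0, len(D), N * N)]
    F = [sorted(Ei, reverse=(i % 2 == 1))            # alternate orders, i counted from 1
         for i, Ei in enumerate(E, start=1)]
    for start in (1, 0):                             # two steps of odd-even transposition
        for a in range(start, len(F) - 1, 2):
            lo = [min(x, y) for x, y in zip(F[a], F[a + 1])]
            hi = [max(x, y) for x, y in zip(F[a], F[a + 1])]
            F[a], F[a + 1] = lo, hi
    return [k for Fi in F for k in sorted(Fi)]       # final sort inside each block

For instance, multiway_merge([[1, 3, 5, 7], [2, 4, 6, 8]], 2) returns [1, 2, 3, 4, 5, 6, 7, 8].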
We need to show that the process described actually merges the sequences. To do so we
use the zero-one principle mentioned earlier.
Lemma 1 When sorting an input sequence of zeroes and ones, the sequence D obtained
after the completion of step 3 is sorted except for a dirty area which is never larger than N^2.
Proof: Assume that we are merging sequences of zeroes and ones. Let z_i be the number of
zeroes in sequence A_i, for 0 ≤ i ≤ N-1. The rest of the keys in A_i are ones. Step 1 breaks
each sequence A_i into N subsequences B_{i,j}, 0 ≤ j ≤ N-1. It is easy to observe, from the
way step 1 is implemented, that the number of zeroes in a subsequence B_{i,j} is ⌊z_i/N⌋
or ⌊z_i/N⌋ + 1. Therefore, for a given i, the sequences B_{i,j} can differ from each
other in their number of zeroes by at most one.
At the start of step 2, each column j is composed of the subsequences B_{i,j} for
0 ≤ i ≤ N-1. At the end of step 2, all the zeroes are at the beginning of each sequence
C_j. The number of zeroes in each sequence C_j is the sum of the number of zeroes in B_{i,j}
for fixed j and 0 ≤ i ≤ N-1. Thus, two sequences C_j can differ from each other by
at most N zeroes. In step 3 we interleave the N sorted sequences into the sequence D by
taking one key at a time from each sequence C j . Since any two sequences C j can differ in
their number of zeroes by at most N , and since there are N sequences being interleaved,
the length of the window of keys where there is a mixture of ones and zeroes is at most N^2.

Figure 7: Cleaning of the dirty area.
Now we can show how the last step actually cleans the dirty area in the sequence.
Lemma 2 The sequence J obtained (by concatenation of sequences I i in snake order) after
the completion of step 4 is sorted.
Proof: We know that the dirty area of the sequence D, obtained in step 3, has at most
length N 2 . If we divide the sequence D into consecutive subsequences,
the dirty area can either fit in exactly one of these subsequences or be distributed between
two adjacent subsequences.
If the dirty area fits in one subsequence E k , then after the initial sorting and the odd-even
transpositions, the sequences H i contain exactly the same keys as the sequences E i , for
1. Then, the last sorting in each sequence H i and the final concatenation of
the I i sequences yield a sorted sequence J .
However, if the dirty area is distributed between two adjacent subsequences, E_k and E_{k+1},
we have two subsequences containing both zeroes and ones. Figure 7.(a) presents an
example of this initial situation. After the first sorting, the zeroes are located at one side of
F k and at the other side of F k+1 (see Figure 7.(b)).
One of the two odd-even transpositions will not affect this distribution, while the other
is going to move zeroes from the second sequence to the first and ones from the first to the
second. After these two steps, H k is filled with zeroes or H k+1 is filled with ones (see Figure
7.(c)). Therefore, only one sequence contains zeroes and ones combined. The last step of
sorting will sort this sequence. Then the entire sequence J will be sorted (see Figure 7.(d)).
3.2 The Need for a Special Algorithm for N 2 Keys
The reader can observe that, at the end of step 3, the dirty area will still have length N 2 even
when we are merging N sequences of length N each. Thus, we do not make much progress
when we apply the multiway-merge process to this case. This is a fundamental property of
the merge process, and not a weakness of our algorithm. This difficulty can be overcome in a
number of ways to keep the running time low, depending on the application area of the basic
idea of the merge algorithm. For example, if we are interested in building a sorting network,
we can implement subnetworks based on recursively updating N to a smaller value M and
then merge M sequences of length M and repeat this recursion until
a single sequence is obtained.
In this paper our focus is developing sorting algorithms for product networks. Here
we assume the availability of a special sorting algorithm designed for the two-dimensional
version of the product network under consideration. In subsequent sections we discuss several
methods to obtain such algorithms as we consider more specific product networks. The
efficiency of that special algorithm has an important effect on the overall complexity of the
final sorting algorithm by the proposed approach. For all the cases considered here, it will
turn out that the resulting running time is either asymptotically optimal or close to optimal
when the number of dimensions is bounded.
3.3 Sorting Algorithm
Using the above merge algorithm, and an algorithm to sort sequences of length N^2, it is easy to
obtain a sorting algorithm to sort a sequence of length N^r, for r ≥ 2 (a code sketch is given after the steps below).
First divide the sequence into subsequences of length N^2 and sort each subsequence
independently. Then, apply the following process until only one sequence remains:
1. Group all the sorted sequences obtained into sets of N sequences each as in Figure 1.
(If we are sorting N r keys, then initially there will be N r\Gamma3 groups, each containing N
sorted sequences of length N 2 .)
2. Merge the sequences in each group into a single sorted sequence using the algorithm
shown in the previous section. If now there is only one sorted sequence then terminate.
Otherwise go to step 1.
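A sketch of this sorting procedure, reusing multiway_merge() from the sketch above (again our own code, for illustration only):

def product_sort(keys, N):
    # Sort a list of N**r keys (r >= 2) by repeated multiway merging (Section 3.3).
    assert len(keys) % (N * N) == 0
    # Sort blocks of N^2 keys with the assumed two-dimensional sorter.
    blocks = [sorted(keys[s:s + N * N]) for s in range(0, len(keys), N * N)]
    # Repeatedly group N sorted sequences and merge each group.
    while len(blocks) > 1:
        blocks = [multiway_merge(blocks[g:g + N], N)
                  for g in range(0, len(blocks), N)]
    return blocks[0]

import random
data = random.sample(range(1000), 3 ** 4)            # 81 keys, N = 3
assert product_sort(data, 3) == sorted(data)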
4 Implementation on Homogeneous Product Networks
Here we mainly focus on the implementation of the multiway-merge algorithm on a k-dimensional
product network PG k in detail. The sorting algorithm trivially follows from
the merge operation as described above. The initial scenario is N sorted sequences, of
N^{k-1} keys each, stored on the N subgraphs [u]PG^k_{k-1} of PG^k in snake order. Before the
sorting algorithm starts each processor holds one of the keys to be sorted. During the
sorting algorithm, each processor needs enough memory to hold at most two values being
compared. Throughout the discussions, the steps of implementation are illustrated by a
three-dimensional product of some graph G of three nodes. The interconnection pattern of
G is irrelevant for this discussion.
Step 1: This step does not need any computation or routing. Recall from Section 2 that
each of the subgraphs [u, v]PG^{k,1}_{k-2} of [u]PG^k_{k-1} contains a subsequence of keys sorted in snake
order, and that the positions of the keys in that subsequence with respect to the total sorted
sequence are v, 2N-1-v, 2N+v, 4N-1-v, and so on. Therefore, the sequence B_{u,v}
is already stored on the subgraph [u, v]PG^{k,1}_{k-2}, sorted in snake order.

Figure 8: Initial situation on the example 3-dimensional product graph.

Figure 9: Step 2 of the multiway-merge algorithm.
This is illustrated in Figure 8, where the three sequences to be merged are available
in snake order on the three subgraphs formed by removing the edges of dimension 3. The
subgraph [0]PG^3_2 (leftmost subgraph in Figure 8) contains A_0, the subgraph [1]PG^3_2 (center
subgraph) contains A_1, and the subgraph [2]PG^3_2 (rightmost subgraph) contains A_2. In this
example, each B_{u,v} contains only N keys, which fit in just one G-subgraph. In general, B_{u,v}
will be available in snake order on [u, v]PG^{k,1}_{k-2}. In this example, they are at [u, v]PG^{3,1}_1,
which really correspond to G-subgraphs at dimension 2 (i.e., columns of Figure 8).
Step 2: This step is implemented by merging together the sequences on subgraphs [u, v]PG^{k,1}_{k-2}
with the same v value into one sequence on [v]PG^1_{k-1}. If k - 1 = 2, the merging is done
by directly sorting with an algorithm for PG^2. If k - 1 > 2, this step is done by a recursive
call to the multiway-merge algorithm, where each subgraph [v]PG^1_{k-1} merges the sorted
sequences stored on its [u, v]PG^{k,1}_{k-2} subgraphs.

Figure 10: Step 3 of the multiway-merge algorithm.
We illustrate this step in Figure 9. For clarity, we first show the initial situation in
Figure
9.(a). This is same as the situation in Figure 8, but dimensions 1 and 3 are exchanged
to show the subsequences that will be merged together more explicitly. The B u;v sequences
to be merged together are the columns of Figure 9.(a). The result of merging is shown in
Figure 9.(b). Each C_v is sorted in snake order and is found in the subgraph [v]PG^1_{k-1}.

Step 3: This step is directly done by reintroducing the dimension-1 connections of PG^k
and reading the keys in snake order for the PG k graph. No movement of data is involved
in this step. We explicitly show the resulting sequence for our example in Figure 10 by
switching dimensions 1 and 3 in Figure 9.(b). The reader can observe from Figure 10 that
the keys now appear to be close to a fully sorted order (compare Figure 10 with Figure 11.d
which shows the final sorted order). In fact, we know from Lemma 1 that in the case of
sorting zeroes and ones, we are left with a small dirty area. This implies that every key is
within a distance of N 2 from its final position.
Step 4: This last step cleans the potential dirty area. Recall that the 2-dimensional
subgraphs [∗, ∗]PG^{2,1}_{k-2} of PG^k can be identified by the group sequences [∗, ∗]Q^{2,1}_{k-2},
that an element g of [∗, ∗]Q^{2,1}_{k-2} identifies a unique PG^2 subgraph at dimensions {1, 2}, and
that these PG^2 subgraphs are ordered by the corresponding group sequence, which defines
the snake order between subgraphs. In this step we independently sort the keys in each of the
PG^2 subgraphs at dimensions {1, 2}, where the sorted order alternates for "consecutive"
subgraphs. Each subgraph is sorted in snake order by using an algorithm which we assumed
available for two dimensions. The result of this step is illustrated in Figure 11.(a).
We now perform two steps of odd-even transposition between the subgraphs. In the first
step, the keys on the nodes of the "odd" PG 2 subgraphs are compared with the keys on the
corresponding nodes of their "predecessor" subgraphs. The keys are exchanged if the key in
the predecessor subgraph is larger. Figure 11.(b) shows the result of this first step in our
example. The keys 3 and 2 in nodes (1; 2; 1) and (1; 2; 2) have been exchanged with two keys
both with value 4 in nodes (0; 2; 1) and (0; 2; 2).
In the second step of odd-even transposition, the keys on the nodes of the "even" PG 2
subgraphs are compared (and possibly exchanged ) with those of their predecessor subgraphs.
Figure
11.(c) shows the result of this second step. In this figure, the key 5 in node (2,0,0)
has been exchanged with the key 6 in node (1,0,0).
Figure 11: Step 4 of the multiway-merge algorithm.
Finally, a sorting within each of the 2-dimensional subgraphs ends the merge process
(see Figure 11.(d)).
One point which needs to be examined in more detail here is that, depending on the
graph G, the nodes holding the two keys that need to be compared and possibly
exchanged with each other may or may not be adjacent in PG k . If G has a Hamiltonian
path, then the nodes of G can be labeled in the order they appear on the Hamiltonian path
to define the sorted order for G. Then, the two steps of odd-even transposition are easy to
implement since they involve communication between adjacent nodes in PG k .
If, however, G is not Hamiltonian (e.g. a complete binary tree), the two nodes whose keys
need to be compared may not be adjacent, but they will always be in a common G subgraph.
In this case permutation routing within G may be used to perform the compare-exchange
step as follows: First, two nodes that need to compare their keys send their keys to each
other. Then, depending on the result of comparison, each node can either keep its original
key if the keys were already in correct order, or drop the original key and keep the new
key if the keys were out of order. To cover the most general case in the computation of running
time below, we will assume that G is not Hamiltonian, and thus we will implement these
compare-exchange steps by using permutation routing algorithms. We will see that whether
or not G is Hamiltonian only affects the constant terms in the running time complexity
function.
4.1 Analysis of Time Complexity
To analyze the time taken by the sorting algorithm we will initially study the time taken by
the merge process on a k-dimensional network. This time will be denoted as M_k(N). Also
let S_2(N) denote the time required for sorting on PG^2 and R(N) denote the time required
for a permutation routing on G.
Lemma 3 Merging N sorted sequences of N^{k-1} keys on PG^k takes M_k(N) = (2k-3)S_2(N) + 2(k-2)R(N)
steps.
Proof: Step 1 does not take any computation time. Step 2 is a recursive call to the merge
procedure for k - 1 dimensions, and hence will take M_{k-1}(N) time. Step 3 does not take any
computation time. Finally, step 4 takes the time of one sorting on PG^2, two permutation
routings on G (for the steps of odd-even transposition), and one more sorting on PG^2.
Therefore, the value of M_k(N) can be recursively expressed as M_k(N) = M_{k-1}(N) + 2S_2(N) + 2R(N),
with initial condition M_2(N) = S_2(N),
which yields M_k(N) = (2k-3)S_2(N) + 2(k-2)R(N).
We can now derive the value of S_r(N).
Theorem 1 For any factor graph G, the time complexity of sorting N^r keys on PG^r is
S_r(N) = (r-1)^2 S_2(N) + (r-1)(r-2) R(N) = O(r^2 S_2(N)).
Proof: By the algorithm of Section 3.3, the time taken to sort N^r keys on PG^r is the time
taken to sort in a 2-dimensional subgraph and then merge blocks of N sorted sequences into
an increasing number of dimensions. The expression of this time is as follows:
S_r(N) = S_2(N) + Σ_{k=3}^{r} M_k(N) = (r-1)^2 S_2(N) + (r-1)(r-2) R(N).
Since S_2(N) is never smaller than R(N), the time obtained is S_r(N) = O(r^2 S_2(N)).
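For completeness, the arithmetic behind the closed form, assuming the recurrence of Lemma 3 (our own expansion, written in LaTeX):

\begin{align*}
M_k(N) &= M_{k-1}(N) + 2S_2(N) + 2R(N), \qquad M_2(N) = S_2(N),\\
M_k(N) &= (2k-3)\,S_2(N) + 2(k-2)\,R(N),\\
S_r(N) &= S_2(N) + \sum_{k=3}^{r} M_k(N)
        = \Bigl(1+\sum_{k=3}^{r}(2k-3)\Bigr)S_2(N) + \Bigl(\sum_{k=3}^{r}2(k-2)\Bigr)R(N)\\
       &= (r-1)^2\,S_2(N) + (r-1)(r-2)\,R(N) \;=\; O\bigl(r^2 S_2(N)\bigr).
\end{align*}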
The following corollary presents the asymptotic complexity of the algorithm and one of
the main results of this paper.
Corollary 1 If G is a connected graph, the time complexity of sorting N^r keys on PG^r is
at most O(r^2 N).
Proof: To prove the claim, we first compute the complexity of sorting by our algorithm
on the r-dimensional torus. Then we refer to a result in [8] that showed that if G is a
connected graph, PG r can emulate any computation on the N r -node r-dimensional torus by
embedding the torus into PG r with dilation 3 and congestion 2. Since this embedding has
constant dilation and congestion, the emulation has constant slowdown [14]. (In fact, the
slowdown is no more than 6, and needed only when G does not have a Hamiltonian cycle).
Finally, we use these slowdown values to compute the exact running time for PG r
Now we compute the complexity of sorting on the r-dimensional torus. We basically
need a sorting algorithm from the literature that sorts N^2 keys on the two-dimensional torus in
snake order. We also need an algorithm for permutation routing on the N-node cycle. For
example, we can use the sorting algorithm proposed by Kunde [15], whose running time S_2(N)
is linear in N. It is also known that any permutation routing can be done on the N-node
cycle in no more than N/2 steps. Hence, we can sort on the N^r-node r-dimensional torus in
at most 3(r - 1)^2 N + o(r^2 N) steps.
Since the emulation of this algorithm by PG^r requires a slowdown factor of at most 6, any
arbitrary N^r-node r-dimensional product network can sort with complexity O(r^2 N).
5 Application to Specific Networks
In this section we obtain the time complexity of sorting using the multiway-merge sorting
algorithm presented for several product networks in the literature. To do so, we obtain upper
bounds for the values of S 2 (N) and R(N) for each network. Using these values in Theorem
will yield the desired running time.
Grid: Schnorr and Shamir [29] have shown that it is possible to sort N 2 keys on a N 2 -node
2-dimensional grid in 3N+o(N) time steps. It is also trivial to show that the time to perform
a permutation on the N-node linear array is at most R(N) = N. These values of S_2(N)
and R(N) imply that our algorithm will take at most 4(r - 1)^2 N + o(r^2 N)
steps to sort N^r keys on an N^r-node r-dimensional grid. If the number of dimensions, r, is
bounded, this expression simplifies to O(N ).
This algorithm is asymptotically optimal when r is fixed since the diameter of the grid
with bounded number of dimensions is O(N ), and a value may need to travel as far as the
diameter of the network. If r is not bounded, then the diameter of the N^r-node grid is
O(rN), which means that the running time of our algorithm is off the optimal value by at
most a factor of r.
Mesh-connected trees (MCT): This network was introduced in [9] and extensively
studied in [8]. It is obtained as the product of complete binary trees. Due to Corollary 1 we
can sort on the N r -node r-dimensional mesh-connected trees in O(r 2 N) time steps. If r is
bounded, we again have O(N) as the running time.
This running time is asymptotically optimal when r is fixed, because the bisection width
of the N r -node r-dimensional MCT is O(N r\Gamma1 ), as shown in [8], and in the worst case we
may need to move Ω(N^r) values across the bisection of the network. When r is not fixed,
the algorithm is off the bisection-based lower bound by a factor of r 2 . The diameter-based
lower bound used above for grids does not help to tighten this lower bound any further,
because the diameter of the MCT is logarithmic in the number of nodes [8]. It appears
interesting to investigate if it is possible to sort with lower running time than O(r 2 N) when
r is not bounded. If such an algorithm exists, it must use a completely different approach
than ours, because the value of S 2 (N) in Theorem 1 cannot be less than O(N) due to the
O(N) bisection width of the two-dimensional MCT network.
Hypercube: The hypercube has fixed N = 2. It is not hard to sort in snake order on
the two-dimensional hypercube in 3 steps. A permutation routing on the one-dimensional
hypercube takes only one step. Therefore, the time to sort on the hypercube with our
algorithm is O(r^2). This running time is the same as the running
time of the well-known Batcher odd-even merge algorithm for hypercubes. In fact, Batcher's
algorithm is a special case of our algorithm.
Petersen Cube: The Petersen cube is the r-dimensional product of the Petersen graph,
shown in Figure 12. The Petersen graph contains 10 nodes and consists of an outer 5-cycle
and an inner 5-cycle connected by five spokes. Product graphs obtained from the Petersen
graph are studied in [25]. Like the hypercube, the product of Petersen graphs has fixed
N = 10, and therefore the only way the graph grows is by increasing the number of dimensions.
Since the Petersen graph has a Hamiltonian path, its two-dimensional product contains the
two-dimensional grid as a subgraph. Thus, we can use any grid algorithm for sorting 100
keys on the two-dimensional product of Petersen graphs in constant time. Consequently,
the r-dimensional product of Petersen graphs can sort in O(r^2) time. The constant
involved is not small, but it is not going to be unreasonably large either. It may very well
Figure
12: Petersen graph.
be possible to improve this constant by developing a special sorting algorithm for the two-dimensional
product of Petersen graphs. This is, however, outside the scope of this paper.
Product of de Bruijn and shuffle-exchange networks: To sort on their two-dimensional
instances we can use the embeddings of their factor networks presented in [9] which have
small constant dilation and congestion. In particular, a N 2 -node shuffle-exchange network
can be embedded into the N 2 -node 2-dimensional product of shuffle-exchange networks with
dilation 4 and congestion 2. Also a N 2 -node de Bruijn network can be embedded into the
2-dimensional product of de Bruijn networks with dilation 2 and congestion 2. Sorting
on the N^2-node shuffle-exchange or de Bruijn networks can be done in O(log^2 N)
time by using Batcher's algorithm [30]. Thus, we can sort on the N^2-node 2-dimensional product
of shuffle-exchange or de Bruijn networks by emulation of the N^2-node shuffle-exchange
or de Bruijn network in S_2(N) = O(log^2 N) steps. Using this in Theorem
1, our algorithm will take O(r 2 log 2 N) time steps to sort N r keys. Again, if r is bounded the
expression simplifies to O(log 2 N ). If r is not bounded, the running time of our algorithm is
asymptotically the same as the running time of sorting N r keys on the N r -node de Bruijn
or shuffle-exchange graphs by Batcher algorithm. Here again, we come across an interesting
open problem, to see if it is possible to sort on products of these networks in asymptotically
less time for unbounded number of dimensions.
6 Conclusions
In this paper we have presented a unified approach to sorting on homogeneous product
networks. To do so, we present an algorithm based on a generalization of the odd-even
merge sorting algorithm [2]. We obtain O(r 2 N) as an upper bound on the complexity of
sorting on any product network of r dimensions and N r nodes.
The time taken by the sorting algorithm on the grid and the mesh-connected trees with
bounded number of dimensions is O(N ), which is optimal. On the hypercube the algorithm
takes O(r^2) steps, reaching the asymptotic complexity of the odd-even merge sorting
algorithm on the hypercube.
On other product networks our algorithm has the same running time as those of other
comparable networks. For instance, on the product of de Bruijn or shuffle-exchange graphs
the running time is O(r 2 log 2 N ). This is asymptotically the same as the running time of
Batcher algorithm on the N r -node shuffle-exchange or de Bruijn graphs.
From a theoretical point of view, it will be interesting to investigate if there are better
algorithms for product networks when r is not bounded. Several interesting alternatives
appear to be feasible, although we have not had the time to investigate them. For instance,
we could try to generalize the hypercube randomized algorithms for product networks.
--R
"Network Complexity of Sorting and Graph Problems and Simulating CRCW PRAMS by Interconnection Networks,"
"Sorting Networks and their Applications,"
"On Bitonic Sorting Networks,"
"A Unified Framework for Off-Line Permutation Routing in Parallel Networks,"
"A Comparison of Sorting Algorithms for the Connection Machine CM-2,"
"Deterministic Sorting in Nearly Logarithmic Time on the Hypercube and Related Computers,"
"Improved Divide/Sort/Merge Sorting Network,"
"Computational Properties of Mesh Connected Trees: Versatile Architecture for Parallel Computation,"
"Products of Networks with Logarithmic Diameter and Fixed Degree,"
"A General Framework for Developing Adaptive Fault-Tolerant Routing Algorithms,"
Homogeneous Product Networks for Processor Interconnection.
"Some Improvements in Non-Adaptative Sorting Algorithms,"
Searching and Sorting
"Work-Preserving Emulations of Fixed-Connection Networks,"
"Optimal Sorting on Multi-Dimensionally Mesh-Connected Computers,"
"A Multiway Merge Sorting Network,"
"On Sorting Multiple Bitonic Sequences,"
Introduction to Parallel Algorithms and Architectures: Arrays
"Tight Bounds on the Complexity of Parallel Sorting,"
"A Modulo Merge Sorting Network,"
"A Generalized Bitonic Sorting Network,"
"Bitonic Sort on a Mesh-Connected Parallel Computer,"
"K-Way Bitonic Sort,"
"Efficient VLSI Networks for Parallel Processing Based on Orthogonal Trees,"
"The Folded Petersen Network: A New Communication- Efficient Multiprocessor Topology,"
"Constructing Sorting Networks from k-Sorters,"
"The Cube-Connected Cycles: A Versatile Network for Parallel Computation,"
"Product-Shuffle Networks: Toward Reconciling Shuffles and Butter- flies,"
"An Optimal Sorting Algorithm for Mesh Connected Com- puters,"
"Parallel Processing with the Perfect Shuffle,"
"Sorting on a Mesh-Connected Parallel Computer,"
"A Parallel Sorting Scheme whose Basic Operation Sorts n Elements,"
"An Economical Construction for Sorting Networks,"
--TR
--CTR
Yuh-Shyan Chen , Chih-Yung Chang , Tsung-Hung Lin , Chun-Bo Kuo, A generalized fault-tolerant sorting algorithm on a product network, Journal of Systems Architecture: the EUROMICRO Journal, v.51 n.3, p.185-205, March 2005
Shan-Chyun Ku , Biing-Feng Wang , Ting-Kai Hung, Constructing Edge-Disjoint Spanning Trees in Product Networks, IEEE Transactions on Parallel and Distributed Systems, v.14 n.3, p.213-221, March | product networks;odd-even merge;algorithms;interconnection networks;sorting |
629430 | Scheduling Data-Flow Graphs via Retiming and Unfolding. | AbstractLoop scheduling is an important problem in parallel processing. The retiming technique reorganizes an iteration; the unfolding technique schedules several iterations together. We combine these two techniques to obtain a static schedule with a reduced average computation time per iteration. We first prove that the order of retiming and unfolding is immaterial for scheduling a data-flow graph (DFG). From this nice property, we present a polynomial-time algorithm on the original DFG, before unfolding, to find the minimum-rate static schedule for a given unfolding factor. For the case of a unit-time DFG, efficient checking and retiming algorithms are presented. | INTRODUCTION
HOW to efficiently and optimally schedule iterative or
recursive algorithms is an important problem in VLSI
high level synthesis and compilers for parallel machines
like VLIW or Data-Flow machines. For example, given a
signal flow graph of any filter (it may have many cycles),
we would like to know how to obtain a schedule such that a
resultant synthesized hardware can achieve the highest
pipeline rate. We combine two techniques, retiming and
unfolding, to maximize the execution rate of static sched-
ules. This combination technique turns out to be very simple
and efficient, and has great potential to be generalized
to other applications. In this paper, we study some fundamental
theorems of this combination, and provide efficient
algorithms.
The input algorithm is described as a data-flow graph
(DFG), which is widely used in many fields; for example, in
circuitry [1], in program descriptions [2], [3], [4], etc. In a
DFG, nodes represent operations and edges represent
precedence relationships. The graph G in Fig. 2a is an example
of DFG, where the number attaches to a node is its
computation time. A DFG is called a unit-time DFG if the
computation time of every node is one unit. A certain delay
count is associated with each edge to represent inter-iteration
precedences.
Although our results are quite general to many applications
which use the model of DFGs, in this paper, we specifically
consider the problem of multiprocessor scheduling
for a recursive or iterative algorithm, as being studied in
[5], [4], which is particularly useful in DSP applications
(DFGs are usually called signal-flow graphs in DSP). Fig. 1
shows an innermost body of a loop program, and the DFG
in Fig. 2a is its corresponding DFG. A schedule 1 for G is
shown in Fig. 2b, which takes three time units (equals the
cycle period of G) to complete an iteration. Since we can use
this schedule repeatedly for each iteration, we call such a
schedule a static schedule.
Fig. 1. A loop program.
The retiming technique has been effectively used to improve
static schedules by rearranging the delays [1], [6].
The retimed DFG, denoted by G r , in Fig. 3a corresponds to
a faster static schedule in Fig. 3b. Static schedules can be
further improved by using the common unfolding tech-
nique. The unfolding technique is studied in [4], [6], and
generalized to handle switches in [7]. The original DFG is
unfolded f times, so the unfolded graph, denoted by G f ,
consists of f copies of the node set and the edge set. Each
instance of a static schedule contains f iterations, and its
computation time is called the cycle period. Therefore, the
iteration period, which is the average computation time per
iteration (cycle period/f) can be reduced. This f is called an
unfolding factor.
A static schedule can be obtained from an unfolded
graph, and executed repeatedly for every f iterations. The
amount of memory needed to store a static schedule is proportional
to the unfolding factor. For general DFGs, Parhi
and Messerschmitt [4] show that, if the unfolding factor is
the least common multiple of all the loop delay counts in
DFG, a rate-optimal schedule can be achieved. An iterative
algorithm is designed in [6] to find the minimum unfolding
factor to achieve a given iteration period.
1. Under the assumption that there are enough resources available, each
node is scheduled as early as possible.
. L.-F. Chao is with the Dept. of Electrical and Computer Engineering, Iowa
State University, Ames, IA 50011. E-mail: [email protected].
. E. H.-M. Sha is with the Department of Computer Science and Engineer-
ing, University of Notre Dame, Notre Dame, IN 46556.
E-mail: [email protected].
Like most previous works [4], [6], we assume that the
scheduler can only operate on integral grids, i.e., each operation
starts at an integral instant of time. The rate-optimal
scheduling and unfolding factors for a fractional-time
scheduler, which can start an operation at any fractional
time instant, are discussed in [8]. The size of program code
or control unit is proportional to the unfolding factor. A
synthesis system should provide many alternatives, such as
the pairs of unfolding factors and their corresponding
minimum iteration periods under retimings.
The designer can choose the most suitable pair among
them. For a given maximum unfolding factor F from the
design requirement, we present efficient algorithms to find
these pairs on the original DFG. One obvious way is to unfold
the original DFG first, and then do retiming to find the
minimum iteration period. Instead, we show that we can
perform retiming directly on the original DFG, and obtain
the same minimum iteration period without working on a
large unfolded DFG. This nice and counter-intuitive property
is shown in Section 3 by proving that the order of
retiming and unfolding does not matter for obtaining the
minimum iteration period.
The results for unit-time DFGs are obtained in Section 4.
A simple inequality is derived as a necessary and sufficient
condition for the existence of a retiming to produce a
schedule with unfolding factor f and cycle period c. The
minimum iteration period for a given unfolding factor can
be evaluated from this inequality. And, there is an efficient
algorithm which runs in time O(|V| -
|E|) for finding such
a retiming, where V is the node set and E is the edge set.
As for a general-time DFG, the necessary and sufficient
condition can not be characterized in a simple formula as that
for unit-time DFGs. For getting the pair of an unfolding factor f
and its corresponding minimum iteration period, we present
our retiming algorithm in Section 5 which runs in time
and the preprocessing algo-
rithm, which runs in time O(f
When we want to obtain all the pairs of unfolding factors
which are less than F, and the corresponding minimum iteration
periods, the nice thing for the preprocessing algorithm is
that it only needs to be performed once for the maximum unfolding
factor F, instead of F times. Note that all the algorithms
in the paper are easily implemented, since they are mainly
variations of general shortest path algorithms.
We first describe definitions and properties of retiming
and unfolding. Then, we prove the order of retiming and
unfolding is immaterial in Section 3. The algorithms and
results for unit-time DFG are presented in Section 4 and, for
general-time DFG, in Section 5. Finally, we make some concluding
remarks in the last section. Since detailed proofs are
sketched or omitted due to space limitations, interested
readers are referred to [9], [10].
DEFINITION 1. A data-flow graph (DFG) G = (V, E, d, t) is a
node-weighted and edge-weighted directed graph, where V
is the set of nodes, E ⊆ V × V is the set of edges, d is a
function from E to the nonnegative integers, and t is a
function from V to the positive integers.
Fig. 2. The corresponding data-flow graph G: (a) a DFG G, (b) a static schedule.
Fig. 3. A retimed DFG: (a) a retimed DFG G_r, (b) a static schedule.
Interiteration data dependencies are represented by
weighted edges. An edge e from u to v with delay count d(e)
means that the computation of node v at iteration j depends
on the computation of node u at iteration j − d(e). The set of
edges without delay composes a directed acyclic graph, which
represents data dependencies within the same iteration. 2
We define one iteration to be an execution of each node in
exactly once. The computation time of the longest path
without delay is called the cycle period. For example, the cycle
period of the DFG in Fig. 2a is three from the longest path,
which is from node B to C.
For the sake of convenience, we use the following notation.
The notation u -e-> v means that e is an edge from node u to
node v. The notation u =p=> v means that p is a path from node u
to v. The delay count of a path p = v_0 -e_0-> v_1 -e_1-> ... -e_{k-1}-> v_k
is d(p) = Σ_{i=0}^{k-1} d(e_i), and the total computation time of the path p is
t(p) = Σ_{i=0}^{k} t(v_i).
2.1 The Retiming Technique and Retimed Graphs
The retiming technique [1] moves delays around in the following
way: A delay is drawn from each of the incoming
edges of v, and then, a delay is pushed to each of the out-going
edges of v, or vice versa. A retiming r of a DFG G is a
function from V to the integers. The value r(v) is the number
of delays drawn from each of the incoming edges of node
v and pushed to each of the outgoing edges. 3 (See Fig. 3.)
Let G_r = (V, E, d_r, t) be the DFG retimed by a retiming r
from G. For any edge u -e-> v, we have d_r(e) = d(e) + r(u) − r(v);
a similar property applies for any path. A retiming is legal if
the retimed delay count d r is nonnegative for every edge in E.
A legal retiming preserves the data dependencies of the
original DFG, although a prologue is needed to set up the
initial assignments.
Compare Fig. 2 and Fig. 3. The technique of retiming regroups
operations in a loop into new iterations, in which
each operation is executed once. The operation v in the
original iteration i is shifted to the new iteration i − r(v). In
general, if r(v) > 0, r(v) instances of node v appear in the
prologue; if r(v) < 0, −r(v) instances appear in the epilogue.
The edges without delay in G r give the precedence relations
of the new loop body. Although the prologue and epilogue
are introduced by retiming, the size of prologue and epilogue
can be controlled by adding simple constraints to the
proposed retiming algorithms [9].
Under the definition of retiming [1], there is no distinction
between recursive (cyclic) and nonrecursive (acyclic)
parts in the DFG. If the input/output behavior needs to be
preserved, a host can be introduced in the DFG so that there is
an edge from the host to every input node and an edge from
every output node to the host. After this transformation, our
2. For the graph to be a meaningful data-flow graph, the delay count of
any loop should be nonzero.
3. Note that r(v) is positive if delays are pushed along the direction of
edges.
algorithms can be applied to the DFG without special considerations
for the nonrecursive part.
2.2 The Unfolding Technique and Unfolded Graphs
Let f be a positive integer. An unfolded graph with unfolding
factor f, denoted G f , consists of f copies of node set V
and represents the same precedence relations in G by delay
counts on edges. We say the unfolded DFG G_f = (V_f, E_f, d_f, t_f)
is a DFG obtained by unfolding G f times. Set V_f is the union
of V_0, V_1, ..., V_{f-1}, where V_i is the ith copy of V. Let u_i be the node u in V_i, and let the computation
time t_f(u_i) = t(u). For example, the DFG in Fig. 4a is the
unfolded graph with factor two for the DFG G r in Fig. 3a.
The unfolded graph gives a more global view to the data
dependencies with a manageable graph size.
We use the subscript f to represent the correspondences
between G and G_f. For a node v (resp. edge e) in G, v_f (resp. e_f)
represents any copy of v (resp. e) in G_f. For a path p_f in G_f,
there is a unique path p in G corresponding to p_f. Let Z be the
set of integers and [0, f) the set of integers 0, 1, 2, ..., f − 1. For
a, b ∈ Z, the notation a ≡_f b means that there exists n ∈ Z such
that a = b + n · f. The operation a % f produces the congruent
integer within [0, f).
One cycle in G f consists of all computation nodes in V f .
The period during which all computations in a cycle are
executed is called the cycle period. The cycle period &(G_f) of G_f is
&(G_f) = max{t_f(p_f) | d_f(p_f) = 0} over all paths p_f in G_f. During
a cycle period of G f , f iterations of G are executed. Thus, the
iteration period of G f is equal to &(G f ) /f, in other words, the
average computation time for each iteration in G. For the
original DFG G, the iteration period is equal to &(G). An
algorithm can find &(G) for a DFG in time O(|E|) [1].
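As an illustration of this definition, the following sketch (our own Python code; the example graph is made up and is not the DFG of Fig. 2a) computes &(G) as the longest zero-delay path of a DFG given as adjacency data.

from collections import defaultdict

def cycle_period(nodes, edges, t, d):
    # Cycle period of a DFG: the largest t(p) over paths p of zero-delay edges.
    # The zero-delay edges form a DAG, so one pass in topological order suffices.
    zero = [(u, v) for (u, v) in edges if d[(u, v)] == 0]
    succ, indeg = defaultdict(list), {v: 0 for v in nodes}
    for u, v in zero:
        succ[u].append(v)
        indeg[v] += 1
    longest = {v: t[v] for v in nodes}       # longest zero-delay path ending at v
    queue = [v for v in nodes if indeg[v] == 0]
    while queue:
        u = queue.pop()
        for v in succ[u]:
            longest[v] = max(longest[v], longest[u] + t[v])
            indeg[v] -= 1
            if indeg[v] == 0:
                queue.append(v)
    return max(longest.values())

nodes = ["A", "B", "C"]
t = {"A": 1, "B": 1, "C": 2}
d = {("A", "B"): 1, ("B", "C"): 0, ("C", "A"): 0}
print(cycle_period(nodes, list(d), t, d))    # longest zero-delay path B->C->A: 4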
Some properties of the unfolded graph have been studied
in [4]. A procedure has been proposed to generate the
unfolded graph. However, the relationship between d(e)
and d f (e F ) is not clear. We characterize some properties of
the unfolded graph in the following, which will be used in
the proofs of this paper. Though Property 1d has been
Fig. 4. An unfolded retimed DFG G_{r,f} and a globally-static schedule: (a) G_{r,f} with f = 2; (b) a globally-static schedule.
pointed out in [4], we restate it with our notation. From
these properties, a simpler procedure of constructing the
unfolded graph is shown in Fig. 5.
PROPERTY 1. Let u and v be nodes in G and u -e-> v an edge of G.
a) For any 0 ≤ i < f and 0 ≤ j < f, there is an edge e_f from u_i to v_j in G_f if and only if j ≡_f i + d(e).
b) The f copies of edge e in G_f are the set of edges from u_i to v_{(i+d(e)) % f}, for 0 ≤ i < f,
where the copy starting at u_i carries ⌊(i + d(e))/f⌋ delays.
c) The total number of delays of the f copies of edge e is d(e),
i.e., Σ_{i=0}^{f-1} d_f(u_i → v_{(i+d(e)) % f}) = d(e).
These properties can be easily extended to paths by substituting
p for e, and each path in G has exactly f copies in G_f.
From the definition of &(G), Leiserson and Saxe [1] derived
the following characterization for cycle period.
LEMMA 2.1 [1]. Let G be a DFG and c a cycle period. &(G) ≤ c if
and only if, for every path p in G, t(p) > c implies d(p) ≥ 1.
We prove a similar property for the unfolded graphs in
the following lemma. The value of &(G f ) is obtained from
the original DFG G, and we show that the cycle period in
the unfolded graph G is the maximum total computation
time among all paths where the total delay count is less
than f in the original graph G.
LEMMA 2.2. Let G be a DFG, c a cycle period, and f an unfolding
factor.
a) &(G_f) is equal to max{t(p) | d(p) < f} over all paths p in G.
b) &(G_f) ≤ c if and only if, for every path p in G, t(p) > c
implies d(p) ≥ f.
2.3 The Combination
For a DFG G, let G r,f be the DFG obtained by unfolding G r with
factor f, and G f r be the DFG obtained by retiming G f with
function r f , which is a retiming from V f to Z. We define the
minimum cycle period, denoted by MCP(f), under an unfolding
factor f as min ( )
G
& , which is the minimum cycle period
achieved by unfolding G with factor f. In the next section,
we show that MCP f G G
f
the approach of retiming first, then unfolding, can achieve the
same minimum cycle period, we say that the order of retiming
and unfolding is immaterial. With this approach, one does not
need to compute retiming functions on a large unfolded
graph. Hence, it is computationally more efficient.
Although the retimed graph in Fig. 3 achieves the minimum
cycle period with unfolding Factor 1, MCP(1), the
graph G_{r,f} with f = 2 has a cycle period 4, which is not
MCP(2). In this paper, retiming algorithms are designed to
find a retiming achieving MCP(f) without unfolding a DFG
first. For the above example, our algorithm will find another
retiming r " shown in Fig. 6, which is optimal with both unfolding
Factors 1 and 2. The cycle period of G r equals MCP(1).
From Lemma 2.2, we know the cycle period of G r, f with
is three, which is the minimum.
In this section, the relationship between G f r and G r, f for a
fixed unfolding factor is explored. Intuitively, it seems that retiming
the unfolded graph G_f provides finer retimings on the unfolded node set
and gives better flexibility on retiming. However, we show
that the order of retiming and unfolding is not essential. More
precisely, we prove MCP(f) = min_r &(G_{r,f}).
Therefore, in the next two sections, we focus on finding a
retiming r to achieve MCP(f) for a given unfolding factor f on the original graph G.
for every edge u -e-> v in E do
for i := 0 to f − 1 do begin
Add edge e_f from u_i to v_{(i+d(e)) % f} to E_f;
d_f(e_f) := ⌊(i + d(e))/f⌋;
end;
Fig. 5. The procedure for constructing E_f and d_f.
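The same construction in Python (a sketch with our own names; it assumes at most one edge per ordered node pair, so that delays can be stored in a dictionary):

def unfold(nodes, edges, d, t, f):
    # Unfold a DFG f times following the construction of Fig. 5 / Property 1.
    # Nodes of G_f are pairs (u, i) for 0 <= i < f.
    nodes_f = [(u, i) for u in nodes for i in range(f)]
    t_f = {(u, i): t[u] for (u, i) in nodes_f}
    edges_f, d_f = [], {}
    for (u, v) in edges:
        w = d[(u, v)]
        for i in range(f):
            e_f = ((u, i), (v, (i + w) % f))
            edges_f.append(e_f)
            d_f[e_f] = (i + w) // f        # the f copies of e carry d(e) delays in total
    return nodes_f, edges_f, d_f, t_f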
Fig. 6. A DFG retimed by another retiming r": (a) DFG G_r retimed by r", (b) iteration period = 2.
The following lemma says that any cycle period which is
obtained by retiming the unfolded graph G f can be
achieved by retiming on the original graph G directly, while
Lemma 3.3 proves the converse.
LEMMA 3.1. Let G be a DFG, c a cycle period and f an unfolding
factor. For any legal retiming r f on the unfolded graph G f ,
such that &(G_{f,r_f}) ≤ c, there exists a legal retiming r on the
original graph G such that &(G_{r,f}) ≤ c.
PROOF. Assume that r_f is a legal retiming from V_f to Z such
that &(G_{f,r_f}) ≤ c. Let r be a retiming from V to Z, chosen as
r(u) = Σ_{i=0}^{f-1} r_f(u_i) for every u ∈ V, where u_i
is the ith copy of node u in V_f. We show that r is a legal
retiming and &(G_{r,f}) ≤ c.
Since r_f is a legal retiming, it is easy to show that r
is also a legal retiming. Then, we prove that &(G_{r,f}) ≤ c.
Since &(G_{f,r_f}) ≤ c, from Lemma 2.1, we know that for
every path p_f from u_i to v_j in G_f with t_f(p_f) > c,
we have r_f(v_j) − r_f(u_i) ≤ d_f(p_f) − 1. Let p: u =p=> v be a
path in G such that t(p) > c, and let p_f^(i) be the ith copy of p in G_f,
which goes from u_i to v_{(i+d(p)) % f}. Since t_f(p_f^(i)) = t(p) > c for the ith copy of
p, we know r_f(v_{(i+d(p)) % f}) − r_f(u_i) ≤ d_f(p_f^(i)) − 1
for every 0 ≤ i < f. By summing up these f inequalities,
we have r(v) − r(u) ≤ d(p) − f. There-
fore, we know that for every path p in G, if t(p) > c, then r(v) −
r(u) ≤ d(p) − f, i.e., d_r(p) = d(p) + r(u) − r(v) ≥ f. Thus, from Lemma 2.2, r is a legal retiming
of G such that &(G_{r,f}) ≤ c.
The following lemma proves that G f and G r,f are struc-
turely isomorphic, which means that there is a one-to-one
correspondence among edges and nodes of the two
graphs, and the mapping is to circularly shift every copy
of node v in V by the amount r(v). Actually, this lemma
gives a way of constructing DFG G r,f directly from G with
given r and f, instead of constructing G r first and then unfolding
it. We consider that G f and G r,f have the same node
set, denoted V f , but different edge sets, denoted E f and E r,f
respectively, and different delay functions, denoted d f and
d r,f respectively.
Lemma 3.2. Let G = (V, E, d, t) be a DFG, r a retiming on G,
and f an unfolding factor. The unfolded graph G_f = (V_f, E_f, d_f, t_f)
and the unfolded retimed graph G_{r,f} = (V_f, E_{r,f}, d_{r,f}, t_f)
have the following relation: There is an edge from u_i to v_j in
G_f iff there is an edge from u_{(i-r(u)) % f} to v_{(j-r(v)) % f} in G_{r,f}.
PROOF. Consider an edge e_{r,f} from u_{(i-r(u)) % f} to v_{(j-r(v)) % f} in
G_{r,f}. From Property 1, this edge corresponds to a copy of the edge
u -e-> v in G_r, and it exists iff d_r(e) ≡_f (j − r(v)) − (i − r(u)).
Since d_r(e) = d(e) + r(u) − r(v), the modular equation is
equivalent to d(e) ≡_f j − i, which means that there
is an edge from u_i to v_j in G_f. Thus, the lemma is proved.
The above lemma also holds for the corresponding paths in
G_f and G_{r,f}. For the structurally equivalent graphs G_{f,r_f} and
G_{r,f}, we show that there is a certain correspondence between
the two retimings r_f and r.
Lemma 3.3. For any legal retiming r on the original graph G
such that &(G_{r,f}) ≤ c, there exists a legal retiming r_f on the
unfolded graph G_f such that &(G_{f,r_f}) ≤ c.
PROOF. Let r be a retiming from V to Z such that &(G_{r,f}) ≤ c.
From Lemma 3.2, we know that G_f and G_{r,f} are structurally
equivalent, and there is an edge from u_i to v_j in G_f
iff there is an edge from u_{(i-r(u)) % f} to v_{(j-r(v)) % f} in G_{r,f}.
We want to prove that there exists a retiming r_f such
that G_{f,r_f} is equivalent to G_{r,f}, that is, each pair of corresponding
edges have the same delay count. Let e_f and
e_{r,f} be a pair of corresponding edges, e_f from u_i to v_j in G_f
and e_{r,f} from u_{(i-r(u)) % f} to v_{(j-r(v)) % f} in G_{r,f}. We want to find a retiming r_f to
satisfy the equations d_{f,r_f}(e_f) = d_{r,f}(e_{r,f}) for every such pair.
Since d_{f,r_f}(e_f) = d_f(e_f) + r_f(u_i) − r_f(v_j), we derive
r_f(u_i) − r_f(v_j) = d_{r,f}(e_{r,f}) − d_f(e_f). Thus, if the following linear
system has an integer solution, a retiming r_f is found. For
every edge e_f from u_i to v_j in G_f and its corresponding edge e_{r,f} in G_{r,f},
r_f(u_i) − r_f(v_j) = d_{r,f}(e_{r,f}) − d_f(e_f),
where the values r_f(u_i) are unknown variables and the right-hand sides
are constants. We show that the linear
system is consistent and has an integer solution r_f. Since G_{f,r_f}
is equivalent to G_{r,f}, the retiming r_f is certainly
a legal retiming. The lemma is proved.
The following theorem is derived from Lemma 3.1 and
Lemma 3.3.
THEOREM 1. Let G be a DFG and f a fixed unfolding factor.
a) There is a legal retiming r on the original graph G such
that &(G_{r,f}) ≤ c if and only if there is a legal retiming
r_f on the unfolded graph G_f such that &(G_{f,r_f}) ≤ c.
b) MCP(f) = min_r &(G_{r,f}).
It seems that retiming on the unfolded graph tends to have
better iteration period, because finer retiming functions can
be found. However, this theorem tells us that, for a fixed
unfolding factor, the minimum iteration period can be
found no matter which unfolding or retiming is performed
first. Obviously, a retiming on the original graph saves time
and space, which is the focus of the rest of the paper.
It is well known that any DFG which involves loops, feedbacks
or recursions has a lower bound on the iteration period
[11]. This iteration bound %(G) for a DFG G is given by
%(G) = max over all loops l in G of T(l)/D(l),
where T(l) is the sum of computation time in loop l, and D(l) is
the sum of delay counts in loop l. For a unit-time DFG, it takes
O(|V||E|) to compute the bound %(G). The loop which gives
the iteration bound is called the critical loop, l_cr. For example, the
iteration bound of the DFG in Fig. 2a is 4/3. A schedule is rate-optimal if the iteration
period of this schedule equals the iteration bound.
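For small DFGs the bound can be checked directly by enumerating simple cycles; the sketch below (our own code, using NetworkX, with a made-up example graph) is exponential in the worst case and is only meant to illustrate the definition, not the O(|V||E|) method cited above.

from fractions import Fraction
import networkx as nx

def iteration_bound(edges, t, d):
    # B(G) = max over loops l of T(l)/D(l); brute force over simple cycles.
    g = nx.DiGraph(edges)
    best = Fraction(0)
    for cycle in nx.simple_cycles(g):
        T = sum(t[v] for v in cycle)
        D = sum(d[(cycle[i], cycle[(i + 1) % len(cycle)])] for i in range(len(cycle)))
        best = max(best, Fraction(T, D))     # D > 0 for a well-formed DFG
    return best

# made-up unit-time example: one 3-node loop carrying 2 delays, so the bound is 3/2
t = {"A": 1, "B": 1, "C": 1}
d = {("A", "B"): 0, ("B", "C"): 1, ("C", "A"): 1}
print(iteration_bound(list(d), t, d))        # 3/2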
In this section, we show that c/f ≥ %(G) is a necessary and
sufficient condition for the existence of a retiming to produce
a schedule with unfolding factor f and cycle period c. The
minimum cycle period for a given unfolding factor is derived
as MCP(f) = ⌈f · %(G)⌉. An efficient O(|V||E|)-time algorithm
is designed to find such a retiming.
Let G − x = (V, E, d′, t) be the graph modified from G by setting d′(e) = d(e) − x for every edge e in E. Note
that G − x may have nonintegral, even negative, delay
counts if x is not an integer. The next lemma shows the relation
of G − f/c and the lower bound %(G).
LEMMA 4.1. Let G be a unit-time DFG and f and c positive integers.
The graph G − f/c contains no loops having negative
delay counts if and only if c/f ≥ %(G).
PROOF. Assume that G − f/c contains no loops having negative
delay counts. Let D′(l) be the delay count of a
loop l in G − f/c. Since the number of edges in loop l
equals T(l) in a unit-time DFG, we know D′(l) = D(l) −
(f/c) · T(l) ≥ 0. Therefore, we have c/f ≥ T(l)/D(l) for
every loop l in G. The if part can be proved similarly.
Similar to the characterization of cycle period for unit-time
DFGs under retiming in [1], we give a characterization
of cycle period under both retiming and unfolding.
LEMMA 4.2. Let G = (V, E) be a unit-time DFG, c a positive integer,
and f an unfolding factor. There is a legal retiming r
on G such that &(G_{r,f}) ≤ c if and only if G − f/c contains
no loops having negative delay counts.
PROOF. The only if part can be easily proved by contradic-
tion. If there is a loop with negative delay, from
Lemma 4.1, we know c/f is smaller than the lower
bound %(G). Thus, no retiming r exists such that &(G_{r,f}) ≤ c.
For the if part, we assume that there is no loop
with negative delay count. We construct a retiming
for the DFG as follows: We first add a new node v_0 in
the graph G − f/c, where v_0 is connected to all the
nodes with edges of delay 0, and compute all the shortest
paths from v_0 to the other nodes. Let Sh(v) be the length
of the shortest path from v_0 to v. We choose the
retiming r(v) = ⌈Sh(v)⌉ for every
v in V. It is easy to prove that r is a legal retiming and
that &(G_{r,f}) ≤ c, which is similar to the proof in [1].
From this lemma, we obtain a retiming algorithm that finds a
retiming r such that Φ(G_{r,f}) ≤ c, as shown in Fig. 7. We adopt
the single-source shortest path algorithm introduced in [12]
to find the shortest paths Sh(v) for all nodes v. If there is a
negative loop, the algorithm reports it; if not, we return
the retiming function r(v) as the ceiling of Sh(v). The
time complexity of this retiming algorithm is O(|V| · |E|).
The static schedule with cycle period c can be easily obtained
by unfolding the retimed graph G_r in time O(f |E|).
The following theorem provides a simple existence
criterion, c/f ≥ B(G), to check whether a given cycle period
c can be achieved.
THEOREM 2. Let G be a unit-time DFG and f and c positive integers.
The following statements are equivalent:
a) c/f ≥ B(G).
b) G ⊖ f/c does not contain any cycle having negative delay count.
c) There exists a legal retiming r on G such that, after G
is retimed by r and unfolded by f, the cycle period
Φ(G_{r,f}) ≤ c.
PROOF. From Lemma 4.1, the first and the second statements
are equivalent. The equivalence of the second
and the third statements is proved from Lemma 4.2.
For a pair of given integers f and c, if there exists a legal
retiming r such that Φ(G_{r,f}) ≤ c, we say the pair is feasible.
From Theorem 2, the designer only needs to check the inequality
c/f ≥ B(G) in order to decide the feasibility of a pair
of f and c. For an unfolding factor f, the minimum feasible c
from the inequality is MCP(f) = ⌈f · B(G)⌉. Thus, all the
pairs of f and MCP(f) are easily generated, and the designer
may choose a suitable pair according to the requirements.
A legal retiming r for such a chosen pair can be found
in time O(|V| · |E|). The corresponding schedule
can be generated from the graph G_{r,f}. The maximum difference
between the iteration period of the chosen pair and the
iteration bound B(G) is less than 1/f.
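As a concrete illustration of this feasibility test (our sketch, not code from the paper), the check c/f ≥ B(G) and a table of MCP(f) values can be computed directly once the iteration bound is available as a rational number; the function names below are ours.

```python
from fractions import Fraction
from math import ceil

def is_feasible(f: int, c: int, iteration_bound: Fraction) -> bool:
    """A pair (f, c) is feasible iff c/f >= B(G) (Theorem 2)."""
    return Fraction(c, f) >= iteration_bound

def mcp(f: int, iteration_bound: Fraction) -> int:
    """Minimum feasible cycle period for unfolding factor f: ceil(f * B(G))."""
    return ceil(f * iteration_bound)

# Example: B(G) = 4/3, as for the DFG of Fig. 2a.
B = Fraction(4, 3)
table = {f: mcp(f, B) for f in range(1, 6)}      # {1: 2, 2: 3, 3: 4, 4: 6, 5: 7}
assert is_feasible(3, 4, B) and not is_feasible(3, 3, B)
```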
Retiming Algorithm
Input: a DFG G, an unfolding factor f and a cycle period c.
Output: A retiming r such that Φ(G_{r,f}) ≤ c, if one exists.
begin /* pass is used to prevent the algorithm being trapped in negative loops. */
  for every node v ∈ V do begin Sh(v) ← 0; PUSH v to Q; end;
  pass ← 0;
  last ← the last element in Q;
  while Q is not empty and pass < |V| do begin
    Pop v from Q;
    for every edge e: v → w do
      if Sh(v) + d(e) − f/c < Sh(w) then begin
        Sh(w) ← Sh(v) + d(e) − f/c;
        if w is not in Q then PUSH w to Q; end;
    if v = last then begin
      last ← the last element in Q;
      pass ← pass + 1; end;
  end;
  if Q is empty then
    for every v ∈ V do r(v) ← ⌈Sh(v)⌉
  else report "There is a negative loop in the graph G ⊖ f/c."
end
Fig. 7. Retiming Algorithm for unit-time DFGs.
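For readers who prefer an executable form, the following Python sketch re-expresses Fig. 7 under our reading of the figure: distances Sh(v) start at 0 (playing the role of the virtual source v_0 of Lemma 4.2), edges are relaxed with weight d(e) − f/c, and a pass counter bounds the number of relaxation passes so that a negative loop in G ⊖ f/c is detected. The data layout (edge triples) is an assumption of ours.

```python
from collections import deque
from fractions import Fraction
from math import ceil

def retime_unit_time(nodes, edges, f, c):
    """nodes: list of node ids; edges: list of (u, v, delay) for G.
    Returns {v: r(v)} with Phi(G_{r,f}) <= c, or None when G ⊖ f/c has a
    negative loop, i.e., when c/f < B(G)."""
    shift = Fraction(f, c)
    adj = {v: [] for v in nodes}
    for u, v, d in edges:
        adj[u].append((v, d - shift))            # edge weight in G ⊖ f/c
    sh = {v: Fraction(0) for v in nodes}         # virtual source v0 at distance 0
    queue = deque(nodes)
    in_queue = set(nodes)
    passes, last = 0, nodes[-1]
    while queue and passes < len(nodes):
        v = queue.popleft()
        in_queue.discard(v)
        for w, wgt in adj[v]:
            if sh[v] + wgt < sh[w]:              # relax edge v -> w
                sh[w] = sh[v] + wgt
                if w not in in_queue:
                    queue.append(w)
                    in_queue.add(w)
        if v == last:                            # one full pass over Q finished
            passes += 1
            if queue:
                last = queue[-1]
    if queue:                                    # still relaxing after |V| passes
        return None                              # negative loop in G ⊖ f/c
    return {v: ceil(sh[v]) for v in nodes}
```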
As a consequence of Theorem 2, the minimum rate-optimal
unfolding factor, which produces a rate-optimal schedule,
can be derived as D(l_cr) / gcd(T(l_cr), D(l_cr)). The
corresponding rate-optimal schedule can be easily derived
from our algorithm.
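As a small numeric illustration (ours), the minimum rate-optimal unfolding factor and the corresponding cycle period follow immediately from the critical-loop values; the values T(l_cr) = 4 and D(l_cr) = 3 below match the Fig. 2a example.

```python
from math import gcd, ceil
from fractions import Fraction

def rate_optimal_unfolding(T_cr, D_cr):
    """Minimum rate-optimal unfolding factor f = D(l_cr)/gcd(T(l_cr), D(l_cr))
    and the corresponding cycle period MCP(f) = ceil(f * B(G))."""
    f = D_cr // gcd(T_cr, D_cr)
    c = ceil(f * Fraction(T_cr, D_cr))   # here c = f * T_cr / D_cr exactly
    return f, c

# Critical loop with T(l_cr) = 4 and D(l_cr) = 3, i.e., B(G) = 4/3 (Fig. 2a):
assert rate_optimal_unfolding(4, 3) == (3, 4)   # iteration period 4/3 is achieved
```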
We first design an algorithm to find a retiming r such that
Φ(G_{r,f}) ≤ c, if possible, for a given unfolding factor f and a
given cycle period c. A preprocessing algorithm is performed
first in order to find a set of critical paths, and is
presented in the second subsection. In the first subsection,
some properties of these critical paths are derived in
order to represent the above problem in a simple Linear
Programming (LP) form, which can be solved in time
O(|V|^3) by a general shortest path algorithm. Then, by binary
search on all the O(f|V|^2) possible cycle periods, we
can find a legal retiming r which achieves the minimum
cycle period on the DFG G_{r,f} for a given unfolding factor f in
time O((f|V|^2 + |V|^3) log(f|V|)).
5.1 Retiming Algorithm
Since the delay count of every path from u to v is changed
by the same amount r(u) − r(v) after the DFG is retimed by
r, the quantities defined below specify a set of critical paths
among all paths from u to v, such that we only need to look at
these quantities in order to decide whether Φ(G_f) ≤ c.
DEFINITION 2. Let u and v be nodes in G and s an integer where 0 ≤ s < f.
a) Define Δ(u, v) = min{d(p) | p is a path u → v in G}.
b) Define Θ_s(u, v) = max{t(p) | p is a path u → v in G such that d(p) = Δ(u, v) + s}.
The value Δ(u, v) is the minimum delay
count over all the paths from u to v. For an integer s, the value Θ_s(u, v) is the
maximum computation time among all these paths from u to
v whose delay counts equal Δ(u, v) + s. The paths
p: u → v such that t(p) = Θ_s(u, v) and d(p) = Δ(u, v) + s are
called critical paths. The preprocessing algorithm described
in Subsection 5.2 computes Δ(u, v) and Θ_s(u, v) for every pair of nodes
u and v in V and every s with 0 ≤ s < f.
THEOREM 3. Let G = (V, E, t, d) be a DFG, c a cycle period, and f
an unfolding factor. Then, Φ(G_f) ≤ c if and only if, for all
nodes u and v in V and for all s where 0 ≤ s < f, either
Δ(u, v) + s ≥ f or Θ_s(u, v) ≤ c.
PROOF. To prove the only if part by contradiction, we assume
that Φ(G_f) ≤ c and that there exist u, v ∈ V and s ∈
[0, f) such that Δ(u, v) + s < f and Θ_s(u, v) > c. Let
p be a critical path u → v
with t(p) = Θ_s(u, v). Thus, there exists a path p in G such
that d(p) < f and t(p) > c. From Lemma 2.2, we know
Φ(G_f) > c. This is a contradiction.
Now, we prove the if part. Assume that for all u, v ∈
V and for all s where 0 ≤ s < f, either Δ(u, v) + s ≥ f
or Θ_s(u, v) ≤ c. Under this assumption, we claim that,
for every path p in G, if d(p) < f, then t(p) ≤ c, which implies
Φ(G_f) ≤ c. Let p: u → v be a path in G. If p is a critical
path, the claim is true. Otherwise, we want to show
that d(p) ≥ f or t(p) ≤ c. In case d(p) ≥ Δ(u, v) + f, we
know d(p) ≥ f since Δ(u, v) ≥ 0. Otherwise,
d(p) = Δ(u, v) + s for some s with 0 ≤ s < f; since every path
with this delay count satisfies t(p) ≤ Θ_s(u, v), the assumption gives
either d(p) ≥ f or t(p) ≤ c. Therefore, the claim
is proved.
The next theorem gives us the necessary and sufficient
conditions for a retiming r with Φ(G_{r,f}) ≤ c in terms of Δ and
Θ_s. From this theorem, we are able to construct a simple
LP form for retiming and unfolding.
THEOREM 4. Let G = (V, E, t, d) be a DFG, c a cycle period, and f
an unfolding factor. The following two statements are
equivalent:
a) r is a retiming on G such that Φ(G_{r,f}) ≤ c.
b) For all nodes u and v in V and for all s where 0 ≤ s < f,
either Δ(u, v) + r(u) − r(v) + s ≥ f or Θ_s(u, v) ≤ c.
PROOF. Let Δ_r and Θ_{r,s} be the corresponding functions
similarly defined on G_r.
First, we note that the effect of retiming on the functions
Δ(u, v) and Θ_s(u, v) is similar to that on the
functions d and t, i.e., Δ_r(u, v) = Δ(u, v) + r(u) − r(v) and
Θ_{r,s}(u, v) = Θ_s(u, v). Then the theorem is easily
derived from Theorem 3 applied to the retimed graph G_r.
THEOREM 5. Let G = (V, E, t, d) be a DFG, c a cycle period, and f
an unfolding factor. Then, r is a legal retiming on G such
that Φ(G_{r,f}) ≤ c if and only if
r(v) − r(u) ≤ d(e) for every edge e: u → v of G, and
r(v) − r(u) ≤ Δ(u, v) + s − f for all nodes u, v in V and all s
with 0 ≤ s < f such that Θ_s(u, v) > c.
PROOF. The function r is a legal retiming if and only if d_r(e) ≥ 0
for every e in G. Since d_r(e) = d(e) + r(u) − r(v), r is a legal
retiming if and only if r(v) − r(u) ≤ d(e) for every
edge e: u → v. The second inequality comes from Theorem 4.
For a particular cycle period c and an unfolding factor f,
this theorem gives a simple LP form to find a legal retiming
r, if it exists, such that Φ(G_{r,f}) ≤ c. This LP form can be easily
solved by single-source shortest path algorithms. Assume
that the preprocessing of computing Δ and Θ_s for a
given unfolding factor f is done. The number of values
Θ_s(u, v) is O(f|V|^2). In time O(f|V|^2), we can generate
an LP form for a given cycle period c by scanning the values
of Θ_s(u, v). Because the inequalities which have the
same left-hand side are covered by the one which has the
smallest value on the right-hand side, we obtain an LP form
which has at most |V|^2 inequalities. Thus, the legal retiming
can be found in time O(|V|^3) by the Bellman-Ford algorithm
for shortest-path problems.
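Because the LP form consists purely of difference constraints, it can be solved exactly as in the classical retiming setting. The sketch below is our illustration of that step: it assumes Δ and Θ_s have already been tabulated (as dictionaries keyed by node pairs and by (u, v, s), respectively, a layout we chose for the example), keeps only the tightest bound per node pair, and runs a Bellman-Ford style relaxation from an implicit virtual source.

```python
def solve_retiming_lp(nodes, edges, delta, theta, f, c):
    """Difference constraints of Theorem 5.
    edges: iterable of (u, v, d) for the edges of G;
    delta[(u, v)]: minimum path delay Delta(u, v);
    theta[(u, v, s)]: Theta_s(u, v) for 0 <= s < f.
    Returns a retiming r (dict) with Phi(G_{r,f}) <= c, or None if infeasible."""
    cons = {}
    def add(u, v, bound):                       # constraint r(v) - r(u) <= bound
        key = (u, v)
        cons[key] = min(cons.get(key, bound), bound)
    for u, v, d in edges:
        add(u, v, d)                            # legality: r(v) - r(u) <= d(e)
    for (u, v, s), t_val in theta.items():
        if t_val > c:
            add(u, v, delta[(u, v)] + s - f)    # r(v) - r(u) <= Delta + s - f
    dist = {v: 0 for v in nodes}                # implicit virtual source at 0
    changed = True
    for _ in range(len(nodes)):                 # Bellman-Ford over the constraints
        changed = False
        for (u, v), b in cons.items():
            if dist[u] + b < dist[v]:
                dist[v] = dist[u] + b
                changed = True
        if not changed:
            break
    if changed:                                 # still relaxing: negative cycle
        return None
    return dist                                 # r(v) = dist[v]
```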
It is easy to observe that the cycle period of G_{r,f} is a value
of Θ_s(u, v) for some u, v, and s. We can sort the set of
values {Θ_s(u, v) | u, v ∈ V and s ∈ [0, f)} and
perform binary search on this set of values in order to find
the minimum cycle period of G_{r,f} over all retimings r. Therefore,
the legal retiming such that Φ(G_{r,f}) is minimum
can be found in time O((f|V|^2 + |V|^3) log(f|V|)).
From the definitions, the functions Δ and Θ_s are independent
of the unfolding factor. Let F be the maximum
value of the unfolding factor under resource consideration.
After we perform the preprocessing step for the maximum
unfolding factor F once in time O(F^3 |V|^3), the values of MCP(f) for any f, f ≤ F, can be found
by using our retiming algorithm without further preprocessing
for each f.
Although the minimum cycle period MCP(f) is an increasing
function, the behavior of the minimum iteration
period MIP(f), which is MCP(f)/f, is hard to characterize. In
order to find the minimum iteration period MIP(f) for every
f ≤ F, we need to compute the minimum cycle period
MCP(f) for every f and then find the one such that MCP(f)/f
is minimized. Thus, the minimum of the minimum iteration
periods MIP(f) over all f, where f ≤ F, can be found by invoking
the retiming algorithm once for each f ≤ F after the single preprocessing step.
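A minimal sketch (ours) of the search strategy, reusing solve_retiming_lp from the previous illustration: the candidate cycle periods are the distinct Θ_s values with s < f, and a binary search over them returns MCP(f) together with a witnessing retiming; MIP is then obtained by trying every f ≤ F.

```python
def min_cycle_period(nodes, edges, delta, theta, f):
    """MCP(f): smallest feasible cycle period, found by binary search over the
    distinct Theta_s values with s < f. Returns (MCP(f), retiming) or None."""
    th_f = {key: val for key, val in theta.items() if key[2] < f}
    candidates = sorted(set(th_f.values()))
    best = None
    lo, hi = 0, len(candidates) - 1
    while lo <= hi:
        mid = (lo + hi) // 2
        r = solve_retiming_lp(nodes, edges, delta, th_f, f, candidates[mid])
        if r is not None:
            best = (candidates[mid], r)         # feasible: try a smaller period
            hi = mid - 1
        else:
            lo = mid + 1                        # infeasible: need a larger period
    return best

def min_iteration_period(nodes, edges, delta, theta, F):
    """MIP over all unfolding factors f <= F, where MIP(f) = MCP(f)/f."""
    best = None
    for f in range(1, F + 1):
        res = min_cycle_period(nodes, edges, delta, theta, f)
        if res is not None:
            mip = res[0] / f
            if best is None or mip < best[1]:
                best = (f, mip)
    return best                                  # (best f, its iteration period)
```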
5.2 Preprocessing Algorithm
The function Δ can be easily computed by all-pairs shortest
path algorithms in time O(|V|^3). We first compute the following
two functions, D_f and T_f, defined on the unfolded
graph G_f, and then show how to compute the function Θ_s
from Δ, D_f, and T_f.
DEFINITION 3. Let G_f = (V_f, E_f) be an unfolded graph, u_i and v_j
nodes in G_f, and s an integer where 0 ≤ s < f. The function
D_f is a function from V_f × V_f to the nonnegative integers,
and T_f is a function from V_f × V_f to the positive integers.
a) Define D_f(u_i, v_j) = min{d_f(p_f) | p_f is a path u_i → v_j in G_f}.
b) Define T_f(u_i, v_j) = max{t_f(p_f) | p_f is a path u_i → v_j in G_f such that d_f(p_f) = D_f(u_i, v_j)}.
(When there is no path u_i → v_j, both D_f(u_i, v_j) and T_f(u_i, v_j) are undefined.)
With the length measure (d_f(p_f), −t_f(p_f)) for a path, the functions
D_f and T_f can be computed in time O(|V|^3 f^3) by the Floyd-Warshall
algorithm for all-pairs shortest path problems [13].
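The length measure can be made concrete as follows (our sketch): each ordered pair of unfolded nodes carries the pair (delay, −time), pairs add componentwise along a path, and taking the lexicographic minimum yields D_f and T_f simultaneously. The edge-list format is an assumption made for the illustration.

```python
INF = (float("inf"), float("inf"))

def floyd_warshall_Df_Tf(nodes_f, edges_f):
    """edges_f: iterable of (u, v, delay, time) over the unfolded graph G_f.
    Returns dist[(u, v)] = (D_f(u, v), -T_f(u, v)); a missing key means 'no path'."""
    dist = {}
    for u, v, d, t in edges_f:
        cand = (d, -t)                           # lexicographic measure (delay, -time)
        dist[(u, v)] = min(dist.get((u, v), INF), cand)
    for w in nodes_f:                            # Floyd-Warshall relaxation
        for u in nodes_f:
            if (u, w) not in dist:
                continue
            duw = dist[(u, w)]
            for v in nodes_f:
                if (w, v) not in dist:
                    continue
                dwv = dist[(w, v)]
                cand = (duw[0] + dwv[0], duw[1] + dwv[1])
                if cand < dist.get((u, v), INF):
                    dist[(u, v)] = cand
    return dist    # D_f(u, v) = dist[(u, v)][0] and T_f(u, v) = -dist[(u, v)][1]
```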
The following lemma shows the relation between the functions Δ and D_f.
LEMMA 5.1. Let u and v be two nodes in V, f an unfolding factor,
and s an integer where 0 ≤ s < f. If there exists a path
p: u → v in G such that d(p) = Δ(u, v) + s, then
D_f(u_0, v_{(Δ(u,v)+s) mod f}) = ⌊(Δ(u, v) + s)/f⌋.
PROOF. First, we show the following property of D_f:
D_f(u_0, v_{(Δ(u,v)+s) mod f}) ≥ ⌊(Δ(u, v) + s)/f⌋. (The detailed
proof appears in [9], [10].) Assume that there exists a
path p: u → v in G such that d(p) = Δ(u, v) + s. From Property
1b, we know the corresponding path p_f: u_0 → v_{(Δ(u,v)+s) mod f}
in G_f has delay d_f(p_f) = ⌊(Δ(u, v) + s)/f⌋. Thus,
D_f(u_0, v_{(Δ(u,v)+s) mod f}) ≤ ⌊(Δ(u, v) + s)/f⌋.
From the above property of D_f, we have
D_f(u_0, v_{(Δ(u,v)+s) mod f}) = ⌊(Δ(u, v) + s)/f⌋.
Thus, the lemma is proved.
The following theorem shows us how to generate
Θ_s(u, v) from Δ(u, v), D_f, and T_f.
THEOREM 6. Consider two nodes u and v in G. For every s where
0 ≤ s < f,
Θ_s(u, v) = T_f(u_0, v_{(Δ(u,v)+s) mod f}) if there exists a path
p: u → v in G with d(p) = Δ(u, v) + s, and Θ_s(u, v) is undefined
otherwise.
PROOF. We rewrite the definition of Θ_s(u, v) in terms of
the paths in the unfolded graph as
Θ_s(u, v) = max{t_f(p_f) | p_f is a path u_0 → v_{(Δ(u,v)+s) mod f}
in G_f such that d_f(p_f) = ⌊(Δ(u, v) + s)/f⌋}. The theorem
follows. (The detailed proof appears in [9], [10].)
The function Δ can be easily computed by all-pairs shortest
path algorithms in time O(|V|^3). In time O(|V|^2 f), we
can construct Θ_s(u, v) for every u and v from T_f by using
the above theorem.
6 CONCLUDING REMARKS
In this paper, we study some fundamental theorems about
the combination of two useful techniques: retiming and
unfolding. This understanding gives us more insight for
many problems in which retiming and unfolding can both
be applied, such as multiprocessor scheduling and data-path
design in VLSI computer-aided design. We believe
that these results can also be applied to other applications.
One interesting result shows that unfolding before retiming
or after retiming has no effect on the iteration period.
Therefore, we do not need to unfold the DFG first to obtain results;
instead, we can perform the operations directly on the original
DFG. We present an efficient algorithm for finding the minimum
iteration period for an unfolding factor f which runs
in time O((f|V|^2 + |V|^3) log(f|V|)), including the retiming.
When we consider the unit-time DFG, which is applicable
to RISC multiprocessors, software pipelining, unit-time parallel
pipelines, etc., more surprising results are obtained. For any
pair of cycle period c and unfolding factor f, as long as c/f is no
less than the iteration bound B(G), there exists such a schedule,
i.e., there exists a retiming r to derive this schedule. The
retiming algorithm runs in time O(|V||E|). The result in
Theorem 2 is generalized in [8] to obtain rate-optimal schedules
for general-time DFGs under various models.
After we understood the fundamental properties in this
paper, we successfully applied these results to a scheduling
problem with resource constraints in [14]. One important
open question is to measure the minimum iteration period
under resource constraints for an unfolding factor from the
results without resource constraints. These fundamental
properties and best schedules can be used to derive approximation
algorithms.
ACKNOWLEDGMENTS
This work was supported in part by U.S. National Science
Foundation Grant MIP-8912100, U.S. Army Research Of-
fice-Durham Grant DAAL03-89-K-0074, and DARPA/ONR
contract N00014-88-K-0459.
--R
"Retiming Synchronous Circuitry,"
"Retiming and Unfolding Data-Flow Graphs,"
"Data Flow Program Graphs,"
"Static Rate-Optimal Scheduling of Iterative Data-Flow Programs Via Optimum Unfolding,"
"Unfolding and Retiming Data-Flow DSP Programs for RISC Multiprocessor Scheduling,"
"Unfolding and Retiming for High-Level DSP Synthesis,"
"A Systematic Approach for Design of Digital-Serial Signal Processing Architectures,"
"Static Scheduling for Synthesis of DSP Algorithms on Various Models,"
"Scheduling Data-Flow Graphs Via Retiming and Unfolding,"
"Scheduling and Behavioral Transformations for Parallel Systems,"
"The Maximum Sampling Rate of Digital Filters under Hardware Speed Constraints,"
Data Structures and Network Algorithms.
Networks and Matroids.
"Rotation Scheduling: A Loop Pipelining Algorithm,"
--TR
--CTR
Timothy W. O'Neil , Edwin H.-M. Sha, Combining extended retiming and unfolding for rate-optimal graph transformation, Journal of VLSI Signal Processing Systems, v.39 n.3, p.273-293, March 2005
Timothy W. O'Neil , Edwin H.-M. Sha, Combining Extended Retiming and Unfolding for Rate-Optimal Graph Transformation, Journal of VLSI Signal Processing Systems, v.39 n.3, p.273-293, March 2005
Qingfeng Zhuge , Bin Xiao , Zili Shao , Edwin H.-M. Sha , Chantana Chantrapornchai, Optimal code size reduction for software-pipelined and unfolded loops, Proceedings of the 15th international symposium on System Synthesis, October 02-04, 2002, Kyoto, Japan
M. Jacome , G. de Veciana , C. Akturan, Resource constrained dataflow retiming heuristics for VLIW ASIPs, Proceedings of the seventh international workshop on Hardware/software codesign, p.12-16, March 1999, Rome, Italy
Qingfeng Zhuge , Zili Shao , Bin Xiao , Edwin H.-M. Sha, Design space minimization with timing and code size optimization for embedded DSP, Proceedings of the 1st IEEE/ACM/IFIP international conference on Hardware/software codesign and system synthesis, October 01-03, 2003, Newport Beach, CA, USA
Qingfeng Zhuge , Bin Xiao , Edwin H.-M. Sha, Code size reduction technique and implementation for software-pipelined DSP applications, ACM Transactions on Embedded Computing Systems (TECS), v.2 n.4, p.590-613, November
Han-Saem Yun , Jihong Kim, Power-aware modulo scheduling for high-performance VLIW processors, Proceedings of the 2001 international symposium on Low power electronics and design, p.40-45, August 2001, Huntington Beach, California, United States
Meikang Qiu , Zhiping Jia , Chun Xue , Zili Shao , Edwin H.-M. Sha, Voltage Assignment with Guaranteed Probability Satisfying Timing Constraint for Real-time Multiproceesor DSP, Journal of VLSI Signal Processing Systems, v.46 n.1, p.55-73, January 2007
Dongming Peng , Mi Lu, On exploring inter-iteration parallelism within rate-balanced multirate multidimensional DSP algorithms, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.13 n.1, p.106-125, January 2005
Michael I. Gordon , William Thies , Saman Amarasinghe, Exploiting coarse-grained task, data, and pipeline parallelism in stream programs, ACM SIGOPS Operating Systems Review, v.40 n.5, December 2006
Han-Saem Yun , Jihong Kim , Soo-Mook Moon, Time optimal software pipelining of loops with control flows, International Journal of Parallel Programming, v.31 n.5, p.339-391, October
Chiang , Lan-Rong Dung, Verification method of dataflow algorithms in high-level synthesis, Journal of Systems and Software, v.80 n.8, p.1256-1270, August, 2007
Karam S. Chatha , Ranga Vemuri, Hardware-Software partitioning and pipelined scheduling of transformative applications, IEEE Transactions on Very Large Scale Integration (VLSI) Systems, v.10 n.3, p.193-208, June 2002 | retiming;loop parallelization;data-flow graphs;parallel processing;scheduling;unfolding |
629432 | A Unified Architecture for the Computation of B-Spline Curves and Surfaces. | AbstractB-Splines, in general, and Non-Uniform Rational B-Splines (NURBS), in particular, have become indispensable modeling primitives in computer graphics and geometric modeling applications. In this paper, a novel high-performance architecture for the computation of uniform, nonuniform, rational, and nonrational B-Spline curves and surfaces is presented. This architecture has been derived through a sequence of steps. First, a systolic architecture for the computation of the basis function values, the basis function evaluation array (the BFEA), is developed. Using the BFEA as its core, an architecture for the computation of NURBS curves is constructed. This architecture is then extended to compute NURBS surfaces. Finally, this architecture is augmented to compute the surface normals, so that the output from this architecture can be directly used for rendering the NURBS surface.The overall linear structure of the architecture, its small I/O requirements, its nondependence on the size of the problem (in terms of the number of control points and the number of points on the curve/surface that have to be computed), and its very high throughput make this architecture highly suitable for integration into the standard graphics pipeline of high-end workstations. Results of the timing analysis indicate a potential throughput of one triangle with the normal vectors at its vertices, every two clock cycles. | Introduction
The explosive growth of computer graphics over the last two decades has been greatly facilitated by
impressive hardware innovations: Raster graphics became popular due to the emergence of low-cost
semiconductor memories for the frame buffer. Graphics co-processors and graphics display controllers
designed with the goal of off-loading the graphics computation from the CPU resulted in
the widespread availability of quality graphics cards for personal computers. The Geometry Engine
[3] ushered in the era of high-performance graphics workstations. Successive generations of work-stations
have exploited increasing levels of pipelining and parallelism in the graphics pipeline (See
for example, the Reality Engine [1]). The graphics applications however, have constantly increased
their requirements so as to be continuously beyond the reach of available hardware capabilities. In
this evolution, more and more complex graphics abstractions have been made available to the main
processor: starting from simple lines, the current high-end systems support anti-aliased, Z-buffered,
Gouraud-shaded, 24 bits of color per pixel triangles in hardware.
We believe that the next step in this evolution is the migration of parametric curves and surfaces
into hardware. An example of this trend is the recent microcode implementation of non-uniform
rational B-Splines (NURBS) in a graphics workstation [6]. In this paper we present a unified
architecture for the computation of various types of B-Spline curves and surfaces. We believe that
this architecture is significant because of the following factors:
• This is the first solution to handle all types of B-Spline curves and surfaces.
• The architecture is capable of very high performance: one triangle with the normals at its
vertices, every two clock cycles.
• The architecture has a linear structure that minimizes the number of pins required in a VLSI
implementation.
• The architecture is independent of the size of the curve/patch (in terms of the number of
control points, as well as the number of points on the curve/patch) to be computed.
• The above three features make this architecture highly suitable for integration into the graphics
pipeline of high-end workstations.
We are focusing on B-Splines rather than many other parametric curves and surfaces that have
been described in the literature, since NURBS has emerged as the modeling primitive of choice for the
geometric design community. Non-uniform rational B-Spline curves and surfaces have been an Initial
Graphics Exchange Specification (IGES) standard since 1983 [14]. Many commercial or in-house
modeling applications like Geomod and Proengineer are based on rational B-Spline representations.
The popularity of rational B-Splines is due to the following facts:
ffl They provide one common mathematical form for the accurate representation of standard
analytic shapes, especially conics, as well as free-form curves and surfaces. Thus unification of
all forms of curves and surfaces is done by NURBS.
ffl They offer an extra degree of freedom in the form of weights, apart from knot vector and
control points, which can be used in designing wide variety of shapes.
ffl They are projection invariant: That is, the projection of the curve is achieved by projecting
the control points and suitably modifying the weights of the control points.
Introduction to rational quadratic representation of conics can be found in [15]. Further information
about the NURBS representation of circles is in [21],[24].
A few hardware implementations of B-Splines have been reported in the literature. One of
the early papers in this direction is the work of T.Li et al.[16] where an architecture to generate
Bezier curves and patches was proposed. De Rose [7] et al., proposed a triangular architecture to
generate B-Spline curves using the deBoor-Cox algorithm. Mathias [17], has developed a similar
architecture for Bezier curves using the de Casteljau algorithm. He has also developed architectures
for B-Spline inversion and B-Spline generation [18]. Recently, Megson [19] has come up with a
design to calculate the basis functions required to generate B-Splines. He has also developed a
composite design to calculate B-Spline patches. All the architectures presented in the literature
have the following limitations that seriously restrict their practical implementation.
• the size of the hardware is tied to the size of the problem (the number of control points) that
is to be solved;
• a large number of I/O pins are needed.
The architecture proposed in this paper overcomes these limitations and in addition, as pointed
out earlier, provides a unified high-performance solution to the computation of B-Spline curves and
surfaces.
In the next section, the fundamentals of B-Splines are explained. The properties of all types of B-Splines
as well as their computation requirements are outlined in this section. An efficient algorithm
to compute basis functions and its hardware implementation are presented in Section 3. Using
Figure
1: B-Spline Curve
the above basis function calculating architecture, a unified architecture to calculate Uniform/Non-
Uniform Rational/Non-Rational B-Spline Curves and Surfaces is presented in Section 4. Finally,
the architecture presented in this work is compared with other similar solutions proposed in the
literature and is shown to be superior both in time and space efficiency.
2 Theory of B-Splines
The B-Spline curve is defined over a parameter u by the following equation:

P(u) = sum_{i=0}^{n} P_i N_{i,k}(u),  u_min ≤ u ≤ u_max.   (1)

The curve is drawn for various values of u varying from u_min to u_max. Typically, u_min = 0
and u_max = 1. The point on the curve at the parametric value u is denoted by P(u). There are
(n + 1) control points, denoted by P_i. These control points are points in object space, using which
the shape of the B-Spline curve can be controlled. In geometric modeling applications, the positions
of these control points are changed to achieve the required shape of the curve. The curve need not
pass through the control points, though B-Spline curves always lie within the convex hull of the control
points. A typical B-Spline curve is shown in Figure 1. The basis function or the blending function is
denoted by N_{i,k}(u). These basis functions decide the extent to which a particular control point
controls the curve at a particular parametric value u. The parameter k is called the order (one more
than the degree) of the curve. For a cubic curve, k = 4.
The basis function N_{i,k}(u) depends on the parametric value and the order of the curve, and is
recursively defined as follows:

N_{i,1}(u) = 1 if t_i ≤ u < t_{i+1}, and 0 otherwise.   (2)

N_{i,k}(u) = ((u − t_i) / (t_{i+k−1} − t_i)) N_{i,k−1}(u) + ((t_{i+k} − u) / (t_{i+k} − t_{i+1})) N_{i+1,k−1}(u).   (3)
Figure
2: B-Spline Surface
The constants t_i, called knot values, are specific instances of the parametric value u and are
strictly in non-decreasing order. There are (n + k + 1) knot values, t_0, t_1, ..., t_{n+k}. All the knot values
put together are called a knot vector. In Section 2.1 we will see more about the knot vector and the
properties of the basis functions.
The properties of basis functions and B-Splines are discussed in detail in [22]. For further reading
on B-Splines [2][4][20] and [23] are suggested.
The extension of a B-Spline curve is the B-Spline surface, given by

P(u, v) = sum_{i=0}^{n} sum_{j=0}^{m} P_{ij} N_{i,k}(u) N_{j,l}(v),   (4)

where P(u, v) is the point on the surface for the parametric values u and v. The grid of (n + 1) × (m + 1)
control points is denoted by P_{ij}. The basis functions in the u and v directions are denoted by
N_{i,k}(u) and N_{j,l}(v). The variables k and l denote the orders of the surface in the directions of u and
v, respectively. For a bi-cubic patch, k = l = 4. Figure 2 shows a bi-cubic B-Spline patch.
A Rational B-Spline curve is the normalized result in 3D of a 4D non-rational B-Spline curve
defined by 4D control points. A Rational B-Spline curve is defined by the formula

P(u) = (sum_{i=0}^{n} w_i P_i N_{i,k}(u)) / (sum_{i=0}^{n} w_i N_{i,k}(u)).   (5)

The term w_i denotes the weight of the 3D control point P_i. The term w_i N_{i,k}(u) / (sum_{j=0}^{n} w_j N_{j,k}(u)) denotes the
extent to which the control point P_i has control over the curve. When w_i tends to infinity, the curve
is pulled towards P_i, and when w_i is zero, the control point P_i does not have any influence over
the curve. A detailed study of rational B-Splines was carried out first by Versprille [26]. More details
about rational B-Splines can be found in [22], [24], [25].
A rational B-Spline surface is defined by the formula

P(u, v) = (sum_{i=0}^{n} sum_{j=0}^{m} w_{ij} P_{ij} N_{i,k}(u) N_{j,l}(v)) / (sum_{i=0}^{n} sum_{j=0}^{m} w_{ij} N_{i,k}(u) N_{j,l}(v)).   (6)

Here again, the term w_{ij} N_{i,k}(u) N_{j,l}(v) / (sum_{p=0}^{n} sum_{q=0}^{m} w_{pq} N_{p,k}(u) N_{q,l}(v)) denotes the extent of influence of the control
point P_{ij} on the patch. There is a grid of (n + 1) × (m + 1) control points for the surface, and with
every control point is associated the weight of that control point.
2.1 Basis Functions and Control Points
In this section, we point out a few interesting aspects of the basis functions and the control points.
These properties will be used later in this paper.
As the basis functions are dependent on the knot vector, we describe the knot vector in more
detail. Let the parameter u vary from 0 to 1. Let the knot vector be [0.0, 0.1, 0.13, 0.3, 0.35, 0.35,
0.35, 0.4, 0.6, 0.7, 0.9, 1.0], and let the order of the curve k be 4 (cubic curve). The basis function
curves for this knot vector and order are shown in Figure 3. As shown in the figure, the knot values
are specific instances of u as it varies from its minimum value to the maximum value. Note that
the knot values can be repeated, as in the example above, where t_4 = t_5 = t_6 = 0.35. The
basis function for the control point P_0, namely N_{0,4}, will be non-zero for the values of u between
t_0 = 0.0 and t_4 = 0.35. At all other values of u, this basis function will remain zero. The basis
function N_{1,4} will be non-zero only for the values of u from t_1 to t_5. In general, the basis function
N_{i,k} will be non-zero for the values of u from t_i to t_{i+k}. It can be shown that at any particular
value of u in the valid range, there will be k and only k basis functions with non-zero values [8]. The
valid range of u is from t_{k−1} to t_{n+1}. In our example, the valid range is from 0.3 to 0.6.
Those basis functions with non-zero values for the value of u under consideration are called useful
basis functions. We will use this concept of useful basis functions later in this section to reduce the
computational complexity of B-Splines.
We can now impose various restrictions on the knot vector. These restrictions give rise to various
kinds of B-Splines. If the knot vector is such that the difference between adjacent knot values is constant,
then the resultant B-Spline is called a Uniform B-Spline [8]. A typical uniform knot vector would
be [0.0, 0.1, 0.2, 0.3, ..., 0.9, 1.0]. Here we can see that the differences between the adjacent knot
values are equal. For a Uniform B-Spline all basis function curves are identical (Figure 4). Hence
it is enough to calculate the basis function curve only once, thus reducing the complexity of the
Figure 3: Basis Function Curves for Non-Uniform Knot Vector
B-Spline computation to a large extent. Taking advantage of this fact, a VLSI architecture to solve
Uniform B-Spline curves has been proposed in [10] and a unified architecture to compute uniform
rational/non-rational B-Spline curves/patches is also proposed in [11].
In the uniform knot vector, if the first and the last knot values are repeated k times then the
resultant knot vector is called the Open knot vector. This forces the curve to start from the first
control point and end in the last control point.
If the knot vector does not conform to the above two conditions then it is called a Non-Uniform
knot vector and the B-Spline is called a Non-Uniform B-Spline. The example given in the beginning
of this section is a non-uniform knot vector. Unlike uniform B-Splines, the basis function values
for the B-Spline with non-uniform knot vector is to be computed for every value of the parameter.
The rationalized B-Spline curves and surfaces with Non-Uniform knot vector are called NURBS
(Non-Uniform Rational B-Spline) curves and surfaces.
Having described knot vectors and the types of B-Spline curves and surfaces, we now proceed to
analyze the properties of the basis functions. It is clear from the equations of B-Spline curves and
surfaces that only if the basis function value is non-zero, the corresponding control point controls
Figure 4: Basis Function Curves for Uniform Knot Vector
the shape of the curve. As there are only k useful basis functions at any value of u, as discussed
earlier, only k control points contribute to the computation of the curve. These control points are
called active control points. In the case of a B-Spline surface, there will be a grid of (k × l) active control
points.
If we know the useful basis functions at a particular value of u, we can compute only those
basis function values and compute the point on the curve. The other basis functions and their
corresponding control points need not be considered. Hence the complexity of the computation
of the curve is dependent only on k, the order of the curve, and not on n, the number of control
points of the curve. It should be noted that no approximations have been used in bringing down
the complexity from O(n) to O(k). Instead, a careful study of the basis functions has led to the
elimination of useless computations from the naive computation scheme.
Thus, our problem boils down to finding the useful basis functions given the value of u. We use
the fact that the knot values are strictly in non-decreasing order and any value of u is thus, clearly
sandwiched between a pair of consecutive knot values. (This is a very important observation which
is used in unfolding the recursion in the basis function computation. This observation leads to the
conclusion that there is only one first order basis function with value 1, and the rest have value 0.)
If t_i ≤ u < t_{i+1}, then the k useful kth order basis functions are N_{i−k+1,k}, N_{i−k+2,k}, ..., N_{i,k}.
Using the first subscripts of these useful basis functions, the active control points can also be found.
In our example, if u lies between t_7 = 0.4 and t_8 = 0.6, the useful basis functions
are N_{4,4}, N_{5,4}, N_{6,4} and N_{7,4}, which have non-zero values. Thus the active control points are P_4, P_5,
P_6 and P_7. Whenever u crosses t_{i+1}, the index i is incremented and the current useful basis functions
and the active control points are found.
Similarly, for a surface with orders k and l, there are k and l useful basis functions for particular
values of u and v, respectively. Here again, the grid of k × l active control points can be found using
the above method.
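As a software analogue of this span search (our illustration, not part of the proposed hardware), the index i with t_i ≤ u < t_{i+1} can be located and the active control points listed as follows; the helper names are ours.

```python
def find_span(knots, k, n, u):
    """Return i such that knots[i] <= u < knots[i+1], restricted to the
    valid range t_{k-1} <= u <= t_{n+1} (see Section 2.1)."""
    if u >= knots[n + 1]:              # clamp the right end of the valid range
        return n
    i = k - 1
    while knots[i + 1] <= u:
        i += 1
    return i

def active_control_point_indices(i, k):
    """The k active control points for span index i are P_{i-k+1}, ..., P_i."""
    return list(range(i - k + 1, i + 1))

# Example with the non-uniform knot vector of Section 2.1 (k = 4, n = 7):
knots = [0.0, 0.1, 0.13, 0.3, 0.35, 0.35, 0.35, 0.4, 0.6, 0.7, 0.9, 1.0]
i = find_span(knots, 4, 7, 0.5)                 # i = 7, since t_7 <= 0.5 < t_8
assert active_control_point_indices(i, 4) == [4, 5, 6, 7]
```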
3 VLSI architecture for Basis Function Generation
In the computation of a B-Spline curve or a surface, the basis function computation plays an important
role. As seen from Equation 3, the basis function computation is recursive, and each evaluation apparently
makes two further calls to itself.
The calculation of one point on the curve requires (n + 1) top-level basis function values, so the total
number of calls to the basis function routine grows with (n + 1). Using the discussion in
the preceding section that there are only k useful basis functions, the number of top-level evaluations reduces to k.
There are two problems in using the recursive Equation 3 for the computation of basis functions.
First, in the computation of these k basis functions, many lower order functions return zero; this
observation shows that there are a few computations that can be eliminated. Second, if there are
multiple knots (such as t_i = t_{i+1} = t_{i+2}), then the denominator of the Equation 3 may
become zero for certain calls to the basis function routine (such as N_{i,1}, N_{i+1,1}, N_{i+2,1}, N_{i,2}, N_{i+1,2}),
leading to division errors.
The above two difficulties are overcome by using the following method ([4], [5]), which computes
only those basis function values, including the lower order basis functions, which have non-zero
value. This algorithm just unfolds the recursion and identifies a directed acyclic graph (DAG) of
basis functions which have non-zero values. This DAG, when sorted in topological order, gives
the order of computation of basis function values that eliminates the above two difficulties.
From the discussion in the previous section, we know that, for a given value of u, only one basis
function of order one is non-zero, because one can find only one i such that t i as the t i s
are in non-decreasing order.
From the Figure 5, we can see that from the non-zero first order basis function, k kth order
basis functions can be calculated. The multiplicative factor along the edges is given in the inset and
whenever two edges meet an addition is performed to get the basis function value at the meeting
node.
Figure 5: Basis Function Computation Graph
Figure 6: Basis Function Evaluation Array (BFEA) and the Controller
In this method, every computation is indispensable and the denominator does not become zero.
In what follows, this method is used to develop a new systolic architecture for the computation of
basis functions.
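In software, the same DAG evaluation is the familiar triangular scheme that touches only the k non-zero basis functions, so no denominator is ever zero. The sketch below (ours) mirrors Figure 5 level by level and is checked against the example knot vector of Section 2.1.

```python
def nonzero_basis(knots, k, i, u):
    """Return [N_{i-k+1,k}(u), ..., N_{i,k}(u)] for the span knots[i] <= u < knots[i+1].
    Only non-zero basis functions are touched, so no denominator is ever zero
    (this is the DAG of Figure 5 evaluated level by level)."""
    t = knots
    N = {i: 1.0}                                   # order 1: N_{i,1}(u) = 1
    for m in range(2, k + 1):                      # orders 2 .. k
        new = {}
        for j in range(i - m + 1, i + 1):          # the non-zero N_{j,m} at this level
            val = 0.0
            if j in N:                             # left term uses N_{j,m-1}
                val += (u - t[j]) / (t[j + m - 1] - t[j]) * N[j]
            if (j + 1) in N:                       # right term uses N_{j+1,m-1}
                val += (t[j + m] - u) / (t[j + m] - t[j + 1]) * N[j + 1]
            new[j] = val
        N = new
    return [N[j] for j in range(i - k + 1, i + 1)]

# Sanity check on the Section 2.1 example (k = 4, span i = 7 for u = 0.5):
knots = [0.0, 0.1, 0.13, 0.3, 0.35, 0.35, 0.35, 0.4, 0.6, 0.7, 0.9, 1.0]
vals = nonzero_basis(knots, 4, 7, 0.5)
assert abs(sum(vals) - 1.0) < 1e-12                # partition of unity
```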
3.1 Systolic computation of basis function
Figure
6 shows the systolic linear array for the computation of the basis function. The controller
pumps the required input for the computation of basis functions to the first cell in the Basis Function
Evaluation Array (BFEA). The design of BFEA and its input pattern are explained below.
Each cell in the BFEA computes one level in the DAG, shown in the Figure 5. The first level
which has only one element N i;1 (u) involves no computation as its value is always one for any i such
that t_i ≤ u < t_{i+1}. This value is pumped by the controller to the first cell in the BFEA.
Starting from the second level, one processing element is assigned the job of computing one level
of the DAG. Hence it is required to have just k − 1 processing elements. From the (k − 1)th cell of
the BFEA, the k basis functions of order k are output.
From the Figure 5 and the method of computing the basis functions explained in the previous
section, it is clear that the first cell requires the knot pair t . The second cell requires one
more pair t on. The k \Gamma 1th cell requires, apart from the used by
its preceding cells, the knot pair t . The last cell receives outputs
blending function values. Hence one dummy pair is input to make the number of input and the
output quantities the same. This dummy pair will follow the same path as that of other knot pairs.
Since this is a linear architecture, the data required by the cells downstream, has to be passed
on by the controller, through the cells upstream. Thus, the ith processing cell performs
of computation and steps of work to communicate the knot values to the cells downstream.
Thus the total work by each cell is k. The k \Gamma 1th cell will output one useful basis function value every
clock cycle after the initial pipeline fill. Thus the whole process of computation and communication,
proceeds systolically, through this linear architecture.
The next section elaborates the issues involved in designing each cell in the above Basis Function
Evaluation Array.
3.2 Design of a cell in BFEA
One cell in the BFEA, with its functional units, is shown in Figure 7. We present below the
mathematics behind the design of the core components of this cell.
One cell in the BFEA, say the jth cell, computes the (j + 1) basis functions of order (j + 1) in
sequence after receiving the basis functions of order j and the knot pairs.
Hardware requirements of each cell can be greatly reduced by identifying the symmetry in the
calculation of basis functions and suitably modifying the algorithm. This can be done in the following
ways. We know that
Let us consider the computation of N i;k (Equation 8). The first term in the RHS of Equation 8,
can be decomposed as follows.
Figure 7: One Processing Cell of Basis Function Evaluation Array
1. It can be seen that the second term in the RHS of Equation 11, and the second term in
Equation 7 are the same. Further the first term in Equation 11 is computed prior to N i;k .
Thus we can make use of pre-computed values, and get the first term of Equation 8, by just
one mathematical operation, instead of four. Similarly the first term in the RHS of Equation
9, is calculated using the second term in the RHS of Equation 8. The subtractor of the adder-
subtractor unit in Figure 7 computes the Equation 11 while Equation 8 is computed by the
adder.
2. If we communicate t_{i+k}, t_i, and u to calculate (t_{i+k} − u) and (u − t_i), we require two
subtractors. If (t_{i+k} − u) is sent instead of t_{i+k}, and (u − t_i) is sent instead of t_i, then, since
(t_{i+k} − t_i) = (t_{i+k} − u) + (u − t_i), the two subtractors can be reduced to one adder. Further, a separate line
required to communicate the parametric value u is avoided.
From the data dependency analysis of the computation of basis functions it can be seen that
each cell requires a minimum of five stages. The above observations are faithfully implemented in
the processing cell. The design of the rest of the cell can be derived directly from the equation for
the basis function computation.
The input to various processing cells at different time units are shown in the Figure 8. The figure
shows the input to various cells for calculating k basis function values for two different values of u.
The row gives the data at a specific point in space, at various time intervals and the column shows
the data distribution in space at a specific time instant. Figure 8 has three data inputs for each cell:
one each for three inputs of the BFEA. The first row is the input to the line marked as N i;j (u) in
the Figure 7. The second and third rows correspond to the two inputs carrying the quantities
(u − t) and (t − u) in the Figure 7, respectively. The entries in the second and third rows just give the indices of the knots.
For example, the entry i in the second row refers to the input (u − t_i), and the entry j
in the third row refers to the input (t_j − u). The pattern of input to the first cell at various time
instances is as shown in the Figure 9. The same pattern can be seen in Figure 8, for k = 4, from
time 1 to 7 for input i, and from time 5 to 11 for input j. Note that the first order basis function is
1 only at that particular clock cycle when the indices i and j refer to the knot pair enclosing u, and it is zero at all other
times. The input to the first cell also includes t_{i−k+1} and t_{i+k}, which are used as dummy inputs.
3.3 Time required for basis function generation
We assume for the sake of simplicity, throughout this paper, that all functional units take equal
amount of time, namely, one time unit.
Each cell has a delay of five time units. The time interval between the first input of a knot value
and the generation of the corresponding basis function value of order k is

T1 = k + 5(k − 1) = 6k − 5.   (12)

The first term gives the time taken for the pumping of the basis function of order 1. The second
term gives the delay involved in the k − 1 cells before the first output.
The time required to get all the k outputs is

T2 = T1 + (k − 1) = 7k − 6.   (13)
Figure 8: Input scheduling to various processing cells of BFEA (time chart of the three inputs to each BFEA cell for computing two sets of fourth order basis functions, for parametric values u1 and u2)
Figure 9: Pattern of Input to the first cell of BFEA
Figure
8 shows the input pattern for the computation of two sets of basis functions of the same
order in succession. Such a case is true only in the computation of B-Spline curves. When a B-Spline
surface is calculated, the orders in different directions of the parametric value u and v, need not be
the same. If they are of different orders, the output is to be taken from two different cells of the
BFEA. In this case, the input is timed in such a way so as to ensure that the data in the output
line of the BFEA is not corrupted with two basis function values.
It can be seen from Figure 8 that the four useful basis functions are output for u1, from time
19 to 22. This is immediately followed by the second set of basis functions. Thus, one useful basis
function is output every clock cycle. This is the best we can achieve out of this linear architecture,
as there is only one output line.
Further, as BFEA forms the core of the NURBS architecture, the optimizations adopted in this
design, have a direct impact on the design of the final architecture. For example, we are computing
just the k non-zero useful basis functions, whereas, the solution proposed by Megson [19], computes
all the n + 1 basis functions. As k << n, the time required to generate the curve/surface is drastically
reduced. Further, this fact, apart from making our architecture independent of the number of control
points (n), also renders the time taken to compute the curve/surface, independent of n.
4 VLSI architecture for NURBS curves and surfaces
We describe in this section how the BFEA is effectively deployed to compute curves and surfaces of
the most general form of B-Spline - the Non-Uniform Rational B-Spline.
4.1 NURBS curve computation
The NURBS curve computation is given by

P(u) = (sum_{i=0}^{n} w_i P_i N_{i,k}(u)) / (sum_{i=0}^{n} w_i N_{i,k}(u)).   (14)
Figure 10: Architecture for the computation of NURBS Curve
The architecture proposed for its computation is as shown in Figure 10. As seen in the previous
section, the BFEA gives one useful basis function value every clock cycle after the initial set-up time.
The numerator and the denominator of Equation 14 are calculated simultaneously by multiplying
these useful basis function values with w_i P_i and w_i separately, and summing these results independently
in the Accumulating Cell (AC). The control points P_i are the active control points and the w_i are
their corresponding weights. Finally, the division is performed within the AC itself and the point on
the curve is calculated. The product P_i w_i is computed beforehand and is called a weighted control
point. The weighted control points and the weights w_i are pumped by the controller to the AC, synchronizing
with the basis functions input.
To calculate the next point on the curve, the parametric value u is incremented, and the input
to the BFEA is changed appropriately. Whenever u crosses t i+1 , the index i is incremented and
the new set of active weighted control points and their weights are sent to the AC. This process
continues until all the points on the curve have been computed.
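In software terms, the work of the AC for one parametric value amounts to the following (our sketch, reusing find_span and nonzero_basis from the earlier illustrations); the weighted control points w_i P_i play exactly the role described above.

```python
def nurbs_curve_point(knots, k, ctrl_pts, weights, u):
    """One point on a NURBS curve: inner products of the k non-zero basis values
    with the active weighted control points, followed by one division."""
    n = len(ctrl_pts) - 1
    i = find_span(knots, k, n, u)
    N = nonzero_basis(knots, k, i, u)
    num = [0.0, 0.0, 0.0]
    den = 0.0
    for r, idx in enumerate(range(i - k + 1, i + 1)):   # the k active control points
        wN = weights[idx] * N[r]
        den += wN                                       # denominator: sum w_i N_{i,k}(u)
        for axis in range(3):                           # numerator: sum w_i P_i N_{i,k}(u)
            num[axis] += ctrl_pts[idx][axis] * wN
    return tuple(c / den for c in num)
```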
4.2 Time required to calculate a NURBS curve
As seen from Equation 13, the time required to calculate all the basis functions is T2 = 7k − 6. The delay
involved in the AC for calculating the x coordinate is five time units. Hence the time required to
generate the x coordinate of the first point is T2 + 5 = 7k − 1.
In subsequent clock cycles, the y and z coordinates are output.
The x coordinate of the second point on the curve is output k clock cycles after the x coordinate
of the first point. If there are C points to be calculated on the curve, the time at which the x
coordinate of the last point is output is (7k − 1) + (C − 1)k.
In subsequent clock cycles, the y and z coordinates of the last point are also output. Hence
the total time required to calculate the whole curve is (C − 1)k + 7k + 1 = (C + 6)k + 1.
Note that the above equation is independent of the number of control points n.
4.3 NURBS Surface Computation
The above architecture for the curve computation can be easily extended to compute NURBS
surfaces. In Section 3, we started with an argument that the time taken to compute the basis
function is the dominant factor in the computation of the curve or surface when compared with the
inner product operation. However, in this section we will see that this inner product computation
is to be speeded up if it is to cope with the output rate of BFEA. It would also be obvious at the
end of this paper that the inner product computation cannot be speeded up arbitrarily, thus making
any more attempts to improve the basis function computation useless.
The Equation 6 can be rewritten as follows:

P(u, v) = (sum_{j=0}^{m} N_{j,l}(v) (sum_{i=0}^{n} w_{ij} P_{ij} N_{i,k}(u))) / (sum_{j=0}^{m} N_{j,l}(v) (sum_{i=0}^{n} w_{ij} N_{i,k}(u))).

The terms inside the parentheses in the numerator and in the denominator are called the virtual control
points and the weights of the virtual control points, respectively. The above equation can be viewed
Figure 11: Algorithm for Surface generation
as the equation of a NURBS curve with the virtual control points and their weights playing the role
of control points and their weights. Hence the architecture presented here initially computes the
virtual control points and its weights. These values are used in the NURBS curve architecture to
compute the NURBS surface. The architecture to calculate a NURBS surface is shown in Figure 12.
The NURBS surface has two orders associated with it - k, in the direction of u, and l, in the
direction of v. Extending the arguments presented in Section 2, it is clear that for a given value of
the parameters u and v there are only k × l active control points that correspond to the non-zero
basis function values. The entities involved in the computation of one point on the surface are shown
in
Figure
11.
The grid of k × l active control points is loaded in columns of l values (as shown by the vertical
rectangular boxes in Figure 11) in to the Partial Accumulating Cells (PACs) of the Virtual Control
point Calculating Array (VCCA). The same BFEA is used to compute the basis function values for
both the parametric values. Initially the kth order useful basis function values in the direction of
are calculated and are pumped to the VCCA (Figure 13), by the BFEA. As the name implies,
the VCCA, computes the virtual control points and its weights. The VCCA computes the virtual
control points by taking the dot product of a row of k active weighted control points (shown by
horizontal dotted boxes in Figure 11) with the corresponding basis function values.
Figure 12: Architecture for the computation of NURBS Surface
Figure 13: Virtual Control point Calculating Array
Once the virtual control points and their weights have been computed the problem of calculating
a NURBS surface, boils down to a problem of computing a NURBS curve. The AC instead of getting
the curve control points from the controller, gets the virtual control points from the VCCA. With
these control points and the lth order basis functions directly from the BFEA, the AC calculates
the point on the surface.
The points on the surface are computed column by column. A column of points is an isoparametric
curve on the surface for a constant value of u and all discrete values of v. As u is constant
for a particular column, the basis functions along u, once computed and stored in PACs, can be
reused. The active control point grid is found for every u-v pair, and the weighted control points
in the computed grid are sent to VCCA, for virtual control point computation. The lth order basis
functions for the new value of v are calculated by the BFEA. The basis functions and the virtual
control points are taken by the AC and the next point on the surface is generated. This process
continues until all the discrete values of v are considered and an isoparametric curve for a particular
value of u has been computed. Then the value of v is reset to its initial value and u is incremented.
The new set of basis functions for u are computed as before and stored in the internal registers of
PAC in VCCA. The above algorithm continues until all the discrete values of u have been considered.
Note that although every point on the surface requires the computation of both k and l basis
function values in the direction of u and v, the k basis function values are computed only for every
new value of u. These values are stored and used for the whole curve drawn for a particular value
of u and for all values of v.
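Functionally, the VCCA/AC split corresponds to the two-stage evaluation sketched below (our illustration, again reusing the earlier helpers): the inner loop forms the virtual control points X_j(u) and weights W_j(u), and the outer loop treats them as a NURBS curve in v. In the hardware, of course, the X_j and W_j computed for a fixed u are reused for every v on the same isoparametric curve.

```python
def nurbs_surface_point(knots_u, knots_v, k, l, ctrl_grid, weight_grid, u, v):
    """ctrl_grid[i][j] and weight_grid[i][j] for 0 <= i <= n, 0 <= j <= m.
    Stage 1 (VCCA): virtual control points X_j(u) and weights W_j(u).
    Stage 2 (AC): treat them as a NURBS curve in v."""
    n, m = len(ctrl_grid) - 1, len(ctrl_grid[0]) - 1
    iu = find_span(knots_u, k, n, u)
    Nu = nonzero_basis(knots_u, k, iu, u)
    jv = find_span(knots_v, l, m, v)
    Nv = nonzero_basis(knots_v, l, jv, v)

    num = [0.0, 0.0, 0.0]
    den = 0.0
    for s, j in enumerate(range(jv - l + 1, jv + 1)):       # columns of the active grid
        Xj = [0.0, 0.0, 0.0]                                # virtual control point X_j(u)
        Wj = 0.0                                            # its weight W_j(u)
        for r, i in enumerate(range(iu - k + 1, iu + 1)):   # rows of the active grid
            wN = weight_grid[i][j] * Nu[r]
            Wj += wN
            for axis in range(3):
                Xj[axis] += ctrl_grid[i][j][axis] * wN
        den += Wj * Nv[s]
        for axis in range(3):
            num[axis] += Xj[axis] * Nv[s]
    return tuple(c / den for c in num)
```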
4.4 Time required to calculate a NURBS patch
The time required to generate a NURBS patch can be calculated with respect to the input to the
BFEA.
If basis functions of different orders say k and l are computed one after the other, then a delay of
β is to be introduced in the input of the BFEA to ensure that the output line of the BFEA is not corrupted
with two basis function values. The following table shows the values of β and the delay in output
experienced under various conditions.
Condition | Delay in the Input (β) | Delay in the Output
Further, when the second set of basis functions immediately follows the first set, there is a synchronization
problem in the AC between the virtual control points and the second set of basis functions.
This is because of the fact that the basis functions are computed earlier than the inner product
computation. Hence an additional delay of α is introduced in the input of the BFEA. The value
of α can be shown to be either one or zero, depending on the relative values of k and l. This shows that the performance of
the system now depends on the inner product computation and not on the computation of basis
functions.
Let us assume that C u represents the number of discrete values of u and C v the number of
discrete values of v. Thus there are C u isoparametric curves (constant u) with C v points computed
in each curve.
The time required to pump the input to the BFEA up to the last point on the surface is

T6 = C_u(k + C_v · l) + (C_u − 1)(α + β) − l.

Each isoparametric curve computation requires k inputs for the kth order basis function generation,
followed by a delay of (α + β), and then C_v times l inputs for the generation of the basis functions
of order l. The first term gives the time taken by the inputs of C_u such curves. The second term is the delay
introduced between the computation of these curves; this delay is introduced C_u − 1 times
during the calculation of the whole surface. These two terms put together give the total time
taken for all the input, including the last point on the surface. So, to get the time at which the input
for the computation of the last point starts, l is subtracted from the above quantity.
From the time T6, the time taken to calculate the last point is given by 7l (from the results
of Section 4.2, with the order l in place of k). Hence the total time required to calculate the whole surface is

T6 + 7l = C_u(k + C_v · l) + (C_u − 1)(α + β) + 6l.
It can be seen that the above equation is independent of the number of control points n and m.
4.5 Computation of normals to a NURBS Surface
In the standard graphics pipeline, when primitives like triangles are pumped, the vertices with their
normal vector are required. These normal vectors are used in the lighting calculations. This section
extends the above architecture to include the computation of surface normals also. Thus the NURBS
surface can be tessellated into triangles, and the actual normal vectors at the vertices of the triangle
can be computed using this architecture.
To calculate the normal to the NURBS surface at a point P(u_0, v_0), the tangent vector of the
curve P(u_0, v) with respect to v at the point v_0 is computed. Then the tangent vector
of the curve P(u, v_0) with respect to u at the point u_0 is computed. The cross product
of the above two quantities gives the normal vector to the surface at the point P(u_0, v_0).
Clearly, from the equation of the NURBS surface, the tangent vector calculation would involve
the computation of the (first) derivative of the basis functions.
4.5.1 Computation of Basis Function Derivatives
As seen from the equation of the basis function, Equation 2, the derivative of a first order basis
function is zero. Further, as the sum of the basis function values at a particular parametric value is
one, the sum of their derivatives is zero. Differentiating Equation 3, the derivative of the basis function is given by

N'_{i,k}(u) = (N_{i,k−1}(u) + (u − t_i) N'_{i,k−1}(u)) / (t_{i+k−1} − t_i) + ((t_{i+k} − u) N'_{i+1,k−1}(u) − N_{i+1,k−1}(u)) / (t_{i+k} − t_{i+1}).
The new basis function evaluation cell, which calculates the derivatives of these basis functions
also, is shown in the Figure 14.
4.5.2 Computation of Tangent Vectors
The derivative at a point on the NURBS surface, for constant u (= u_0), with respect to v is obtained
by applying the quotient rule to P(u_0, v) = X(v)/W(v), where X(v) = sum_j X_j N_{j,l}(v) and
W(v) = sum_j W_j N_{j,l}(v) are formed from the virtual control points X_j and their weights W_j:

∂P/∂v = (W(v) · sum_j X_j N'_{j,l}(v) − X(v) · sum_j W_j N'_{j,l}(v)) / W(v)^2.
Figure 14: The Modified BFEA Cell
The derivative of the point, for constant v (= v_0), with respect to u is given by

∂P/∂u = (W · sum_j X_{u,j} N_{j,l}(v_0) − X · sum_j W_{u,j} N_{j,l}(v_0)) / W^2,

where W_{u,j}(u) = sum_i w_{ij} N'_{i,k}(u) and X_{u,j}(u) = sum_i w_{ij} P_{ij} N'_{i,k}(u).
It can be seen that the X_j and W_j are the virtual control points and their weights, respectively, and these
quantities are computed by the VCCA. As the form of the equations of W_u and X_u is the same as that of
W and X, these quantities can also be computed by VCCA. Figure 15, shows the modified VCCA
to accommodate the computation of X u and W u . It contains an additional array of PACs for which
the derivative of the basis function is given as the input. Both the PAC arrays are given the same set
of control points. As a result, when X and W are calculated by the first PAC array, the values of X u
and W u are computed by the second array. These four values are pumped to three Accumulating
Cells as shown in the Figure 16. These accumulating cells are similar to the AC described earlier,
but without the functional unit for division. These cells, with the above four values and the values
Figure 15: The Modified VCCA
of N_{j,l}(v) and N'_{j,l}(v) from the BFEA, compute all the quantities required to calculate the point on
the surface and its tangent vectors. After these tangent vectors are calculated, their cross product
is computed to complete the process. The output of the point and its normal are synchronized by
introducing delays in the path of the point. While calculating a point on the (3D) NURBS curve,
where there is no concept of a normal, only the center AC, out of the three, is used by the controller
to send the weighted control points.
The performance loss due to the incorporation of the normal calculating hardware is very meager
as the delay introduced in the path of the point, which is around ten, is insignificant when the whole
surface is computed in thousands of clock cycles. Hence the performance is practically the same as
that of the architecture for the computation of the point alone.
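The post-processing performed by the three Accumulating Cells and the cross-product stage of Figure 16 can be summarized in a few lines (our sketch): given the accumulated numerators X, X_u, X_v and denominators W, W_u, W_v, the quotient rule yields the two tangent vectors and a cross product yields the normal. The argument names are ours.

```python
def surface_point_and_normal(X, W, Xu, Wu, Xv, Wv):
    """X, Xu, Xv are 3-vectors (accumulated numerators); W, Wu, Wv are scalars.
    Returns the surface point S = X/W and the (unnormalized) normal S_u x S_v."""
    S  = [x / W for x in X]
    Su = [(xu * W - x * Wu) / (W * W) for x, xu in zip(X, Xu)]   # quotient rule
    Sv = [(xv * W - x * Wv) / (W * W) for x, xv in zip(X, Xv)]
    normal = [Su[1] * Sv[2] - Su[2] * Sv[1],                      # cross product
              Su[2] * Sv[0] - Su[0] * Sv[2],
              Su[0] * Sv[1] - Su[1] * Sv[0]]
    return S, normal
```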
4.6 The Controller
The Controller pumps the required data at the appropriate time to various functional units. The
controller itself can be divided into various functional modules as shown in the Figure 17. The
detailed design of the various modules of the Controller have been presented in [9].
Figure 16: Computation of Surface Points and Normal Vectors
5 Performance Evaluation
The architecture presented above decouples the size of the problem to the extent possible. Every
computation/result computed in the BFEA is indispensable. From the expression for the total surface computation time derived in Section 4.4, it is clear that the
coefficient of C u is large, making the timing more dependent on C u than on C v . Hence the proposed
architecture performs well when the number of discrete values of u is less than the number of discrete
values of v. Further, this architecture performs better when a whole curve or a surface is calculated
than when a few discrete points are needed. This is because the algorithm makes complete use
of inter-dependency and the information sharing between consecutive points on the curve/surface.
Thus this architecture outputs one triangle every two clock cycles after the initial pipeline fill.
We now compare the performance of this architecture with that of the architecture proposed
by Megson [19]. If the C u and C v are proportional to n and m, then the time required for the
computation of the curve/surface increases linearly when using the architecture proposed in this
Figure 17: The Controller
paper. As the computation of every point on the curve/surface is dependent on n and m, the time
required by Megson's architecture increases quadratically. This can be seen in Figures 18 and 19.
Analyzing the hardware complexity of the architecture proposed by Megson, it requires at most 5·max(k, l) + 3·(max(m, n) + 1) inner product cell equivalents and 3(m + 1)(n + δ + 2) memory registers for surfaces with (m + 1) × (n + 1) control points and blending functions of degrees k and l, where δ = |k − l|. The architecture presented in this paper requires at most 7·max(k, l) inner product cell equivalents, max(k, l) × (5 + 4·max(k, l)) buffer registers and 4kl memory registers. It can be seen that both the processing element requirements and the memory requirements are much less than those of Megson's architecture. Note that the hardware requirements specified here for this architecture are for the computation of NURBS, whereas for Megson's architecture they are for the computation of just non-rational B-Splines.
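As a rough, purely numerical illustration of these counts, the snippet below evaluates both sets of expressions for a sample configuration; the expressions are copied from the text as given above, so the exact constants should be treated as indicative only.

k = l = 4               # order/degree parameter of the blending functions, as used above
m = n = 31              # a surface with (m + 1) x (n + 1) = 32 x 32 control points
delta = abs(k - l)

megson_cells = 5 * max(k, l) + 3 * (max(m, n) + 1)          # inner product cell equivalents
megson_regs  = 3 * (m + 1) * (n + delta + 2)                # memory registers

our_cells   = 7 * max(k, l)                                 # inner product cell equivalents
our_buffers = max(k, l) * (5 + 4 * max(k, l))               # buffer registers
our_regs    = 4 * k * l                                     # memory registers

print("Megson: %d cells, %d registers" % (megson_cells, megson_regs))
print("Ours:   %d cells, %d buffers, %d registers" % (our_cells, our_buffers, our_regs))

For this sample configuration the register counts differ by more than an order of magnitude and the cell counts by roughly a factor of four, consistent with the comparison above.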
Figure 18: Curve Computation - Comparison
Figure 19: Surface Computation - Comparison
6 Summary
A unified architecture for the computation of B-Spline curves and surfaces has been presented. This
architecture is called a unified architecture since it can compute uniform, non-uniform, rational
and non-rational B-Spline curves and surfaces. We have considered the computations necessary for
NURBS in the above description. To compute a non-rational curve, all the control point weights
have to be set to unity. To compute a uniform curve no change is necessary since a uniform knot
vector is a special case of a general knot vector. This unified architecture has been derived through
a sequence of steps.
First, a systolic architecture for the computation of the basis function values, the BFEA, was
developed. Using the BFEA as its core, an architecture for the computation of non-uniform rational
B-Spline curves was constructed. This architecture was then extended to compute NURBS surfaces,
by introducing the concept of virtual control points and by reducing the computation of a surface
to the computation of a sequence of curves. Finally, this architecture was augmented to compute
the surface normals so that the output from this architecture can be directly used for rendering the
NURBS surface.
The overall linear structure of the architecture, the small number of data paths required by the
architecture, the non-dependence of the architecture on the size of the problem (in terms of the
number of control points and the number of points on the curve/surface that has to be computed)
and the very high throughput of this architecture make it highly suitable for integration into the
standard graphics pipeline of high-end workstations.
Results of the timing analysis indicate a potential performance of one triangle every two clock
cycles, with its normal vectors at its vertices. It has also been shown that the BFEA is considerably
better than the earlier solution for the basis function computation [19]. Further, improvements in
the basis function computation are not immediately warranted since as seen in section 4.4, the BFEA
has to be slowed down by the introduction of some delay so that its output can be utilized by the
inner product computation units. Improvement in the inner product computation part cannot be
done without compromising on the linear structure of the architecture. Hence further improvements
will require the use of multiple linear arrays or other such configurations.
We conclude by pointing out that this is the first complete hardware solution for the computation
of NURBS curves and surfaces.
--R
Reality engine graphics.
An introduction to use of splines in computer graphics.
The geometry engine
A Practical Guide to Splines.
Package for calculating with splines.
A system for cost effective 3D shaded graphics.
The triangle: A multiprocessor architecture for fast curve and surface generation.
Computer Graphics: Principles and Practice(2nd edition).
Special Purpose Architectures for B-Splines
VLSI architectures for the computation of uniform B-Spline curves
Parallel architecture for the computation of uniform rational B-Spline patches
VLSI architecture for the computation of NURBS patches.
Direct manipulation of free-form deformations
Rational quadratic Bezier representation for conics.
VLSI systolic architectures for computer graphics.
Systolic Architectures for Realistic 3D Graphics.
Systolic architecture in curve generation.
Systolic algorithms for B-Spline patch generation
Geometric Modeling.
On the use of infinite control points in CAGD Computer Aided Geometric Design
Curve and surface constructions using rational B-Splines
Applications of B-Spline approximation to Geometric problems of Computer Aided Design
Rational B-Splines for curve and surface representation
Geometric modeling using non-uniform rational B-Splines: mathematical techniques
--TR | VLSI architecture;NURBS;graphics;geometric modeling |
629437 | A Practical Approach to Dynamic Load Balancing. | AbstractThis paper presents a cohesive, practical load balancing framework that improves upon existing strategies. These techniques are portable to a broad range of prevalent architectures, including massively parallel machines, such as the Cray T3D/E and Intel Paragon, shared memory systems, such as the Silicon Graphics PowerChallenge, and networks of workstations. As part of the work, an adaptive heat diffusion scheme is presented, as well as a task selection mechanism that can preserve or improve communication locality. Unlike many previous efforts in this arena, the techniques have been applied to two large-scale industrial applications on a variety of multicomputers. In the process, this work exposes a serious deficiency in current load balancing strategies, motivating further work in this area. | Introduction
A number of trends in computational science and engineering have increased the need for effective
dynamic load balancing techniques. In particular, particle/plasma simulations, which have
recently become more common, generally have less favorable load distribution characteristics
than continuum calculations, such as Navier-Stokes flow solvers. Even for continuum problems,
the use of dynamically adapted grids for moving boundaries and solution resolution necessitates
runtime load balancing to maintain efficiency. In the past ten years, researchers have proposed a
number of strategies for dynamic load balancing [2, 3, 4, 5, 6, 7, 9, 10, 11, 17, 19, 20, 22, 23, 24].
(This research is sponsored by the Advanced Research Projects Agency under contract number DABT63-95-C-0116.)
The goal of this work was to build upon the best of these methods and to develop new algorithms
to remedy shortcomings in previous efforts. The techniques are designed to be scalable, portable
and easy-to-use. Improvements over existing algorithms include the derivation of a faster diffusive
scheme that transfers less work to achieve a balanced state than other algorithms. Mechanisms
for selecting and transferring tasks are also introduced. The techniques attempt to maintain
or improve communication locality in the underlying application. Parametric studies illustrate
the benefits offered by the faster diffusion algorithm as well as the efficacy of the locality
preservation techniques. Finally, the framework is applied to two large-scale applications running
on hundreds of processors. The success of the methods in one case demonstrates the utility
of the techniques, and their failure for the second application motivates further research in this
area by revealing limitations in current approaches.
The abstract goal of load balancing can be stated as follows:
Given a collection of tasks comprising a computation and a set of computers on which these
tasks may be executed, find the mapping of tasks to computers that results in each computer
having an approximately equal amount of work.
A mapping that balances the workload of the processors will typically increase the overall efficiency
of a computation. Increasing the overall efficiency will typically reduce the run time of
the computation.
In considering the load balancing problem it is important to distinguish between problem
decomposition and task mapping. Problem decomposition involves the exploitation of concurrency
in the control and data access of an algorithm. The result of this decomposition is a set
of communicating tasks that solve the problem in parallel. These tasks can then be mapped to
computers in a way that best fits the problem. One concern in task mapping is that each computer
have a roughly equal workload. This is the load balancing problem, as stated above. In
some cases the computation time associated with a given task can be determined a priori. In
such circumstances one can perform the task mapping before beginning the computation; this is
called static load balancing. For an important and increasingly common class of applications,
the workload for a particular task may change over the course of a computation and cannot be
estimated beforehand. For these applications the mapping of tasks to computers must change
dynamically during the computation.
A practical approach to dynamic load balancing is to divide the problem into the following
five phases:
Load Evaluation: Some estimate of a computer's load must be provided to first determine
that a load imbalance exists. Estimates of the workloads associated with individual tasks
must also be maintained to determine which tasks should be transferred to best balance
the computation.
Profitability Determination: Once the loads of the computers have been calculated, the
presence of a load imbalance can be detected. If the cost of the imbalance exceeds the cost
of load balancing, then load balancing should be initiated.
Work Transfer Vector Calculation: Based on the measurements taken in the first phase,
the ideal work transfers necessary to balance the computation are calculated.
Task Selection: Tasks are selected for transfer or exchange to best fulfill the vectors provided
by the previous step. Task selection is typically constrained by communication locality
and task size considerations.
Task Migration: Once selected, tasks are transferred from one computer to another; state
and communication integrity must be maintained to ensure algorithmic correctness.
By decomposing the load balancing process into distinct phases, one can experiment in a "plug-
and-play" fashion with different strategies at each of the above steps, allowing the space of techniques
to be more fully and readily explored. It is also possible to customize a load balancing
algorithm for a particular application by replacing more general methods with those specifically
designed for a certain class of computations.
3 Assumptions and Notation
The following assumptions are made with regard to the scenario under which the techniques
herein are applied. First, the computers are assumed to be of homogeneous processing capac-
ity, and there are no sources of external load. The underlying software system must provide a
basic message-passing library with simple point-to-point communication (send and receive op-
erations) and basic global operations (global sum, for example). Finally, access to an accurate
(milli- to microsecond level) system clock must be provided.
The following variables and notations are used to denote various quantities in the system:
• There are P computers in the system.
• If the network connecting the computers is a d-dimensional mesh or torus, the sizes of its dimensions are Q_1, Q_2, ..., Q_d.
• The diameter of the network is denoted D and is the length of the longest path between any two computers in the network, according to the routing algorithm used. (E.g., for a d-dimensional mesh, D would be (Q_1 − 1) + (Q_2 − 1) + ... + (Q_d − 1), assuming messages are routed fully through each dimension before proceeding to the next.)
• The mapping function from tasks to their respective computers is called M. Thus, M(i) is the computer to which task i is mapped. M^{-1}(i) is the set of tasks mapped to computer i.
• The set of neighbors of either computer i or task i is denoted N_i, as appropriate to the context in which i is used. The neighbors of a computer are those adjacent to it in the physical network. The neighbors of a task are those tasks with which it communicates.
4 Algorithms and Implementation
This section presents some of the design choices in the implementation of the methodology outlined
in Section 2. Choices among algorithms and techniques are motivated when necessary.
4.1 Load Evaluation
The usefulness of any load balancing scheme is directly dependent on the quality of load measurement
and prediction. Accurate load evaluation is necessary to determine that a load imbalance
exists, to calculate how much work should be transferred to alleviate that imbalance and
ultimately to determine which tasks best fit the work transfer vectors. Load evaluation can be
performed either completely by the application, completely by the load balancing system or with
a mixture of application and system facilities.
The primary advantage of an application-based approach is its predictive power. The application
developer, having direct knowledge of the algorithms and their inputs, has the best chance
of determining the future workload of a task. In a finite-element solver, for example, the load
may be a function of the number of grid cells. If the number of cells changes due to grid adap-
tation, news of that change can be immediately propagated to the load balancing system. For
more complex applications, the disadvantage of this approach is in determining how the abstract
workload of a task translates into actual CPU cycles. System dependent factors such as cache
size and virtual memory paging can easily skew the execution time for a task by a large factor.
One way to overcome the performance peculiarities of a particular architecture is to measure
the load of a task by directly timing it. One can use timing facilities to profile each task, providing
accurate measurements in the categories of execution time and communication overhead. These
timings can easily be provided by a library or runtime system: Such systems label any execution
time between communication operations as runtime and any execution time actually sending or
receiving data as communication time. A system-only approach may fall short when it comes to
load prediction, however, because past behavior may be a poor predictor of future performance.
For applications in which the load evolves in a relatively smooth fashion, techniques from data modeling and statistics, such as robust curve fitting, can be used. ("Robust"
techniques are those which have some tolerance to noise, for example, by discarding spurious
values in a rigorous way.) However, if the load evolves in a highly unpredictable manner, given
that the system has no knowledge of the quantities affecting the load, additional information may
be required.
The most robust and flexible approach is perhaps a hybrid of both the application- and system-
only methods. By combining application-specific information with system timing facilities, it is
much more practical to predict performance in a complex application. In a particle simulation,
for example, the time required in one iteration on a partition of the problem may be a function
of the number of grid cells as well as the number of particles contained in those grid cells. By
using timing routines, the application can determine how to weight each in predicting the execution
time for the next iteration.
Given the limitations of application-only and system-only approaches, a general purpose
load balancing framework must allow the use of an application-specific load prediction model
and provide the profiling routines necessary to make that model accurate. The system can provide
a set of generic models that are adequate for broad classes of applications. Our own experience
has been that simple techniques such as keeping enough load history to predict the load as
a linear or quadratic function are often sufficient. In any case, the system should provide feed-back
on the quality of the load prediction model being used. If the load predictions are inaccurate
relative to the actual run times, the system should generate appropriate warnings.
Whatever the load prediction model used, the output of load evaluation is the following: For
a given task j, the workload of that task is determined to be l_j. The load of a computer i is therefore L_i = Σ_{j ∈ M^{-1}(i)} l_j.
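As an illustration of the hybrid approach, the sketch below keeps a short history of (application-supplied work measure, measured run time) pairs for a task and predicts the next run time with a least-squares line; the class and method names are illustrative assumptions rather than part of any particular library.

class TaskLoadModel:
    def __init__(self, history=8):
        self.history = history
        self.samples = []                     # (work_measure, measured_seconds) pairs

    def record(self, work_measure, measured_seconds):
        """Called after each iteration with the system-provided timing for this task."""
        self.samples.append((work_measure, measured_seconds))
        self.samples = self.samples[-self.history:]

    def predict(self, work_measure):
        """Predict l_j for the next iteration from the application's work measure."""
        if len(self.samples) < 2:
            return self.samples[-1][1] if self.samples else float(work_measure)
        xs = [s[0] for s in self.samples]
        ys = [s[1] for s in self.samples]
        k = len(xs)
        mx, my = sum(xs) / k, sum(ys) / k
        sxx = sum((x - mx) ** 2 for x in xs)
        slope = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sxx if sxx else 0.0
        return my + slope * (work_measure - mx)

The load of computer i would then be the sum of the predictions for the tasks in M^{-1}(i), and comparing predictions against the next measured times provides the feedback on model quality mentioned above.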
4.2 Load Balance Initiation
For load balancing to be useful, one must first determine when to load balance. Doing so is comprised
of two phases: detecting that a load imbalance exists and determining if the cost of load
balancing exceeds its possible benefits.
The load balance (or efficiency) of a computation is the ratio of the average computer load to the maximum computer load, L_avg / L_max. A load balancing framework might, therefore,
consider initiating load balancing whenever the efficiency of a computation is below some user-specified
threshold eff min . In applications where the total load is expected to remain fairly con-
stant, load balancing would be undertaken only in those cases where the load of some computer exceeds L_avg / eff_min, where L_avg is calculated initially or provided by the application. A similar approach
was described in [10, 11, 22] in which load balancing was initiated whenever a com-
puter's load falls outside specified upper and lower limits.
The above method is poorly suited to situations in which the total load is changing. For example, if a system is initially balanced and the load of every computer doubles, the system is still balanced; the above method would nevertheless cause load balancing to be initiated whenever eff_min was greater than 50 percent.
between a computer's load and the local load average (i.e., the average load of a computer
and its neighbors) exceeds some threshold [22]. The problem with this technique is that it may
fail to guarantee global load balance. Consider, for example, the case of a linear array. If computer
i has load i·L_const, then the local load average at any of the non-extremal computers would be ((i − 1) + i + (i + 1))/3 · L_const = i·L_const, so load balancing would not be initiated even for a very small threshold, despite the fact that the global efficiency is only about 50 percent. (I.e., L_max = P·L_const and L_avg = (P + 1)/2 · L_const.) Load balancing would only be initiated by the extremal computers if the relative threshold was O(1/P), which would be unreasonably small even for moderate values of eff_min on large arrays. The same analysis applies in the case where a computer would initiate load balancing whenever the relative difference between its load and that of one of its neighbors exceeded some threshold. Once again, to guarantee an efficiency of eff_min, the relative difference must in general be less than O(1/P). The problem with such a tight
The reason these ad-hoc methods have been suggested is that they are inexpensive and completely
local. They also introduce no synchronization point into an otherwise asynchronous ap-
plication. Certainly these are qualities for which to strive. Given the increasing availability of
threads and asynchronous communication facilities, global load imbalance detection may be less
costly than previously perceived. By using a separate load balancing thread at each computer,
the load imbalance detection phase can be overlapped with an application. If the load balancing
threads synchronize, this would have no effect on the application. Thus, the simplest way to determine
the load balance may be to calculate the maximum and average computer loads using
global maximum and sum operations, which will complete in O(log 2 P ) steps on most architec-
tures. Using these quantities, one can calculate the efficiency directly.
Even if a load imbalance exists, it may be better not to load balance, simply because the cost
of load balancing would exceed the benefits of a better work distribution. The time required to
load balance can be measured directly using available facilities. The expected reduction in run
time due to load balancing can be estimated loosely by assuming efficiency will be increased
to eff min or more precisely by maintaining a history of the improvement in past load balancing
steps. If the expected improvement exceeds the cost of load balancing, the next stage in the load
balancing process should begin [22].
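A minimal sketch of this initiation test follows. In an SPMD code the maximum and average would be obtained with global MAX and SUM reductions; here they are computed over an explicit list of per-computer loads so the sketch stays self-contained, and balance_cost is the measured (or estimated) time of a load balancing step.

def should_balance(loads, eff_min, balance_cost):
    l_max = max(loads)                        # global MAX reduction in practice
    l_avg = sum(loads) / len(loads)           # global SUM reduction in practice
    if l_avg / l_max >= eff_min:
        return False                          # efficiency already acceptable
    # loose estimate of the per-interval saving if efficiency rose to eff_min
    expected_gain = l_max - l_avg / eff_min
    return expected_gain > balance_cost

A history of the actual improvement obtained by past balancing steps can replace the loose estimate, as noted above.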
4.3 Work Transfer Vector Calculation
After determining that it is advantageous to load balance, one must calculate how much work
should ideally be transferred from one computer to another. In the interest of preserving communication
locality, these transfers should be undertaken between neighboring computers. Of
the transfer vector algorithms presented in the literature, three in particular stand out: the hierarchical
balancing method, the generalized dimensional exchange and diffusive techniques.
The hierarchical balancing (HB) method is a global, recursive approach to the load balancing
problem [7, 22]. In this algorithm, the set of computers is divided roughly in half, and the total
load is calculated for each partition. The work transfer vector between those partitions is that
required to make the load per computer in each equal. I.e., for one partition of P 1 computers
with total workload L_1 and another partition of P_2 computers having an aggregate workload of L_2, the transfer from the first partition to the second is given by
ΔL_{1→2} = (P_2 · L_1 − P_1 · L_2) / (P_1 + P_2)    (1)
Once the transfer vector has been calculated, each partition is itself divided and balanced recur-
sively, taking into account transfers calculated at higher levels. The HB algorithm calculates the
transfer vectors required to achieve "perfect" load balance in O(log_2 P) steps.
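The sketch below illustrates Equation 1 on a linear ordering of computers: split the set in half, compute the inter-partition transfer, and recurse, adjusting the boundary computers to account for transfers decided at higher levels. It is a serial illustration only, and the names are illustrative; the mapping to meshes and tori is discussed next.

def hb_transfers(loads, lo=0, hi=None, transfers=None):
    """Return {(sender, receiver): amount} for the computers at each dividing point;
    a negative amount means the flow actually goes the other way."""
    if transfers is None:
        transfers, hi = {}, len(loads)
    if hi - lo <= 1:
        return transfers
    mid = (lo + hi) // 2
    P1, P2 = mid - lo, hi - mid
    L1, L2 = sum(loads[lo:mid]), sum(loads[mid:hi])
    delta = (P2 * L1 - P1 * L2) / (P1 + P2)        # Equation 1
    transfers[(mid - 1, mid)] = delta              # all work crosses at one dividing point
    loads = list(loads)                            # account for the transfer before recursing
    loads[mid - 1] -= delta
    loads[mid] += delta
    hb_transfers(loads, lo, mid, transfers)
    hb_transfers(loads, mid, hi, transfers)
    return transfers

For example, hb_transfers([4, 0, 0, 0]) yields {(1, 2): 2.0, (0, 1): 3.0, (2, 3): 1.0}, after which every computer holds one unit of work.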
One disadvantage of the HB method is that all data transfer between two partitions occurs
at a single point. While this may be acceptable on linear array and tree networks, it will fail
to fully utilize the bandwidth of more highly connected networks. A simple generalization of
the HB method for meshes and tori is to perform the algorithm separately in each dimension.
For example, on a 2-D mesh, the computers in each column could perform the HB method (re-
sulting in each row having the same total load), then in each row (resulting in each computer
having the same total load). For general, d-dimensional meshes and tori, this algorithm requires
O(Σ_{i=1}^{d} log_2 Q_i) = O(log_2 P) steps. Note that, in the case of hypercubes, this dimensional hierarchical balancing
(DHB) method reduces to the dimensional exchange (DE) method presented in [2, 22].
In the DE method, the computers of a hypercube pair up with their neighbors in each dimension
and exchange half the difference in their respective workloads. This results in balance in
log_2 P steps. The authors of [24] present a generalization of this technique for arbitrary connected
graphs, which they call the generalized dimensional exchange (GDE). For a network of
maximum degree jN max j, the links between neighboring computers are minimally colored so
that no computer has two links of the same color. For each edge color, a computer exchanges
with its neighbor across that link - times their load difference. This process is repeated until a
balanced state is reached.
\DeltaL (k+1)
where
\DeltaL (0)
For the particular case where - is 0.5, the GDE algorithm is called the averaging GDE method
(AGDE) [24]. (The AGDE method was also presented in [7] but was judged to be inferior to the
HB method because of the latter's lower time complexity.) The authors of [24] also present a
method for determining the value of λ for which the algorithm converges most rapidly; they call
the GDE method using this parameter the optimal GDE method (OGDE). While these methods
are very diffusion-like and have been described as "diffusive" in the literature [7], they are not
based on diffusion, as the authors of [24] rightly point out.
Diffusive methods are based on the solution of the diffusion equation, ∂L/∂t = α∇²L, which was first presented as a method for load balancing in [2]. Diffusion was also explored
in [22] and was found to be superior to other load balancing strategies in terms of its perfor-
mance, robustness and scalability. A more general diffusive strategy is given in [5]; unlike previous
work, this method uses a fully implicit differencing scheme to solve the heat equation on a
multi-dimensional torus to a specified accuracy. The advantage of an implicit scheme is that the
timestep size in the diffusion iteration is not limited by the dimension of the network. For explicit
schemes, the timestep size is limited to 2^{−d} on a d-dimensional mesh or torus. While the
algorithm in [5] quickly decimates large load imbalances, it converges slowly once a smooth,
low-frequency state is reached. One way to overcome this difficulty is to increase the timestep
size as the load imbalance becomes less severe. A rigorous technique to do this can be borrowed
from work on integrating ordinary differential equations (ODE's) [13]. In particular, view the
problem as a system of ODE's, ∂L/∂t = AL, and apply two methods to calculate δL for a particular δt. (Recognize that although A is the matrix that results from the spatial discretization of the curvature operator in a partial differential equation (PDE), the load balancing problem is actually spatially discrete to begin with and is thus a system of ODE's instead of a PDE. The analogy to the diffusion PDE only guides the construction of A.) The first method for calculating
δL is the first-order accurate implicit technique described in [5]. This method produces a local error of O(δt²), where δt is the timestep size. The second-order accurate method in [18] produces a local error of O(δt³). Thus, if we take a timestep with both methods, the difference between the values produced by each gives us an estimate of the error for that δt. Taking the maximum such difference at any computer to be denoted err_max, we take the relative error to be err_rel = err_max / L_avg. Using this error estimate, which is proportional to δt², we can adjust δt to be as large as possible while still achieving the desired accuracy:
δt_new = δt · sqrt(α · err_des / err_rel),
where err_des is the desired accuracy and α < 1 is a safety factor to avoid having to readjust the timestep size if the previous adjustment was too large. The resulting adaptive timestepping diffusion algorithm is given in Figure 1.
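To show how diffusive transfer vectors arise, the sketch below runs a simple explicit diffusion iteration serially over a processor graph. The scheme described above is implicit with adaptive timestepping, so this is only a simplified stand-in; 'neighbors' maps each computer to its mesh neighbors, loads are assumed positive, and dt must stay below roughly the reciprocal of the maximum node degree for the explicit form to converge. All names are illustrative.

def diffusion_transfers(loads, neighbors, eff_min=0.9, dt=0.1, max_sweeps=1000):
    loads = dict(loads)                                       # work on a local copy
    transfer = {(i, j): 0.0 for i in neighbors for j in neighbors[i]}
    for _ in range(max_sweeps):
        l_avg = sum(loads.values()) / len(loads)
        if l_avg / max(loads.values()) >= eff_min:
            break                                             # balanced to the requested level
        flows = {(i, j): dt * (loads[i] - loads[j])           # work pushed from i toward j
                 for i in neighbors for j in neighbors[i]}
        for (i, j), f in flows.items():
            transfer[(i, j)] += f
            loads[i] -= f
    # net transfer vectors: positive entries mean "send this much work from i to j"
    return {(i, j): t for (i, j), t in transfer.items() if t > 0}

For a four-computer ring with loads {0: 4.0, 1: 1.0, 2: 1.0, 3: 2.0} and neighbors {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [2, 0]}, the returned vectors prescribe a net export of work from computer 0, the most heavily loaded node.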
4.4 Task Selection
Once work transfer vectors between computers have been calculated, it is necessary to determine
which tasks should be moved to meet those vectors. The quality of task selection directly impacts
the ultimate quality of the load balancing.
There are two options in satisfying a transfer vector between two computers. One can attempt
to move tasks unidirectionally from one computer to another, or one can exchange tasks between
the two computers, resulting in a net transfer of work. If the tasks' average workload is high
relative to the magnitude of the transfer vectors, it may be very difficult to find tasks that fit the
vectors. On the other hand, by exchanging tasks one can potentially satisfy small transfer vectors
by swapping two sets of tasks with roughly the same total load. In cases where there are enough
tasks for one-way transfers to be adequate, a cost metric such as that described below can be
used to eliminate unnecessary exchanges.
The problem of selecting which tasks to move to satisfy a particular transfer vector is weakly
NP-complete, since it is simply the subset sum problem. Fortunately, approximation algorithms
exist which allow the subset sum problem to be solved to a specified non-zero accuracy in polynomial
time [12]. Before considering such an algorithm, it is important to note that other concerns
may constrain task transfer options. In particular, one would like to avoid costly trans-
diffuse(·):
    ΔL_{i,j} := 0 for each neighbor j ∈ N_i
    send L_i to all neighbors j ∈ N_i
    receive L_j from all neighbors j ∈ N_i
    while L_avg / L_max < eff_min do
        repeat                                      // one implicit diffusion step of size δt
            for k := 1 to m do                      // m iterations of the implicit solve
                send L_i^(k−1) to all neighbors j ∈ N_i
                receive L_j^(k−1) from all neighbors j ∈ N_i
                update L_i^(k) from L_i^(k−1) and the L_j^(k−1)
            end for
            estimate err_max from the first- and second-order solutions; err_rel := err_max / L_avg
            if err_rel > err_des then reduce δt
        until err_rel ≤ err_des
        send L_i to all neighbors j ∈ N_i
        receive L_j from all neighbors j ∈ N_i
        ΔL_{i,j} := ΔL_{i,j} + (work pushed to j this step) for each neighbor j ∈ N_i
        if err_rel < err_des then enlarge δt for the next step
    end while
end diffuse

Figure 1: The adaptive timestepping diffusion algorithm, executed at each computer i.
fers of either large numbers of tasks or large quantities of data unless absolutely necessary. One
would also like to guide task selection to preserve, as best possible, existing communication locality
in an application. In general, one would like to associate a cost with the transfer of a given
set of tasks and then find the lowest cost set for a particular desired transfer. This problem can be
attacked by considering a problem related to the subset sum problem, namely the 0-1 knapsack
problem. In the latter problem, one has a knapsack with a maximum weight capacity W and a
set of n items with weights w i and values v i , respectively. One seeks to find the maximum-value
subset of items whose total weight does not exceed W . In the context of task selection, one has
a set of tasks each with loads l i and transfer costs c i . (It is important to note that l i can be negative
if task i is being transferred onto a given computer and that c i can also be negative if it is
actually advantageous to transfer task i to or from that computer.) For a given transfer, \DeltaL, one
wishes to find the minimum-cost set of tasks whose exchange achieves that transfer. One can
specify a cost function, C(l, i), which is the minimum cost of a subset of tasks 0 through i − 1 that achieves a net transfer of l. Letting C(0, 0) be zero, and C(l, 0) be ∞ for l ≠ 0, one can find the values of C(l, i) by computing, in order of increasing i, the following:
C(l, i) = min( C(l, i − 1), C(l − l_{i−1}, i − 1) + c_{i−1} ).
In the end, the lowest cost for transfer l is given by C(l, n). The problem with this algorithm is that the runtime is O(n² l_max), where l_max is the largest absolute value of any l_i. The algorithm is therefore pseudopolynomial [12]. One can overcome this difficulty by approximating the values l_i. If we truncate the lower b bits of each l_i, where b = ⌊log_2(ε l_max / n)⌋, the relative deviation from the optimal load transfer is at most n 2^b / l_max ≤ ε. The proof of this follows in the same manner as the proof given for the 0-1 knapsack approximation algorithm in [12]; for the sake of space, we do not reproduce it here. The run time is thereby reduced to O(n² l_max / 2^b) = O(n³ / ε). So, for any positive, non-zero ε, we can find the lowest-cost transfers in polynomial time.
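The sketch below is an exact (pseudo-polynomial) version of this selection procedure, restricted for clarity to one-way transfers with positive integer task loads; the bit truncation of the loads and the handling of negative loads and costs for exchanges, described above, are omitted, and the names are illustrative.

def select_tasks(loads, costs, target):
    """Cheapest subset whose total load is as close to 'target' as possible without exceeding it.
    Returns (achieved_load, total_cost, chosen_indices)."""
    INF = float('inf')
    C = {0: (0.0, [])}                       # C[l] = (min cost, subset) over tasks seen so far
    for i in range(len(loads)):
        updates = {}
        for l, (cost, subset) in C.items():
            nl = l + loads[i]
            if nl > target:
                continue                     # never overshoot the transfer vector
            nc = cost + costs[i]
            best = min(C.get(nl, (INF,))[0], updates.get(nl, (INF,))[0])
            if nc < best:
                updates[nl] = (nc, subset + [i])
        C.update(updates)
    best_l = max(C)                          # closest achievable load not exceeding target
    return best_l, C[best_l][0], C[best_l][1]

For example, select_tasks([3, 5, 2, 4], [1.0, 2.5, 0.5, 1.0], 7) returns (7, 2.0, [0, 3]). In the scheme above, the low-order bits of the loads would be truncated first so that the number of reachable load values, and hence the table size, stays polynomial.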
Now that the function C(l, n) has been calculated, the question becomes which transfer to use. The value of C(l, n) which is finite and for which l is closest to ΔL, without exceeding it, is the lowest cost of a transfer within ε of the transfer actually closest to ΔL (i.e., the transfer that would have been found by an exact search). One might be tempted to take the subset that yielded that value. However, by using a subset that is somewhat further away from ΔL, one can potentially achieve a much lower cost. A rigorous approach for this is as follows: Given a target accuracy ε, define ε′ = ε/2. If we perform the above approximation algorithm to accuracy ε′, we will have the lowest cost of the transfer closest, within an accuracy of ε′, to ΔL. If we then take the subset with the lowest cost that is within ε′ of that closest transfer, we will have the lowest cost subset that is within ε of the transfer actually nearest to ΔL.
The next question is how to determine the target accuracy ffl. In general, it may be unnecessary
for a computer to fully satisfy its transfer vectors. The work transfer vectors given by
the algorithms in the previous section are eager algorithms. That is, they specify the transfer of
work in instances where it may be unnecessary. In the case of a large point disturbance, for ex-
ample, the failure of two computers far from that disturbance to satisfy their own transfer vectors
may have little or no effect on the global load balance. One way of determining to what extent
a computer must satisfy its transfer vectors is the following. In general, a computer has a set
of outgoing (positive) transfer vectors and a set of incoming (negative) transfer vectors. For a
particular computer i, denote the sum of the former by \DeltaL
i and the sum of the latter by \DeltaL \Gamma
In order to achieve the desired efficiency, a computer must guarantee that its new load is less
than Lavg
eff min
. Assuming that all of its incoming transfer vectors are satisfied, either by necessity
or by chance, its new load will be at least L
. Thus, in order to guarantee that its new
load is less than Lavg
eff min
, a computer must leave at most a fraction ffl of its outgoing transfer vectors
unsatisfied, according to
eff min
Solving for the maximum such ffl gives
eff min
In practice, ffl max should have a lower limit of 10 \Gamma2 or 10 \Gamma3 , since a value of zero is possible,
especially in the case of the computer with the maximum load. Also, note that using ffl max in the
approximation algorithm does not guarantee that a satisfactory exchange of tasks will be found.
No accuracy can guarantee that, since such an exchange may be impossible with a given set of
tasks. Instead, it merely provides some guidance as to how hard the approximation algorithm
should try to find the best solution and the degree to which tradeoffs with cheaper exchanges are
acceptable.
Since the selection algorithm cannot, in general, satisfy a transfer vector in a single attempt, it
is necessary to make multiple attempts. For example, in the worst-case scenario where all of the
tasks are on one computer, only those computers that are neighbors of the overloaded computer
can hope to have their incoming transfer vectors satisfied in the first round of exchanges. In
such a case, one would expect that at least O(D) exchange rounds would be necessary. The
algorithm we propose for task selection is thus as follows. The transfer vectors are colored in
the same manner as described for the GDE algorithm. For each color, every computer attempts
to satisfy its transfer vector of that color, adjusting ε_max to account for the degree to which its
transfer vectors have thus far been fulfilled. The algorithm is repeated when the colors have been
exhausted. Termination occurs when no more progress is made in reducing the transfer vectors.
Termination can occur earlier if all of the computers have satisfied the minimum requirement
of their outgoing transfer vectors (i.e., if ε_max is one at every computer). The first termination
condition is guaranteed to be met: For a given configuration of tasks, there is some minimum
non-zero exchange. The total of outstanding transfer vectors will be reduced by at least that
amount at each step. Since the transfer vectors are finite in size, the algorithm will terminate.
This is admittedly a very weak bound. In typical situations, we have never seen task selection
require more than a few iterations-at most it has required O(D) steps in the case of severe load
imbalance. A safe approach would be to bound the number of steps by some multiple of D.
As the above selection algorithm suggests, a task may move multiple hops in the process of
satisfying transfer vectors. This movement may be discouraged by appropriate cost functions.
Since the data structures for a task may be large, this store-and-forward style of remapping may
prove costly. A better method is to instead transfer a token, which contains information about a
task such as its load and the current location of its data structures. Once task selection is complete
and these tokens have arrived at their final destinations, the computers can send the tasks' states
directly to their final locations. Note that if a cost function is used that encourages locality a
token may be moved back to the computer on which the corresponding task actually resides,
eliminating the need for any data transfer.
4.5 Task Migration
In addition to selecting which tasks to move, a load balancing framework must also provide
mechanisms for actually moving those tasks from one computer to another. Task movement must
preserve the integrity of a task's state and any pending communication. Transport of a task's
state typically requires assistance from the application, especially when complex data structures
such as linked lists or hash tables are involved. For example, the user may be required to write
routines which pack, unpack and free the state of a task.
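For concreteness, the sketch below shows what such application-supplied routines might look like for a task whose state is a set of grid cells and particles; the callback names, the byte-buffer interface and the task attributes are assumptions made for illustration, not the actual interface of any particular system.

import pickle

def pack_state(task):
    """Flatten the task's state (grid cells, particles, ...) into a contiguous byte buffer."""
    return pickle.dumps({"cells": task.cells, "particles": task.particles})

def unpack_state(task, buffer):
    """Rebuild the task's state on the destination computer from the received buffer."""
    state = pickle.loads(buffer)
    task.cells, task.particles = state["cells"], state["particles"]

def free_state(task):
    """Release the source copy once the destination has taken ownership."""
    task.cells, task.particles = None, None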
5 Parametric Experiments
This section presents the results of various parametric experiments, exposing the trade-offs between
different load balancing mechanisms. In particular, comparisons are drawn between the
various transfer vector algorithms, and the influence of cost metrics on task migration and application
locality is demonstrated.
5.1 Comparison of Transfer Vector Algorithms
Some of the transfer vector algorithms presented in Section 4.3 have been previously compared
in terms of their execution times [2, 7, 22, 24]. What has been poorly studied, with the exception
of experiments in [22], is the amount of work transfer these algorithms require to achieve
load balance. The algorithms in Section 4.3 were implemented using the Message Passing Interface
[16] and were run on up to 256 processors of a Cray T3D. The HB algorithm was mapped to
the three-dimensional torus architecture of the T3D by partitioning the network along the largest
dimension at each stage and transferring work between the processors at the center of the plane
of division. The GDE and diffusion algorithms took advantage of the wrap-around connections.
Figure
2 compares the total work transfer and execution times for the above transfer vector algorithms
on varying numbers of processors. In this case, a randomly chosen computer contained all
of the work in the system, and the transfer vector algorithms improved the efficiency to at least
percent. This scenario was intended to illustrate the worst-case behavior of the algorithms and
is the case for which much analysis of the algorithms has been done. In Figure 3 the same quantities
are compared, except that the load was continuous random variable distributed uniformly
between 0.8 and 1.2. The goal here was to illustrate the algorithms' performance characteristics
in a more realistic situation-in particular, that of balance maintenance.
As
Figure
shows, with the exception of the HB method, all of the algorithms transferred a
fairly judicious amount of work. The diffusion and AGDE algorithms transferred the least work,
with the DHB and OGDE algorithms transferring up to 30 and 12 percent more work, respectively.
Figure 2: Worst-case total work transfer (left) and execution times (right) of various transfer vector algorithms for varying numbers of processors.
(diff-1 denotes the diffusion algorithm presented in [5], and diff-2 is the diffusion algorithm
presented here.) In this case, the AGDE algorithm seems to be the best bet, transferring
the same amount of work as the diffusion algorithms and doing so at least ten times faster. It is
on such basis that the GDE algorithm has been considered superior to diffusion [24]. However,
the more typical case in Figure 3 tells a somewhat different story. In this case we see that the
diffusion algorithms transferred the least work. Specifically, the other algorithms transferred up
to 127 percent more work in the case of the HB method, 80 percent more for the DHB technique,
percent more for AGDE and 60 percent more for OGDE. As the number of processors grew,
however, the speed advantage of the non-diffusive algorithms was much less apparent than in
the point disturbance scenario. Given that the transfer of tasks can be quite costly in applications
involving gigabytes of data, the small performance advantage (at most 14 milliseconds in
this case) offered by the non-diffusive algorithms is of questionable value.
A few other important points to note are these: Although the OGDE algorithm was somewhat
faster than the AGDE algorithm, as its proponents in [24] have shown, it transferred around 20
percent more work in the above test cases. Also, despite the speed of the HB algorithm, which
was the primary consideration in [7], the algorithm transfers an extraordinary amount of work
in order to achieve load balance, as was also illustrated in [22]. There thus appears to be little to
recommend it, except perhaps in the case of tree or linear array networks.
Figure 3: Average-case total work transfer (left) and execution times (right) of various transfer vector algorithms for varying numbers of processors.
5.2 Task Movement Reduction and Locality Preservation
If cost is not used to constrain task movement, a prodigious number of tasks will often be trans-
ferred, and the transfer of those tasks will negatively impact communication locality. The following
experiments demonstrate that, by providing an appropriate cost function for task move-
ment, one can drastically reduce the impact of load balancing on an application.
If task movement is deemed "free," a large number of tasks will often be transferred in order
to achieve load balance. For example, in 100 trials of an artificial computation on 256 nodes of
an Intel Paragon with 10 tasks per node and a mean efficiency of 70 percent, an average of 638
tasks were transferred to achieve an efficiency of at least 90 percent. Certainly one would not
expect that 25 percent of the tasks needed to be transferred for such an improvement. By setting
the transfer cost of a task to be one instead of zero, the average number of tasks transferred was
reduced by a factor of four, to 160. This is approximately six percent of the tasks in the system.
Reducing the size of the tasks transferred may prove more important than reducing the number
of tasks transferred. For example, it may be less expensive to transfer two very small tasks
than a single, much larger one. In an experiment identical to the above, where the sizes of the tasks' data structures were uniformly distributed between 128 and 512 kilobytes,
taking a task's transfer cost to be the size of its data structures reduced the average time to migrate
all of the tasks from 8.4 to 3.8 seconds. Similar results were obtained in the simulation of a
silicon wafer manufacturing reactor running on a network of 20 workstations. This application
is briefly described in Section 6.2. In that case, using unit task transfer cost reduced the transfer
time by 50 percent over zero cost, and using the tasks' sizes as the transfer cost reduced the
transfer time by 61 percent.
Another concern in the transfer of tasks is that such transfers not disrupt the communication
locality of an application. If the communication costs of an application are significantly
increased by relocating tasks far from the tasks with which they communicate, it may be better
not to load balance. Under random load conditions, several locality-preserving cost metrics
were compared. In the first case, a task's transfer cost was taken to be the change in the distance
from the actual location of its data structures to its proposed new location. I.e., the transfer for
task i was
c_i = dist(M_new(i), M_old(i)) − dist(M_cur(i), M_old(i)),
where dist is a function which gives the network distance between any two computers, and M old ,
M cur and M new are the original task mapping, the current proposed task remapping and the new
proposed task remapping, respectively. In short, the cost of a transfer is positive if it increases the
distance between the proposed new location of a task and its old location, and the cost is negative
if that distance decreases. This cost takes nothing into account regarding the location of a task's
communicants. So, once a task has moved away from its neighbors, there is no encouragement
for it to move back. Thus, one would expect this metric to retard locality degradation but not to
prevent it.
Another metric considered was that the cost be the change in a task's distance from its original
location when the computation was first started.
In this case, a task is encouraged to move back to where it began. If the locality was good to
begin with, one would expect this metric to preserve that locality. One would not expect it to
improve locality that was poor initially.
The final cost metric used was based on the idea of a center of communication. In other
words, for each task, the ideal computer at which to relocate it was determined by finding M center
which minimized
Σ_{j ∈ N_i} V_{i,j} · dist(M_center(i), M_old(j)),
where V_{i,j} is the cost of communication between tasks i and j. In a two-dimensional mesh, for example, one would calculate the weighted average of the row/column locations of a task's neighbors. The cost of moving a task is then the change in distance from its ideal location,
c_i = dist(M_new(i), M_center(i)) − dist(M_cur(i), M_center(i)).
Of course, a task's neighbors are moving at the same time, so the ideal location is changing somewhat
during the selection process. In most cases, however, one would expect the ideal location
of a task not to change greatly even if its neighbors move about somewhat. One would expect
that this metric would improve poor locality as well as maintain existing locality.
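A sketch of this last metric on a two-dimensional mesh follows, with computers identified by (row, column) coordinates and Manhattan network distance. V[(i, j)] holds the communication volume between tasks i and j, and the rounding of the weighted average to a grid location is an illustrative choice; all names are assumptions.

def dist(a, b):
    return abs(a[0] - b[0]) + abs(a[1] - b[1])     # Manhattan distance on the mesh

def center_of_communication(task, neighbors, V, M_old):
    """Weighted-average row/column of the computers hosting the task's communicants."""
    total = sum(V[(task, j)] for j in neighbors[task])
    row = sum(V[(task, j)] * M_old[j][0] for j in neighbors[task]) / total
    col = sum(V[(task, j)] * M_old[j][1] for j in neighbors[task]) / total
    return (round(row), round(col))

def center_cost(task, M_new, M_cur, neighbors, V, M_old):
    """Change in distance from the task's ideal computer; negative values reward moves
    that bring the task closer to its communicants."""
    center = center_of_communication(task, neighbors, V, M_old)
    return dist(M_new[task], center) - dist(M_cur[task], center)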
The three metrics described above were compared with the zero-cost metric in a synthetic
computation similar to that described above. Once again, the computation was begun on a 16 by
16 mesh of Paragon nodes with 10 tasks each. The tasks were connected in a three-dimensional
grid, with each task having an average of two neighbors on the local computer and one neighbor
on each of the four adjacent computers. Thus, the initial locality was high. After load balancing
had brought the efficiency to 90 percent, the task loads were changed so that the efficiency
was reduced to around 70 percent, and each task would calculate the average distance between
itself and its neighbors. Figure 4 shows this average distance as a function of the number of
load balancing steps. As one can see, locality decays rapidly if no attempt is made to maintain
it. The first cost metric slows that decay but does not prevent it. The second and third metrics
limit the increase in the average distance metric to factors of 2.1 and 2.6, respectively. A case is
also presented in Figure 4 in which the locality was poor initially-tasks were assigned to random
computers. The third metric was used to improve the locality, and ultimately reduced the
average distance between communicating tasks by 79 percent. This is within 23 percent of the
locality obtained when the problem was started with high locality. As these figures show, a cost
metric can have a tremendous impact on the locality of an application. The metrics used were
fairly simple; more complex metrics might yield even better results.
6 Applications Experiments
The load balancing algorithm presented in Section 4 was applied to two large-scale applications
running under the Scalable Concurrent Programming Library, which was formerly called the
Figure 4: Average distance between communicating tasks as a function of load balancing steps for various locality metrics (left) and the improvement of initially poor locality (right).
Concurrent Graph Library [18]. This section gives a brief overview of that programming library
as well as the applications, including their algorithms and the specific problems to which they
were applied. It also provides performance numbers before and after load balancing, demonstrating
the practical efficacy of the load balancing framework for one application and exposing
an interesting problem in the second case.
6.1 The Scalable Concurrent Programming Library
The Scalable Concurrent Programming Library (SCPlib) provides basic programming technology
to support irregular applications on scalable concurrent hardware. Under SCPlib, tasks communicate
with one another over unidirectional channels. The mapping of tasks to computers is
controlled by the library and is hidden from the user by these communication channels. Since the
mapping of work to computers is not explicit, it is possible to dynamically change this mapping,
so long as the user provides some mechanism for sending and receiving the context of a task
(i.e., the task's state). SCPlib uses a general abstraction in which the user can reuse their existing
checkpointing routines to read/write the data from/to a communication port (instead of a file
port).
Figure
5 shows an example computational graph and its mapping to a set of computers,
as well as a schematic representation of the software structure of an individual task.
The above functionality is layered on top of system-specific message-passing, I/O, thread
and synchronization routines. As a result of its small implementation interface, porting the li-
Figure 5: A computational graph of nine tasks (represented by shaded discs) mapped onto four computers. The user portion is comprised of a task's state and routines that act upon it. The library portion is comprised of a communication list and auxiliary routines such as load balancing, granularity control and visualization functions.
braries to a new architecture typically requires only a few days. In fact, the libraries have been
used on a wide range of distributed-memory multicomputers, such as the Cray T3D and T3E, the
Intel Paragon and the Avalon A12, shared-memory systems, such as the Silicon Graphics PowerChallenge
and Origin 2000, and networked workstations and PC's, the latter running Windows
NT.
6.2 Plasma Reactor Simulations
Direct Simulation Monte Carlo (DSMC) is a technique for the simulation of collisional plasmas
and rarefied gases. The DSMC method solves the Boltzmann equation by simulating individual
particles. Since it is impossible to simulate the actual number of particles in a realistic system, a
small number of macroparticles are used, each representing a large number of real particles. The
simulation of millions of these macroparticles is made practical by decoupling their interactions.
First, the space through which the particles move is divided into a grid. Collisions are considered
only for those particles within the same grid cell. Furthermore, collisions themselves are not
detected by path intersections but rather are approximated by a stochastic model, for which the
parameters are the relative velocities of the particles in question. Statistical methods are used to
recover macroscopic quantities such as temperature and pressure. By limiting and simplifying
dsmc_compute():
    do
        move particles
        send away particles that exit current partition
        receive particles from neighboring partitions
        collide particles
        gather/scatter to obtain global statistics
        calculate termination condition based on global statistics
    while not converged

Figure 6: Concurrent DSMC Algorithm
the interactions in this fashion, the order of the computation is drastically reduced.
Hawk is a three-dimensional concurrent DSMC application which has been used to model
neutral flow in plasma reactors used in VLSI manufacturing [14, 18]. The DSMC algorithm
that executes at each partition of the problem is given in Figure 6. Each task in the concurrent
graph represents a partition of physical space and executes this algorithm. The state of a task is
the collection of grid cells and particles contained in a region. The physics routines incorporate
associated collision, chemistry and surface models. The communication list is used to implement
inter-partition transfers resulting from particle motion.
The Gaseous Electronics Conference (GEC) reactor is a standard reactor design that is being
studied extensively. In an early version of Hawk which used regular, hexahedral grids, a
simulation of the GEC reactor was conducted on a 580,000-cell grid. Of these cells, 330,000
cells represented regions of the reactor through which particles may move; the remaining "dead"
(particle-less) cells comprise regions outside the reactor. Simulations of up to 2.8 million particles
were conducted using this grid. As this description details, only 57 percent of the grid cells
actually contained particles. Even for those cells that did contain particles, the density varied
by up to an order of magnitude. Consequently, one would expect that a standard spatial decomposition
and mapping of the grid would result in a very inefficient computation. This was indeed
the case. The GEC grid was divided into 2,560 partitions and mapped onto 256 processors of
an Intel Paragon. Because of the wide variance in particle density for each partition, the overall
efficiency of the computation was quite low, at approximately 11 percent. This efficiency was
improved to 86 percent by load balancing, including the cost of load balancing. This resulted
in an 87 percent reduction in the run time. Figure 7 shows the corresponding improvement in
workload distribution.
On a more recent version of the Hawk code, which uses irregular, tetrahedral grids, a simulation
was conducted on a 124,000-cell grid of the GEC reactor. This problem was run on 128 processors
of an Intel Paragon. Each processor had approximately five partitions mapped to it. Although
the load changed rapidly during the early timesteps, as the number of particles increased
from zero to 1.2 million, load balancing was able to maintain an efficiency of 82 percent, reducing
the runtime by a factor of 2.6. Load balancing for this problem required, on average, 12
seconds per attempt.
Hawk has also been used in the simulation of proprietary reactor designs at the Intel Cor-
poration. These simulations were conducted on networks of between 10 and 25 IBM RS6000
workstations. Without load balancing, the efficiency of these computations was typically between
percent. Load balancing was able to maintain an efficiency of over 80 percent,
increasing throughput by as much as a factor of two.
Many of the load balancing techniques described in this paper have also been incorporated
into another DSMC code, developed by researchers at the Russian Institute of Theoretical and
Applied Mechanics [8]. In this case, however, the work transfer vectors were not satisfied by
transferring entire partitions from one computer to another, but rather by exchanging small groups
of cells along the partition interfaces between adjacent computers. A feature of this approach is
that locality is naturally maintained since all one is doing, in effect, is adjusting partition bound-
aries. For a problem of space capsule reentry running on up to 256 processors of an Intel Paragon,
percent of linear speedup was obtained with dynamic load balancing, versus 55 percent of
ideal speedup for a random static mapping, and 10 percent of ideal speedup with no load bal-
ancing. It is interesting to note that the random static mapping actually achieved fairly good
load balance, but that the communication between the widely distributed cells was very costly,
reducing scalability.
6.3 Ion Thruster Simulations
Particle-in-Cell (PIC) is a computational technique used for simulating highly rarefied particle
flows in the presence of an electromagnetic field. The fundamental feature of PIC is the order-
Figure 7: Utilization distributions for 100 time steps of the DSMC code before and after load balancing.
reducing method of calculating the interaction between particles and the field. A grid is super-imposed
on the computational domain. The electromagnetic effect of each particle with respect
to the vertices of the grid cell containing it is calculated. Then, the governing field equations are
solved over grid points, typically using an iterative solver. Once the field solver has converged,
the effects of the new field are propagated back to the particles by adjusting their trajectories ac-
cordingly. This reciprocal interaction is calculated repeatedly throughout the computation until
some termination criterion (such as particle concentration) is reached.
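The per-particle gather/scatter step can be illustrated with a short sketch. The actual weighting scheme used by the codes discussed here is not specified in the text; the code below simply shows, in one dimension and with linear weighting, how a particle's charge is shared among the vertices of the cell containing it:

/* Illustrative sketch only: deposit one particle's charge onto the two
   vertices of the 1-D grid cell containing it, using linear weighting. */
void deposit_charge_1d(double x, double q, double dx, double *rho, int n_nodes)
{
    int i = (int)(x / dx);              /* index of the cell containing x  */
    double w = (x - i * dx) / dx;       /* fractional position inside cell */
    if (i < 0 || i + 1 >= n_nodes) return;
    rho[i]     += q * (1.0 - w);        /* share assigned to the left vertex  */
    rho[i + 1] += q * w;                /* share assigned to the right vertex */
}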
The Scalable Concurrent Programming Laboratory, in collaboration with the Space Power
and Propulsion Laboratory of the MIT Department of Aeronautics and Astronautics, developed
a 3-D concurrent simulation capability called PlumePIC [15]. The PIC algorithm for a partition
of the problem is presented in Figure 8. The state associated with a task is comprised of a portion
of the grid and the particles contained within the corresponding physical space. The communication
list associated with each task of the graph describes possible destinations for particles
that move outside a partition and data dependencies required to implement the field solver. The
physics routines used in Figure 8 describe the dynamics of particle movement and the solution
of the field.
The phenomenon of ion thruster backflow was studied in a simulation of the ESEX/Argos
satellite configuration using parameters for a Hughes thruster. The grid used contained 9.4 million
axially aligned hexahedra and was partitioned into 1,575 blocks, which were mapped onto
pic_compute( )
  while time not exhausted do
    move particles
    send away particles that exit current partition
    receive particles from neighboring partitions
    update charge density based on new particle positions
    gather/scatter to obtain global norm
    calculate termination condition based on global norm
    while global norm does not satisfy termination condition do
      send boundary potentials to neighbors
      receive boundary potentials from neighbors
      compute single iteration of the field solver
      gather/scatter to obtain new global norm
    end while
  end while
end pic_compute
Figure 8: Concurrent PIC Algorithm
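The "gather/scatter to obtain global norm" steps above require a global reduction across all tasks. One way such a step could be realized, shown here only as a sketch and not as the authors' code, is with an MPI all-reduce over each task's local contribution to the residual norm:

#include <math.h>
#include <mpi.h>

/* local_norm2 is assumed to hold the squared residual norm of this task's
   portion of the field solve; every task receives the global norm. */
double global_residual_norm(double local_norm2, MPI_Comm comm)
{
    double global_norm2 = 0.0;
    MPI_Allreduce(&local_norm2, &global_norm2, 1, MPI_DOUBLE, MPI_SUM, comm);
    return sqrt(global_norm2);
}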
256-processor Cray T3D. During the course of the simulation, up to 34 million particles were
moving through the domain. The distribution of those particles was highly irregular. Moreover,
this distribution changed dramatically over time. As a result, any static mapping of grid partitions
to computers would result in large inefficiencies at some point in the computation. This
fact is illustrated in Figure 9, which shows that each computer spends a large percentage of its
time idle. Even after load balancing, the idle time for each computer, while often better, was still
very high. Certainly, the load balancing algorithm did not improve the work distribution to the
same extent that it did with the DSMC code. Closer examination reveals that this shortcoming
was due to the two-phase nature of the PIC code. The DSMC application is a single-phase com-
putation, so load balancing it was fairly straight-forward. The PIC code has two phases, particle
transport and field solution, each with very different load distribution characteristics. As a result,
balancing the total load of these two phases on any given computer did not balance the individual
phases of the computation. This fact is graphically illustrated in Figure 10. As one can see,
while the load distribution for the total load at each computer was improved dramatically (at least
in the sense that the variance is greatly reduced), the load distributions for the two component
phases remained very poor. Consequently, the overall efficiency was low. To see this effect on
a smaller scale, consider the case of two computers as shown in Figure 11: One has 50 units of
Figure 9: Run time breakdowns (idle, communication, particle push, and field solve times, in seconds, per computer) for 100 time steps of the PIC code, starting at several different time steps. Each pair of adjacent bars shows the average time components for each computer before and after load balancing, respectively. (The decrease in field solve time in the last time step is due to the fact that the field is reaching a steady state, and hence the iterative solver converges more quickly.)
Figure 10: Pre-load balancing (left) and post-load balancing (right) utilization distributions (percent utilization versus number of processors) for the computers, based on total work, field solver work, and particle push work.
phase one work and 100 units of phase two work. The other computer has 100 phase one and
50 phase two units. Obviously, both computers have the same total amount of work. However,
because there is synchronization between the completion of phase one by both computers before
phase two can begin, the computation is inefficient: The first computer must wait for the second
before both can start phase two, and the second computer must wait for the first before the computation
can complete. The above examples both suggest that what one needs is a load balancing
strategy that jointly balances each phase of the computation. It is impractical to alternate between
two distributions, for example, because the phases may be finely interleaved, making the cost
of frequent redistribution of work prohibitive. One way of doing this is to consider load to be
a vector, instead of a scalar, where each vector component is the load of a phase of the compu-
tation. If each component is balanced separately, then the problems encountered above would
be circumvented: Each computer would have a roughly equal amount of work for each phase
(implying that the total amount of work is also equal). Hence, little or no idle time would occur
at synchronization points between phases. Notice that the characteristics of the PIC code also
imply that, in general, one must assign multiple partitions to each computer. Some regions of
the grid will have a high particle-to-cell ratio. A partition in such a region must be paired with
a partition with a low particle-to-cell ratio to achieve effective load balancing of both phases.
A similar situation is illustrated in Figure 12, which shows how a simple domain cannot be divided
into two contiguous pieces that balance the loads of the two computers to which they are
mapped.
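Working the two-computer example above through (taking one unit of work as one unit of time): with synchronization between the phases, the makespan is max(50, 100) + max(100, 50) = 200 time units, while each computer performs only 150 units of useful work, so the efficiency is 150/200 = 75 percent even though the total loads are identical.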
7 Related Work
Presented here is a summary of work related to the methodology and techniques used in this
paper.
Gradient load balancing methods have been explored extensively in the literature [10, 11,
22]. As pointed out in [11, 22], the basic gradient model may result in over- or undertransfers of
work to lightly loaded processors. The authors of [11] present a workaround in which computers
check that an underloaded processor is still underloaded before committing to the transfer, which
is then conducted directly from the overloaded to underloaded processor. While the method does
have the scalability of diffusive and GDE strategies, it has been shown to be inferior in its performance [22].
Figure 11: Demonstration of low efficiency in a "balanced" system (two computers, two phases, with synchronization added between the phases). In the example, both computers have the same total amount of work (i.e., they are load balanced in some sense). However, because of synchronization interposed between the unbalanced phases, idle time is introduced.
Recursive bisection methods operate by partitioning the problem domain to achieve load balance
and to reduce communication costs. Most presentations of these techniques appear in the
context of static load balancing [1, 23], although formulations appropriate for dynamic domain
repartitioning do exist [19, 20]. While many methods exist for repartitioning a computation,
including various geometrically based techniques, the most interesting methods utilize the spectral
properties of a matrix encapsulating the adjacency in the computation. Unfortunately, these
methods have a fairly high computational cost. They also blur the distinct phases of load balancing
presented in Section 1. The combination of these limitations makes such techniques unsuitable
for use in a general purpose load balancing framework.
Heuristics for load balancing particle simulations (relevant here because of the two applications
targeted in Section 6) are presented in [4, 9]. It is interesting to note that the authors of
[4] observed the same phenomenon in applying their method to a PIC application as was seen in
Section 6.3. Namely, their methods only worked well when the particle push phase substantially
dominated over the field solve phase. This is due to the fact that any imbalance in the field phase
was completely neglected by their inherently scalar approach. The authors suggested no remedy
for the situation, however.
Other task-based approaches to load balancing include a scalable task pool [6], a heuristic for
Figure 12: The bars represent a one-dimensional space in which phase one dominates at one end and phase two dominates at the other. This domain cannot be divided evenly between two computers by a single cut. A cut down the middle would balance the total load, but neither of the component phases would be balanced. A cut anywhere else might balance either the first or the second phase but not both. The only way to achieve a balance is to assign multiple partitions to each computer.
transferring tasks between computers based on probability vectors [3] and a scalable, iterative
bidding model [17]. All of these techniques make assumptions, such as that of complete task
independence or task load uniformity, that are not applicable in the context of our work.
8 Conclusion and Future Work
This paper describes a practical, comprehensive approach to load balancing that has been applied
to non-trivial applications. Incorporated into the approach are a new diffusion algorithm, which
offers a good trade-off between total work transfer and run time, and a task selection mechanism,
which allows task size and communication costs to guide task movement. More work remains
to be done, however. The following three areas of improvement could dramatically increase the
effectiveness and utility of the strategy presented here:
Consider load as a vector rather than a scalar quantity. The experiments with the PIC
code in Section 6 clearly demonstrate the limitations of the scalar view of load. While
the load balancing algorithm clearly achieved a good balance for the total load on each
computer, it failed to balance the components of the load. As a result, the overall efficiency
was low. Only by jointly balancing the phases comprising a computation can one hope
to achieve good overall load balance; viewing load as a vector is one way to accomplish
this [21].
Extend load balancing to the heterogeneous case. For computers with heterogeneous processing capacity, the relative capabilities of the computers must be taken
into account in work movement decisions. For the load diffusion algorithm, the situation is
analogous to heat diffusion in heterogeneous media. Task selection must also be modified
to account for the change in a task's runtime as it migrates from one computer to another.
Use dynamic granularity control. Task-based load balancing strategies fail whenever
the load of a single task exceeds the average load over all computers. No matter where
such a task is moved, the computer to which it is mapped will be overloaded. By dividing
that task into smaller subtasks, one can alleviate this problem by providing viable work
movement options. In general, task division and conglomeration can be used to dynamically
manage the granularity of a computation so as to maintain the best number of tasks-
increasing or decreasing the available options as necessary.
By incorporating the above changes, a load balancing framework could be applied in a greater
variety of situations. In the meantime, the methods described here are useful for fine- and medium-
grain, single-phase applications running on homogeneous computing resources.
Acknowledgements
Access to an Intel Paragon was provided by the California Institute of Technology Center for
Advanced Computing Research. Access to a Cray T3D was provided by the National Aeronat-
ics and Space Administration Jet Propulsion Laboratory and was facilitated by the California
Institute of Technology.
--R
"A fast multilevel implementation of recursive spectral bisection for partitioning unstructured problems,"
"Dynamic load balancing for distributed memory multiprocessors,"
"Dynamic load balancing using task-transfer probabilities,"
"Dynamic load balancing for a 2D concurrent plasma PIC code,"
"A parabolic load balancing algorithm,"
"A distributed implementation of a task pool,"
"A multi-level diffusion method for dynamic load balancing,"
"Parallel DSMC Strategies for 3D Com- putations,"
"Dynamic load balancing for parallelized particle simulations on MIMD com- puters,"
"The gradient model load balancing method,"
"Parallel load-balancing: an extension to the gradient model,"
Computational Complexity.
Numerical Recipes in C.
"Concurrent Simulation of Plasma Reac- tors,"
"Three-dimensional plasma paricle-in-cell calculations of ion thruster backflow contamination,"
MPI: The Complete Ref- erence
"A partially asynchronous and iterative algorithm for distributed load balancing,"
"The concurrent
"An improved spectral bisection algorithm and its application to dynamic load balancing,"
"Dynamic load-balancing for PDE solvers on adaptive unstructured meshes,"
"A Load Balancing Technique for Multiphase Computa- tions,"
"Strategies for dynamic load balancing on highly parallel computers,"
"Performance of dynamic load balancing algorithms for unstructured mesh calculations."
Load Balancing in Parallel Computers.
--TR
--CTR
J. Ray, Optimization of distributed, object-oriented systems, Addendum to the 2000 proceedings of the conference on Object-oriented programming, systems, languages, and applications (Addendum), p.153-154, January 2000, Minneapolis, Minnesota, United States
Javier Roca , J. Carlos Ortega , J. Antonio lvarez , Julia Mateo, Data neighboring in local load balancing operations, Proceedings of the 9th WSEAS International Conference on Computers, p.1-6, July 14-16, 2005, Athens, Greece
K. Hering , J. Lser , J. Markwardt, dibSIM: a parallel functional logic simulator allowing dynamic load balancing, Proceedings of the conference on Design, automation and test in Europe, p.472-478, March 2001, Munich, Germany
Bin Fu , Zahir Tari, A dynamic load distribution strategy for systems under high task variation and heavy traffic, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
A. Corts , A. Ripoll , F. Ced , M. A. Senar , E. Luque, An asynchronous and iterative load balancing algorithm for discrete load model, Journal of Parallel and Distributed Computing, v.62 n.12, p.1729-1746, December 2002
Marco Conti , Enrico Gregori , Fabio Panzieri, QoS-based Architectures for Geographically Replicated Web Servers, Cluster Computing, v.4 n.2, p.109-120, April 2001
K. Antonis , J. Garofalakis , I. Mourtos , P. Spirakis, A hierarchical adaptive distributed algorithm for load balancing, Journal of Parallel and Distributed Computing, v.64 n.1, p.151-162, January 2004
G. George Yin , Cheng-Zhong Xu , Le Yi Wang, Optimal Remapping in Dynamic Bulk Synchronous Computations via a Stochastic Control Approach, IEEE Transactions on Parallel and Distributed Systems, v.14 n.1, p.51-62, January
Fong , Cheng-Zhong Xu , Le Yi Wang, Optimal periodic remapping of dynamic bulk synchronous computations, Journal of Parallel and Distributed Computing, v.63 n.11, p.1036-1049, November
Hans-Heinrich Ngeli, Dynamic load balancing by diffusion in heterogeneous systems, Journal of Parallel and Distributed Computing, v.64 n.4, p.481-497, April 2004
Lap-sun Cheung , Yu-kwok Kwok, On Load Balancing Approaches for Distributed Object Computing Systems, The Journal of Supercomputing, v.27 n.2, p.149-175, February 2004
Alexander E. Kostin , Isik Aybay , Gurcu Oz, A Randomized Contention-Based Load-Balancing Protocol for a Distributed Multiserver Queuing System, IEEE Transactions on Parallel and Distributed Systems, v.11 n.12, p.1252-1273, December 2000
Yu-Kwong Kwok , Lap-Sun Cheung, A new fuzzy-decision based load balancing system for distributed object computing, Journal of Parallel and Distributed Computing, v.64 n.2, p.238-253, February 2004
Saeed Iqbal , Graham F. Carey, Performance analysis of dynamic load balancing algorithms with variable number of processors, Journal of Parallel and Distributed Computing, v.65 n.8, p.934-948, August 2005
Arnaud Legrand , Hlne Renard , Yves Robert , Frdric Vivien, Mapping and Load-Balancing Iterative Computations, IEEE Transactions on Parallel and Distributed Systems, v.15 n.6, p.546-558, June 2004
Changxun Wu , Randal Burns, Handling Heterogeneity in Shared-Disk File Systems, Proceedings of the ACM/IEEE conference on Supercomputing, p.7, November 15-21,
Faouzi Kamoun, Toward best maintenance practices in communications network management, International Journal of Network Management, v.15 n.5, p.321-334, September 2005
Kirk Schloegel , George Karypis , Vipin Kumar, Wavefront Diffusion and LMSR: Algorithms for Dynamic Repartitioning of Adaptive Meshes, IEEE Transactions on Parallel and Distributed Systems, v.12 n.5, p.451-466, May 2001
Changxun Wu , Randal Burns, Tunable randomization for load management in shared-disk clusters, ACM Transactions on Storage (TOS), v.1 n.1, p.108-131, February 2005
Jack Dongarra , Ian Foster , Geoffrey Fox , William Gropp , Ken Kennedy , Linda Torczon , Andy White, References, Sourcebook of parallel computing, Morgan Kaufmann Publishers Inc., San Francisco, CA, | massively parallel computing;dynamic load balancing;diffusion;irregular problems |
629439 | Low Expansion Packings and Embeddings of Hypercubes into Star Graphs. | AbstractWe discuss the problem of packing hypercubes into an n-dimensional star graph S(n), which consists of embedding a disjoint union of hypercubes U into S(n) with load one. Hypercubes in U have from $\lfloor n/2 \rfloor$ to $(n+1)\cdot \left\lfloor {\log_2\,n} \right\rfloor -2^{\left\lfloor {\log_2n} \right \rfloor +1}+2$ dimensions, i.e., they can be as large as any hypercube which can be embedded with dilation at most four into S(n). We show that U can be embedded into S(n) with optimal expansion, which contrasts with the growing expansion ratios of previously known techniques.We employ several performance metrics to show that, with our techniques, a star graph can efficiently execute heterogeneous workloads containing hypercube, mesh, and star graph algorithms. The characterization of our packings includes some important metrics which have not been addressed by previous research (namely, average dilation, average congestion, and congestion). Our packings consistently produce small average congestion and average dilation, which indicates that the induced communication slowdown is also small. We consider several combinations of node mapping functions and routing algorithms in S(n), and obtain their corresponding performance metrics using either mathematical analysis or computer simulation. | Introduction
The star graph [1] was proposed as an attractive interconnection
network for parallel processing, featuring smaller degree
and diameter than a hypercube [2] of comparable size.
However, the earlier introduction of hypercube networks,
along with their interesting characteristics, has given to such
networks considerable popularity. A number of hypercube-
configured parallel computers was built in recent years [2],
and many hypercube-compatible algorithms have been proposed
[3]. Despite the fact that some parallel algorithms
have also been specifically devised for the star graph (e.g.,
sorting [4], FFT [5]), we believe that the repertory of star
graphs algorithms can be significantly increased via hyper-cube
embeddings.
Research on embedding hypercubes into star graphs was
initiated by Nigam, Sahni, and Krishnamurthy [6]. The interesting
techniques introduced in [6] reveal a major challenge
for embedding hypercubes into star graphs. Namely,
topological differences between the two networks (e.g., degree
and minimum cycle length) make it difficult to obtain an
This research is supported in part by Conselho Nacional de Desenvolvimento
under the grant
No. 200392/92-1.
embedding that simultaneously achieves small dilation and
expansion (see Sec. 2 for definitions).
In this paper, we present a hypercube embedding technique
that reduces the trade-off between dilation and expansion
significantly. Our technique is referred to as packing, and
consists of embedding a disjoint union U = ∪_{k=kmin}^{kmax} p_k · Q(k), containing p_k many copies of each k-dimensional hypercube Q(k), kmin ≤ k ≤ kmax, into an n-dimensional star graph
S(n). Our packings lie in two categories, namely fixed-sized
packings (i.e., packings in which all of the embedded hypercubes
have the same size, or kmin = kmax ) and multiple-sized
packings (i.e., packings in which the embedded hypercubes
are of various sizes, or kmin ≠ kmax). All of our packings
have load 1, i.e. any node in S(n) is image to at most one
node of U .
Packings support multiple tasks, providing a means by
which a star graph can be efficiently used to run hypercube
algorithms. Moreover, they can also be used as a foundation
for implementing node allocation and task migration
strategies, making it possible to handle such issues as load
balancing and fault tolerance (see [7] for more on this topic).
The main contribution of our packing techniques relates to
expansion, which can be either: 1) a slow growing function of
n, in the case of fixed-sized packings (an expansion between
1 and 2.46 is obtained for n 10), or 2) optimal (i.e., 1),
in the case of multiple-sized packings. These small expansion
ratios need not sacrifice dilation (e.g., Sec. 4 presents
packing techniques that produce dilation 3 for all embedded
hypercubes).
We also consider in this paper an extension of our packing
techniques, which we refer to as variable-dilation embed-
dings. Variable-dilation embeddings address the necessity of
accommodating tasks requiring hypercubes that are larger
than each of the copies made available through our basic
packing techniques. Larger hypercubes Q(k + ') are formed
from 2 ' packed copies of lower-dimensional hypercubes Q(k),
which assures that small expansion is still achieved. While
these Q(k)'s are embedded with a fixed dilation d base = 3,
larger dilation (e.g., 4, 6, and so on) is produced along higher
dimension links of Q(k + '). However, the average dilation
of such an embedding (d avr ) is not much larger than d base
(e.g., d avr ranges from 3 to 4.25 for k
This paper is organized as follows. Sec. 2 introduces basic
definitions and the terminology used in the paper. Sec. 3
presents some background information. Sec. 4 presents our
techniques for packing hypercubes into the star graph. Sec. 5
discusses variable-dilation embeddings. A comparison with
related work is given in Sec. 6 and Sec. 7 concludes the paper.
Definitions and Terminology
Let G(k) be a k-dimensional graph with hierarchical struc-
ture, such that G(k + 1) is obtained recursively from c(k)
many copies of G(k). Several graphs belonging to the class
of Cayley graphs have this recursive decomposition property,
such as the hypercube and the star graph [1, 2]. The links
connecting the c(k) copies of G(k) that exist within G(k+1)
are referred to as dimension links.
We denote the set of nodes and the set of links of G(k)
by V (G(k)) and E(G(k)), respectively. An embedding of
G(k) into H(n), which we denote by F : G(k) → H(n),
is a mapping of V (G(k)) into V (H(n)) and of E(G(k)) into
paths of H(n). G(k) and H(n) are respectively referred to
as the guest and the host of F [3]. The node image of F is I(F) = {F(a) : a ∈ V(G(k))}. The load of F is the maximum number of nodes of G(k) that are mapped to any single node of H(n), and is denoted by λ(F). The dilation of F is d(F) = max{dist_H(F(a), F(b)) : (a, b) ∈ E(G(k))}, where dist_H(a, b) is the distance in H(n) between two vertices a and b of H(n). The expansion of F is X(F) = |V(H(n))| / |V(G(k))|. Let U = ∪_{k=kmin}^{kmax} p_k · G(k) denote a disjoint union of p_k many copies of each G(k), with k ranging from kmin to kmax.
For each k, we index G_j(k) with 0 ≤ j ≤ p_k − 1. The set of nodes in U is V(U) = {v : v ∈ V(G_j(k)), 0 ≤ j ≤ p_k − 1, kmin ≤ k ≤ kmax}. Accordingly, the set of links in U is E(U) = {e : e ∈ E(G_j(k)), 0 ≤ j ≤ p_k − 1, kmin ≤ k ≤ kmax}. A packing of U
into H(n), which we denote by P : U 7! H(n), is a mapping
of V (U) into V (H(n)) and of E(U) into paths of H(n). P
is a fixed-sized packing if kmin = kmax. Otherwise, P is
a multiple-sized packing. The node image of P is I(P) = {P(v) : v ∈ V(U)}. The load of P is the maximum number of nodes in U that are mapped to any single node of H(n), and is denoted by λ(P). We denote the embedding of G_j(k)
into H(n), in the context of P, by P_{j,k}. The dilation of P_{j,k} is d(P_{j,k}) = max{dist_H(P(a), P(b)) : (a, b) ∈ E(G_j(k))}. The base dilation of P is d_base(P) = max{d(P_{j,k}) : 0 ≤ j ≤ p_k − 1, kmin ≤ k ≤ kmax}. P is referred to as a template packing
if . The
expansion of P , denoted by X(P ), is:
X(P) = |V(H(n))| / (Σ_{k=kmin}^{kmax} p_k · |V(G(k))|)   (1)
Embeddings of guest graphs whose dimensionality exceeds
can often be built from a packing P , and are defined as
follows. For
the number of G(k)'s needed
to hierarchically compose one G(k '). We denote
the disjoint union of G(k)'s that compose G
dilation embedding of G which we denote
is a mapping of V
into V (H(n)) and of E(G into paths of H(n),
constrained by a packing P : U 7! H(n), such that
Equivalently, W can be
thought of as applying a transformation to P as follows. Let
be the disjoint union produced
from U , when c(k hierarchically compose
packing produced
when W is constructed within P . In general, one can apply
a series of similar transformations to P , producing a packing
P general that will hold several variable-dilation embed-
dings. Previously given definitions also apply to packings
holding variable-dilation embeddings. In particular, note
that
0;k+' .
Some terms that more precisely characterize a variable-
dilation embedding are defined as follows. The dilation of W along the i-th dimension of G(k + ℓ) is d_i(W) = max{dist_H(W(a), W(b)) : (a, b) ∈ E_i(G(k + ℓ))}, where E_i(G(k + ℓ)) denotes the set of dimension i links of G(k + ℓ), 1 ≤ i ≤ k + ℓ. The dilation vector of W is d(W) = [d_1(W), d_2(W), ..., d_{k+ℓ}(W)]. The average dilation of W is:
d_avr(W) = (1 / (k + ℓ)) · Σ_{i=1}^{k+ℓ} d_i(W)   (2)
A major advantage of variable-dilation embeddings, as opposed
to conventional embedding methods, is that the dilation
can be made significantly smaller on the average. Since
many algorithms use a limited number of dimensions at any
given step of their execution, a smaller communication slow-down
is obtained.
3 Background
3.1 The hypercube
A k-dimensional hypercube graph Q(k) = {V(Q(k)), E(Q(k))} contains 2^k nodes, which are labeled with binary strings of length k. A node φ = q_k q_{k−1} ... q_1 is connected to k distinct nodes, respectively labeled with strings φ_i = q_k ... q_{i+1} q̄_i q_{i−1} ... q_1, 1 ≤ i ≤ k, where q̄_i denotes the binary negation of bit q_i [2]. The link connecting φ and φ_i is a dimension i link of Q(k).
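As a small illustration (a sketch, not code from [2]), the neighbors of a hypercube node follow directly from this definition; here a node label is stored in an unsigned integer and bit q_1 is assumed to be the least significant bit:

/* Generate the k neighbors of a hypercube node: the dimension-i neighbor
   differs from the node only in bit q_i. */
void hypercube_neighbors(unsigned node, int k, unsigned neighbors[])
{
    for (int i = 1; i <= k; i++)
        neighbors[i - 1] = node ^ (1u << (i - 1));   /* flip bit q_i */
}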
3.2 The star graph
An n-dimensional star graph S(n) = {V(S(n)), E(S(n))} contains n! nodes which are labeled with the n! possible permutations of n distinct symbols. In this paper, we use the integers {1, 2, ..., n} to label the nodes of S(n). A node π = π_1 π_2 ... π_n is connected to (n − 1) distinct nodes, respectively labeled with permutations π_i, 2 ≤ i ≤ n (π_i's label is obtained by exchanging the first and the i-th symbol of π's label) [1]. The link connecting π and π_i is a dimension i link of S(n).
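A corresponding sketch for the star graph (again, illustrative code rather than code from [1]): a node is stored as an array holding a permutation of 1..n, and its dimension-i neighbor is obtained by swapping the first and the i-th symbols.

/* Compute the dimension-i neighbor of a star graph node, 2 <= i <= n. */
void star_neighbor(const int perm[], int n, int i, int out[])
{
    for (int j = 0; j < n; j++) out[j] = perm[j];
    int tmp = out[0];          /* exchange the first symbol ...          */
    out[0] = out[i - 1];       /* ... with the i-th symbol of the label  */
    out[i - 1] = tmp;
}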
3.3 Embedding of a mesh into S(n)
The packing and embedding techniques presented in this
paper use a two-step mapping algorithm, in which hypercubes
are initially packed into an (n − 1)-dimensional mesh M(n − 1) of size 2 × 3 × · · · × n, which is then embedded into S(n) with load 1, dilation 3, and expansion 1 via Algorithm
1 below, which is inspired by a mapping algorithm
proposed by Ranka et al. [8]. An interesting property of
Alg. 1 is presented later in this paper (see Lemma 2).
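Note that |V(M(n − 1))| = 2 · 3 · · · n = n! = |V(S(n))|, which is why a load 1 embedding of the whole mesh can simultaneously have expansion 1: every node of S(n) is the image of exactly one mesh node.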
Algorithm 1 (Mapping M(n − 1) onto S(n)):
mesh_to_star (int n, m[ ], p[ ])
{ int i, h, temp;
  for ( ... )
  for ( ... )
  for ( ... )
}
Here s_i = i + 1 denotes the width of M(n − 1) along its i-th dimension. We label the nodes of M(n − 1) with an (n − 1)-tuple of mesh coordinates, and Alg. 1 maps each such label onto a permutation of V(S(n)). The pseudocode represents node labels with vectors, such that m[ ] holds the mesh coordinates of a node and p[ ] the corresponding star graph permutation.
4 Template Packings
4.1 Preliminaries
In this section, we discuss template packings of hypercubes
into S(n), which have load 1 and base dilation 3. We present
both fixed-sized and multiple-sized packings, which respectively
embed into S(n): 1) a disjoint union
for some fixed k ∈ [⌊n/2⌋, n − 1], and 2) a disjoint union
is the largest hypercube
considered in this paper as far as template packings into S(n)
are concerned. In addition, because fixed-sized packings of
Q(bn=2c) and multiple-sized packings produce expansion 1,
we do not discuss the cases k < ⌊n/2⌋, which can be obtained
by tearing larger hypercubes after they are packed into S(n).
From the viewpoint of how hypercubes are packed into
classify our packings as symmetric or asym-
metric. Symmetric packings are those in which all dimension
a links of E(U) are mapped to dimension b links of
Accordingly,
asymmetric packings are those in which two dimension a
links (u; v) and (x; y) of E(U) may be mapped to links of
different dimensions b and c in E(M(n \Gamma 1)), unless (u; v)
and (x; y) belong to the same Q j (k) 2 U .
The dimension mapping rules that characterize a given
packing technique are not preserved when M(n \Gamma 1) is ultimately
embedded into S(n) via Alg. 1. In both symmetric
and asymmetric template packings, a dimension a link of
E(U) is mapped either to a dimension b link of E(S(n)), or
to a path b ! c ! b in S(n), where b, c can be any of the dimensions
of S(n). Although the symmetry (or asymmetry)
of a particular technique used to pack Q(k) into S(n) can
not be distinguished unless for the intermediary step where
the hypercubes are packed into M(n \Gamma 1), we use throughout
the paper the terms symmetric (or asymmetric) packings of
into S(n).
Our discussion about fixed-sized packings considers both
the symmetric and the asymmetric cases. However, our
multiple-sized packings are all asymmetric. Symmetric packings
provide a very regular arrangement of the copies of Q(k)
in M(n \Gamma 1), which is particularly useful for constructing
variable-dilation embeddings. However, they do not achieve
as small expansion as asymmetric packings do. In order to
combine the desired features of low expansion and support to
variable-dilation embeddings, we build our asymmetric packings
as an extension of their symmetric counterparts. Hence,
an asymmetric packing will often be the method of choice to
pack hypercubes into the star graph.
Intuitively, one should expect smaller expansion when: 1)
smaller hypercubes are packed into S(n), and 2) asymmetric
techniques are used.
4.2 Embedding of Q(k) into M(n − 1)
Our packing techniques take advantage of the regular structure
of M(n − 1) to achieve low expansion ratios. A preliminary
result on which our techniques are based is given
below:
Lemma 1 Q(k) can be embedded into M(n − 1) with load 1, dilation 1, if k ≤ n − 1.
To prove the lemma, we present an algorithm which
produces the desired embedding (see Alg. 2). Assume that
the argument origin[ ] taken by Alg. 2 is set to all 0's. In
addition, let use dim[ ] be a binary string with exactly k 1's.
The image of the embedding contains the mesh nodes that match the pattern: m_i = origin[i] if use_dim[i] = 0, and m_i ∈ {origin[i], origin[i] + 1} if use_dim[i] = 1, for 1 ≤ i ≤ n − 1. This image is available if k ≤ n − 1, since: 1) the width of M(n − 1) along any of its dimensions is at least 2, which
guarantees that the range selected for the coordinates of the
image nodes exists, and 2) at least k different mesh coordinates
are needed, which is satisfied when k ≤ n − 1. To
show that Alg. 2 embeds Q(k) into M(n \Gamma 1) with load 1 and
dilation 1, it suffices to note that: 1) each node q 2 V (Q(k))
has a unique image node in V (M(n \Gamma 1)), and 2) if q, q i are
adjacent in Q(k), then their respective image nodes m, m i
are adjacent in M(n \Gamma 1). 2
Algorithm 2 (Embedding Q(k) onto M(n − 1)):
cube_to_mesh (int k, n, q[ ], origin[ ], use_dim[ ])
{ int i, i_cube = 0;
  for (i = 0; i < n - 1; i++) {
    m[i] = origin[i] + use_dim[i] * q[i_cube];
    if (use_dim[i] == 1) i_cube++; } }
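For instance (under the reconstruction above), take n = 4 and k = 2, with origin[ ] all 0's and use_dim[ ] selecting the first two mesh dimensions: node q_1 q_2 of Q(2) is then mapped to mesh node (q_1, q_2, 0) of M(3). The four hypercube nodes occupy a 2 × 2 × 1 corner of the mesh, each has a distinct image, and nodes adjacent in Q(2) differ in exactly one mesh coordinate, giving load 1 and dilation 1.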
Alg. 2 uses only a limited range of the coordinates available
in M(n − 1), producing expansion n!/2^k. With multiple calls
to Alg. 2 and a proper selection of arguments origin[ ] and
use_dim[ ], one can embed multiple image-disjoint copies of Q(k) into M(n − 1). This is exactly the basis for the packing
techniques we present in this section. Naturally, our ultimate
goal is to pack hypercubes into S(n), which can be
accomplished with Alg. 1.
4.3 Symmetric fixed-sized packings
In this subsection, we present a symmetric fixed-sized packing
which embeds the disjoint union U
into S(n) with load 1 and base dilation 3, for some k ∈ [⌊n/2⌋, n − 1]. To make our discussion simpler, we define t = n − k.
Theorem 1 For 1 ≤ t ≤ ⌊(n + 1)/2⌋, there is a symmetric
fixed-sized packing P f which embeds the disjoint union
dilation d base (P f
and (4)
bn=2c
Noting that the width of M(n − 1) along its i-th dimension is s_i = i + 1, we refer to i as an even-sized dimension if s_i is even, and as an odd-sized dimension if s_i is odd. M(n − 1) has ⌊n/2⌋ even-sized dimensions and ⌊(n − 1)/2⌋ odd-sized dimensions. We partition M(n − 1) into slices of width 1
along the first t \Gamma 1 odd-sized dimensions of M(n \Gamma 1), and
slices of width 2 along all other dimensions of M(n \Gamma 1).
This partitioning process produces p f
sional induced submeshes, which we denote by M j (n \Gamma
1)[m
n\Gammat . Each of
these submeshes contain all of the nodes of M(n \Gamma 1) whose
labels match the pattern m
even
is an invariant such that
is an invariant such
that 0 b
Due to the partitioning process, these submeshes are dis-
joint. Each induced submesh has width 2 along its (n \Gamma t)
dimensions, and can host a copy of Q(n \Gamma t) with load 1,
dilation 1, which follows from Lemma 1. To embed a copy of
into an induced submesh M j (n \Gamma 1)[m
one can use Alg. 2 with arguments:
use
ae 0; for even i
otherwise, and (6)
use
use
We denote the number of usable slices produced by partitioning
its i th dimension by i , where:
The number of induced submeshes produced by the partitioning
process equals the number of packed hypercubes,
and is given by:
Alg. 2 produces the same dimension assignment for each
copy of Q(n \Gamma t), which characterizes a symmetric packing.
If we now embed using Alg. 1, a packing
with load 1 and base dilation 3 results. To complete the
proof, we note that the expansion of the packing can be
obtained by direct application of Eq. 1. 2
An algorithm that symmetrically packs Q(n \Gamma t) into S(n)
is given in [9], and is not presented here due to space constraints
The partitioning technique described in the proof of
Theor. 1 discards 1=(i along each odd-sized
which poses a restriction on the expansion
ratios that can be achieved. Each unused slice has width
along dimension i, and is in fact an (n \Gamma 2)-dimensional sub-mesh
of size 2 \Theta 3 \Theta n. For even i ?
we denote the submesh discarded along dimension i by
is the induced sub-mesh
formed by all nodes
such that m
Discarded submeshes occur along the last b(n
odd-sized dimensions i of M(n \Gamma 1). For
(or there are no discarded submeshes. This
characterizes a symmetric, fixed-sized packing of Q(bn=2c)
into S(n) with load 1, base dilation 3, and expansion 1.
Because discarded submeshes are (n \Gamma 2)-dimensional, they
cannot be used to pack additional hypercubes Q(k) with the
desired load and base dilation when
from Lemma 1. However, we can use such submeshes if
which produces an asymmetric fixed-sized
packing.
4.4 Asymmetric fixed-sized packings
In this subsection, we present an asymmetric fixed-sized
packing P f+ which embeds the disjoint union U
into S(n) with load 1 and base dilation 3, for
some As in the symmetric case, we
define
Theorem 2 For 2 t b(n \Gamma 1)=2c, there is an asymmetric
fixed-sized packing P f+ which embeds the disjoint union
base dilation d base (P f+
bn=2c
h=th
first pack p f
as described in
the proof of Theor. 1, which produces discarded submeshes
2. Because the
dimensions of interest are odd-sized (i.e., i is even), we define
i=2. For each h such that t h b(n \Gamma 1)=2c, let
n\Gammat h2hi denote the number of Q(n \Gamma t)'s that can be packed
into
slices of width 1 along the first t \Gamma 2 odd-sized
dimensions of M(n \Gamma 1)[m slices of width
along all other dimensions of M(n \Gamma 1)[m
partitioning method produces p f
n\Gammat h2hi (n \Gamma t)-dimensional
submeshes of width at least 2 along any dimension, where:
Partitioning shown above allows us
to pack p f
n\Gammat h2hi additional copies of Q(n \Gamma t) into M(n \Gamma 1),
with load 1 and dilation 1. Note, however, that the first
do not have any of their links mapped onto
dimension links of M(n \Gamma 1). The p f
n\Gammat h2hi extra
copies, however, use dimension links of M(n \Gamma 1).
Hence, the resulting packing is asymmetric.
We can use the technique just described for all induced
submeshes
This produces a total of p f+
h=t
n\Gammat h2hi packed copies of Q(n \Gamma t). The theorem
4.5 Results on fixed-sized packings
Fig. 1 depicts expansion ratios produced by our fixed-sized
packing techniques. Note that by reducing the size of the
hypercubes being packed, one achieves smaller expansion.
For example, the expansion of our
symmetric packings drops from 2.46 to 1.13 as we vary t from
to 4. Asymmetry also proves to be an efficient technique
to achieve denser packings, resulting in an expansion of at
most 1.20 among all asymmetric packings shown in Fig. 1.
4.6 Multiple-sized packings
In this subsection, we present an asymmetric multiple-
sized packing P m which embeds the disjoint union
into S(n) with load 1, base dilation
3, and expansion 1. P m supports hypercube tasks with
different node allocation requirements, and guarantees 100%
utilization of S(n), for all n.
Figure 1: Expansion ratios of packings of Q(n − t) into S(n), plotted as the expansion of the packing versus the dimensionality n of the star graph, for Q(n−1) (sym.) and for Q(n−2), Q(n−3), and Q(n−4) (sym. and asym.).
The technique used to construct P m can be summarized as
follows. Initially, we pack Q(n \Gamma 1)'s symmetrically into S(n)
as described in the proof of Theor. 1. Using the submeshes
that are left after this step, we pack Q(n \Gamma 2)'s asymmet-
rically. This process continues with asymmetric packings of
using in each step
nodes of M(n \Gamma 1) that were not used in the previous steps.
The resulting packing uses all of the nodes in S(n).
Theorem 3 There is an asymmetric multiple-sized packing P_m which embeds the disjoint union U = ∪_{k=⌊n/2⌋}^{n−1} p_k · Q(k) into S(n), with load λ(P_m) = 1, base dilation d_base(P_m) = 3, and expansion X(P_m) = 1.
Omitted due to space constraints. The interested
reader is referred to [9] for a proof.
5 Variable-Dilation Embeddings
5.1 Basic techniques
In this section, we describe how a ')-dimensional hypercube
can be embedded with variable dilation into S(n),
As described in Sec. 2, a variable-dilation embedding
produced by grouping
copies of Q(k), embedded into S(n) by a packing
packing
S(n) with dilation d(P 0
at least one variable-dilation embedding are not template
packings. As we shall see, the average dilation d aver (P 0
of the embedding of can often be made
close to d base (P ).
One particularity of our variable-dilation embedding techniques
is that the copies of Q(k) needed to compose one
must have
been packed symmetrically by P into S(n). Formally, P
should map all dimension a links of E(U 0;k+' ) to dimension
b links of E(M(n \Gamma 1)), where 1 a k and 1 b ! n. This
requirement can be met by any template packing P containing
a sufficiently large group of symmetrically packed Q(k)'s.
Hence, even in the case of asymmetric template packings, one
will often find one or more usable groups of packed Q(k)'s.
To keep our discussion short, we derive the main results of
this section for the case In Subsec. 5.4, we give examples
of variable-dilation embeddings that are formed from
Q(k)'s for which k ! n \Gamma 1.
Our variable-dilation embeddings of Q(n \Gamma 1+') into S(n)
are supported by two template packings that were presented
in Sec. 4, namely: 1) the fixed-sized packing of Q(n \Gamma 1)
into S(n), which we denote by P f , and 2) the multiple-sized
packing of Q(k) into S(n) (⌊n/2⌋ ≤ k ≤ n − 1), which we denote by P_m. Both P_f and P_m contain symmetrically packed Q(n − 1)'s, which limits the number of additional hypercube dimensions ℓ that can be produced by a variable-dilation embedding of Q(n − 1 + ℓ): since 2^ℓ copies of Q(n − 1) are needed, ℓ ≤ ⌊log_2 p⌋, where p denotes the number of symmetrically packed Q(n − 1)'s available. (15)
Figure 2: A variable-dilation embedding of Q(4) into M(3); node labels are hypercube addresses, and the legend indicates the hypercube dimensions assigned to the mesh links.
Fig. 2 depicts an example of the technique we will be describing
in this section. Q(4) is embedded into M(3) by
grouping two packed Q(3)'s, which produces dilation 2, average
dilation 1.25, and dilation vector [1; If we now
embed M(3) into S(4) using Alg. 1, the corresponding embedding
of Q(4) into S(4) has dilation 4, average dilation
3.25, and dilation vector [3; 3; 3; 4], which is justified by the
following lemma:
Lemma 2 Let ω and ω_{i,ℓ} be a pair of nodes separated by ℓ links along the i-th dimension of M(n − 1), 1 ≤ i ≤ n − 1. Alg. 1 produces an embedding of M(n − 1) into S(n) such that the images of ω and ω_{i,ℓ} are connected by a path containing at most ℓ + 2 links.
Omitted due to space constraints. The interested
reader is referred to [9] for a proof.
Theorem 4 Let f(x;
h, and ' be integers such that n 4, 2 h blog 2 nc, and
n). There is a variable-dilation
embedding W of Q(n \Gamma 1+') into S(n), with load (W
dilation along dimension i of Q(n \Gamma 1+') d i (W ), average dilation
d avr (W ), dilation d(W ), and expansion X(W ), where:
d avr (W
and X(W
Omitted due to space constraints. The interested
reader is referred to [9] for a proof.
We now present an algorithm that produces a variable-
dilation embedding of Q(n Such an embedding
has the properties specified in Theor. 4.
Algorithm 3 (Embedding of Q(n
var embed cube (int n,
f int i, m[ ], last;
for
for
last
for
mesh to star (n, m[ ], p[ ]); g
int f(int x, y)
5.2 Advanced techniques
Let Q(k a ) denote the largest hypercube that can be embedded
into S(n) via Alg. 3. Due to the restrictions 2 h
n) in Theor. 4, we
have 2.
Accordingly, let Q(k b ) denote the largest hypercube that can
be embedded with load 1 and variable dilation into S(n), considering
only the availability of Q(n \Gamma 1)'s that are produced
by either of the packings P f or P m . From Eq. 15, we have
\Delta\Pi . Finally, let Q(kmax ) denote
the largest hypercube that can be embedded with load
We note that an embedding
of Q(kmax ) into S(n) can be obtained trivially by a
random one-to-one mapping algorithm. Such an approach,
however, may result in a dilation of up to Φ(S(n)), where Φ(S(n)) = ⌊3(n − 1)/2⌋ is the diameter of S(n).
Table 1 lists values of k_a, k_b, and kmax for star graphs of practical size. Note that Alg. 3 matches the upper limit k_b for 4 ≤ n ≤ 6, and produces a maximum dimensionality k_a that is one less than the upper limit k_b for 7 ≤ n ≤ 10.
Table 1: Upper limits k_a, k_b and kmax
Reference [9] discusses how a load 1, variable-dilation embedding
of Q(k b ) into S(n) can be constructed from packed
Dilation vectors for embeddings
of Q(k b ) into S(n), for 7 n 10, are given in Table 2.
We note that, for 6 n 10, the maximum hypercube
dimensionality achieved by our variable-dilation embeddings
is still one less than kmax . To produce an additional hyper-cube
dimension via a variable-dilation embedding, one needs
more Q(n\Gamma1)'s than those produced by packings P f and P m .
Additional Q(n \Gamma 1)'s can be obtained by grouping smaller
hypercubes that are available in P m , which is discussed in [9].
The details of how these Q(n \Gamma 1)'s can compose Q(kmax ),
however, are beyond the scope of this paper.
5.3 Average dilation
Fig. 3 depicts the average dilation d avr produced by the
variable-dilation embeddings presented in Subsecs. 5.1 and
5.2. We consider the cases 3 ≤ k ≤ 20 and 4 ≤ n ≤ 10, which
correspond respectively to hypercubes of sizes 8 through
1,048,576, and star graphs of sizes 24 through 3,628,800.
As explained in Sec. 2, parallel algorithms that employ a
limited amount of hypercube dimensions at any given step
may benefit from the smaller average dilation produced by
variable-dilation embeddings. Moreover, improved performance
can be obtained by reassigning hypercube dimensions
prior to the embedding into S(n), which is possible due to
the symmetry properties of Q(k). Namely, one can relabel
the hypercube nodes, such that in the final embedding into
S(n) the most frequently used hypercube dimensions have
the smallest dilation.
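A minimal sketch of this relabeling step follows. The code is hypothetical: freq[i] is assumed to hold how often an algorithm uses hypercube dimension i, and dil[i] the dilation of dimension i in the embedding; the idea is simply to pair the most frequently used logical dimensions with the physical dimensions of smallest dilation.

#include <stdlib.h>

static const double *g_freq;
static const int *g_dil;
static int cmp_freq_desc(const void *a, const void *b)
{   /* sort logical dimensions by decreasing usage frequency */
    double d = g_freq[*(const int *)b] - g_freq[*(const int *)a];
    return (d > 0) - (d < 0);
}
static int cmp_dil_asc(const void *a, const void *b)
{   /* sort physical dimensions by increasing dilation */
    return g_dil[*(const int *)a] - g_dil[*(const int *)b];
}

/* map[logical] = physical: the most used logical dimension gets the smallest dilation */
void relabel_dimensions(int k, const double freq[], const int dil[], int map[])
{
    int *logical = malloc(k * sizeof *logical);
    int *physical = malloc(k * sizeof *physical);
    for (int i = 0; i < k; i++) { logical[i] = i; physical[i] = i; }
    g_freq = freq; qsort(logical, k, sizeof *logical, cmp_freq_desc);
    g_dil = dil;   qsort(physical, k, sizeof *physical, cmp_dil_asc);
    for (int i = 0; i < k; i++) map[logical[i]] = physical[i];
    free(logical); free(physical);
}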
5.4 Compound var.-dilation embeddings
The variable-dilation embeddings discussed earlier in this
section are formed by grouping packed Q(n \Gamma 1)'s. In what
follows, we give an example that illustrates how this concept
can also be adopted in the case of smaller packed hyper-
cubes. Moreover, we use the example to demonstrate the
Figure 3: Average dilation of embeddings of Q(k) into S(n), plotted against the dimensionality k of the embedded hypercube.
inherent flexibility that results from combining our multiple-
sized packing and variable-dilation embedding techniques.
We consider the template multiple-sized packing P m presented
in Subsec. 4.6. Although P m is an asymmetric pack-
ing, it uses a partitioning technique that produces groups of
symmetrically packed Q(k)'s [9]. These groups of Q(k)'s can
form variable-dilation embeddings into S(n).
Table
3 lists a few among the many possible transformations
that can be produced by applying a series of variable-
dilation embeddings to P m , for the case tables
can be constructed for other values of n). Quantities marked
with an in Table 3 are obtained via variable-dilation embedding
techniques. Details of how such quantities are computed
are given in [9]. Note that all multiple-sized packings
shown in Table 3 have expansion 1.
Packing
# of Q(4)'s
# of Q(5)'s 144 12 -
# of Q(6)'s 264 72 6
# of Q(7)'s 144 132 36
# of Q(8)'s - 72 66
# of Q(9)'s - 36
# of Q(10)'s -
# of Q(11)'s -
# of Q(12)'s -
# of Q(13)'s -
# of Q(14)'s -
Table 3: Some multiple-sized packings of hypercubes into S(8)
6 Comparison with related work
Research on embedding hypercubes into star graphs was pioneered
by Nigam, Sahni, and Krishnamurthy, who proposed
dilation 2, 3, and 4 embeddings of Q(k) into S(n) [6]. Table
4 lists the largest Q(k)'s that can be embedded into
Table 2: Variable-dilation embeddings of Q(k_b) into S(n), 7 ≤ n ≤ 10, listing for each embedding W its dilation vector d(W) and its average dilation d_avr(W).
with dilation 4, using the techniques of Nigam et al., for 4 ≤ n ≤ 10. These embeddings were chosen because
they produce the smallest expansion ratios among the
embeddings presented in [6]. Also listed in Table 4 are the
corresponding average dilation and expansion produced by
our techniques.
Our variable-dilation embedding techniques are an interesting
alternative to the dilation 4 embeddings of [6]. For
4 ≤ n ≤ 10, we achieve an average dilation ranging from 3.25 to 3.95. Our techniques produce dilation 4 for the cases 4 ≤ n ≤ 7, and dilation 6 for the cases 8 ≤ n ≤ 10. Note also that, for the cases 8 ≤ n ≤ 10, dilation 6 is produced
only along: 1) dimension 13 links of Q(13), 2) dimension 15
and 16 links of Q(16), and links
of Q(19).
Table 4: Comparison with related work: for each embedding, the dilation and expansion of Nigam et al. versus the average dilation and expansion obtained in this paper.
If we consider solely the embedding listed at the left of
each row in Table 4, then clearly the expansion ratios resulting
from the techniques of [6] and the techniques presented in
this paper should be equal. However, as discussed in Secs. 4
and 5, our multiple-sized packings achieve 100% utilization
of S(n), meaning that any node of the star graph not used
in the embeddings listed in Table 4 can still be used to embed
some other hypercube. This certainly allows an efficient
use of the star graph, and hence we compute our expansion
ratios within the context of a packing (see Eq. 1). From this
viewpoint, our optimal expansion (i.e., 1) is always smaller
than that achieved in [6].
7 Conclusion
This paper presented novel techniques for packing hypercubes
into star graphs, which achieve small expansion and
dilation. In particular, the expansion of our multiple-sized
packings is optimal. Variable-dilation embeddings resulting
from connecting packed Q(n \Gamma 1)'s into S(n) demonstrated
the possibility of embedding large hypercubes into the star
graph, with corresponding small expansion while maintaining
a small dilation on the average. Our techniques can
provide the required support for node allocation and task
migration strategies in applications where S(n) must handle
a workload of parallel algorithms originally devised for the
hypercube.
--R
"The Star Graph: An Attractive Alternative to the n-Cube,"
"Hypercube Supercomput- ers,"
Introduction to Parallel Algorithms and Architectures: Arrays
"An Efficient Sorting Algorithm for the Star Graph Interconnection Network,"
"A Parallel Algorithm for Computing Fourier Transforms on the Star Graph,"
"Embed- ding Hamiltonians and Hypercubes in Star Interconnection Networks,"
"Task Allocation in the Star Graph,"
"Embedding Meshes on the Star Graph,"
Spaceand Time-Efficient Packings and Embeddings of Hypercubes into Star Graphs
--TR | routing;graph embeddings;star graphs;interconnection networks;hypercubes |
629443 | An Efficient Dynamic Scheduling Algorithm for Multiprocessor Real-Time Systems. | AbstractMany time-critical applications require predictable performance and tasks in these applications have deadlines to be met. In this paper, we propose an efficient algorithm for nonpreemptive scheduling of dynamically arriving real-time tasks (aperiodic tasks) in multiprocessor systems. A real-time task is characterized by its deadline, resource requirements, and worst case computation time on p processors, where p is the degree of parallelization of the task. We use this parallelism in tasks to meet their deadlines and, thus, obtain better schedulability compared to nonparallelizable task scheduling algorithms. To study the effectiveness of the proposed scheduling algorithm, we have conducted extensive simulation studies and compared its performance with the myopic [8] scheduling algorithm. The simulation studies show that the schedulability of the proposed algorithm is always higher than that of the myopic algorithm for a wide variety of task parameters. | Introduction
Multiprocessors have emerged as a powerful computing means for real-time applications such as avionic
control and nuclear plant control, because of their capability for high performance and reliability. The
problem of multiprocessor scheduling is to determine when and on which processor a given task ex-
ecutes. This can be done either statically or dynamically. In static algorithms, the assignment of tasks
to processors and the time at which the tasks start execution are determined a priori. Static algorithms
This work was supported by the Indian National Science Academy, and the Department of Science and Technology.
[9, 12] are often used to schedule periodic tasks with hard deadlines. The main advantage is that, if a
solution is found, then one can be sure that all deadlines will be guaranteed. However, this approach is
not applicable to aperiodic tasks whose characteristics are not known a priori. Scheduling such tasks in
a multiprocessor real-time system requires dynamic scheduling algorithms. In dynamic scheduling [3, 8],
when new tasks arrive, the scheduler dynamically determines the feasibility of scheduling these new tasks
without jeopardizing the guarantees that have been provided for the previously scheduled tasks. Thus
for predictable executions, schedulability analysis must be done before a task's execution is begun. A
feasible schedule is generated if the timing and resource constraints of all the tasks can be satisfied, i.e.,
if the schedulability analysis is successful. Tasks are dispatched according to this feasible schedule.
Fig.1 Parallel execution of scheduler and processors: newly arriving tasks enter a task queue feeding the scheduler, which maintains the current schedule and keeps each processor's dispatch queue filled to a minimum length.
Dynamic scheduling algorithms can be either distributed or centralized. In a distributed dynamic
scheduling scheme, tasks arrive independently at each processor. When a task arrives at a processor,
the local scheduler at the processor determines whether or not it can satisfy the constraints of the
incoming task. The task is accepted if they can be satisfied, otherwise the local scheduler tries to find
another processor which can accept the task. In a centralized scheme, all the tasks arrive at a central
processor called the scheduler, from where they are distributed to other processors in the system for
execution. In this paper, we will assume a centralized scheduling scheme. The communication between
the scheduler and the processors is through dispatch queues. Each processor has its own dispatch queue.
This organization, shown in Fig.1, ensures that the processors will always find some tasks in the dispatch
queues when they finish the execution of their current tasks. The scheduler will be running in parallel
with the processors, scheduling the newly arriving tasks, and periodically updating the dispatch queues.
The scheduler has to ensure that the dispatch queues are always filled to their minimum capacity (if
there are tasks left with it) for this parallel operation. This minimum capacity depends on the average
time required by the scheduler to reschedule its tasks upon the arrival of a new task [10].
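A minimal sketch of this top-up rule follows (hypothetical code, not from the paper; min_len would be derived from the average rescheduling time of [10] and the mean task execution time):

#define MAX_Q 64
typedef struct { int task_id[MAX_Q]; int len; } dispatch_queue;

/* feasible[p] holds task ids of the current feasible schedule for processor p,
   in dispatch order; next[p] indexes the first not-yet-dispatched entry.     */
void top_up(dispatch_queue dq[], int m, int *feasible[], const int count[],
            int next[], int min_len)
{
    for (int p = 0; p < m; p++)
        while (dq[p].len < min_len && next[p] < count[p])
            dq[p].task_id[dq[p].len++] = feasible[p][next[p]++];
}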
It was shown in [3] that there does not exist an algorithm for optimally scheduling dynamically
arriving tasks with or without mutual exclusion constraints on a multiprocessor system. These negative
results motivated the need for heuristic approaches for solving the scheduling problem. Recently, many
heuristic scheduling algorithms [8, 13] have been proposed to dynamically schedule a set of tasks with
computation times, deadlines, and resource requirements. In [13], it was shown for uniprocessor systems
that a simple heuristic which accounts for resource requirements significantly outperforms heuristics,
such as scheduling based on Earliest Deadline First (EDF), which ignore resource requirements. For
multiprocessor systems with resource constrained tasks, a heuristic search algorithm, called myopic
scheduling algorithm, was proposed in [8]. The authors of [8] have shown that an integrated heuristic
which is a function of deadline and earliest start time of a task performs better than simple heuristics
such as the EDF, least laxity first, and minimum processing time first.
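The integrated heuristic referred to here is commonly written as a weighted linear combination of the two quantities, H(T) = d_T + W · EST(T); a minimal sketch of such an evaluation (smaller values indicate more urgent tasks, and est is assumed to already account for processor and resource availability) is:

/* Integrated heuristic value of a task: deadline plus weighted earliest start time. */
double integrated_heuristic(double deadline, double est, double W)
{
    return deadline + W * est;
}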
Meeting deadlines and achieving high resource utilization are the two main goals of task scheduling
in real-time systems. Both preemptive [12] and non-preemptive algorithms [8, 9] are available in the
literature to satisfy such requirements. The schedulability of a preemptive algorithm is always higher
than its non-preemptive version. However, the higher schedulability of a preemptive algorithm has to
be obtained at the cost of higher scheduling overhead. Parallelizable task scheduling, considered in this
paper, is an intermediate solution which tries to meet the conflicting requirements of high schedulability
and low overhead. Most of the known scheduling algorithms [8, 9, 12] consider that each task can be
executed on a single processor only. This may result in tasks missing their deadlines due to poor processor utilization. Moreover, tasks would miss their deadlines when their total computation time requirement is more than the deadline. These are the motivating factors for adopting parallelizable task scheduling.
However, in our simulation study, we assume that the computation times of tasks are less than their
deadlines in order to compare the results with a non-parallelizable task scheduling algorithm.
The parallelizable task scheduling problem has been studied earlier for non-real-time systems [2, 4, 11]
and is proved to be NP-complete. Many researchers, assuming sublinear task speedups due to inter-processor
communication overhead, have proposed approximation algorithms [2, 4] for the problem. A
heuristic algorithm for precedence constrained tasks with linear speedup assumption is reported in [11].
In real-time systems tasks have additional constraints, namely, ready times and deadlines. This makes
the real-time scheduling problem harder than non-real-time scheduling. In [5], with linear overhead
assumption, an optimal pseudo-polynomial time algorithm is proposed to schedule imprecise computational
tasks in real-time systems. This solution is for static scheduling and cannot be applied to the
dynamic case considered in this paper. In [1], algorithms for scheduling real-time tasks on a partitionable
hypercube multiprocessor are proposed. Here the degree of parallelization of a task is not determined
by the scheduler, rather it is specified as part of the requirements of the task itself, i.e., the scheduler
cannot change the degree of parallelization of a task for meeting its deadline. In [7], a parallelizable
task scheduling algorithm, for dynamically scheduling independent real-time tasks on multiprocessors
is proposed. This work assumes the tasks to be periodic. Most importantly, the algorithms reported
in [1, 5, 7] do not consider resource constraints among tasks which is a practical requirement in any
complex real-time system.
Parallelizable real-time task scheduling has wide applicability in problems such as robot arm dynamics
and image processing. For example, the robot arm dynamics problem consists of two computational
modules: computation of the dynamics and the solution of a linear system of equations, both of them
exhibit high degree of parallelism and have real-time constraints [14]. Similarly, a typical real-time image
processing application involves pixel-level operations such as convolution which can be carried out in
parallel on different portions of the image, and operations in a task such as matching, grouping, and
splitting of objects can also be done in parallel.
The objective of our work is to propose a dynamic scheduling algorithm which exploits parallelism
in tasks in order to meet their deadlines thereby increasing the performance of the system. The rest of
the paper is structured as follows: The task model and definitions are stated in Section 2. In Section
3, we present our dynamic scheduling algorithm and in Section 4, we evaluate its performance through
simulation studies. Finally, in Section 5, some concluding remarks are made.
2 Basics
2.1 Task Model
We assume that the real-time system consists of m processors, where m > 1.
1. Each task T i is aperiodic and is characterized by its arrival time (a i ), ready time (r i ), worst case
computation time (c j i ), and deadline (d i ). c j i is the worst case computation time of T i , which is the
upper bound on the computation time, when run on j processors in parallel, where 1 ≤ j ≤ m.
2. A task might need some resources such as data structures, variables, and communication buffers
for its execution. Every task can have two types of accesses to a resource: (a) exclusive access, in
which case, no other task can use the resource with it or (b) shared access, in which case, it can
share the resource with another task (the other task also should be willing to share the resource).
Resource conflict exists between two tasks T i and T j if both of them require the same resource and
one of the accesses is exclusive.
3. When a task is parallelized, all its parallel subtasks, also called split tasks, have to start at the
same time in order to synchronize their executions.
4. Tasks are non-preemptable, i.e., once a task or a split task starts execution, it runs to
completion.
5. For each task T i , the worst case computation times satisfy j · c j i ≥ k · c k i for any j and k with
j > k. This is called the sublinear speedup assumption and the sublinearity is due to the overheads
associated with communication and synchronization among the split tasks of a task.
2.2 Terminology
Definition 1: A task is feasible in a schedule if its timing constraint and resource requirements are met
in the schedule. A schedule for a set of tasks is said to be a feasible schedule if all the tasks are feasible
in the schedule.
Definition 2: A partial schedule is a feasible schedule for a subset of tasks. A partial schedule is said
to be strongly feasible if all the schedules obtained by extending the current schedule by any one of the
remaining tasks are also feasible [8].
Definition 3: EAT k s (EAT k e ) is the earliest time when resource R k becomes available for shared (or
exclusive) usage [8].
Definition 4: Let P be the set of processors, and R i be the set of resources requested by task T i . Earliest
start time of a task T i , denoted as EST(T i ), is the earliest time when its execution can be started, defined
as: EST(T i ) = max( r i , min P j ∈ P (avail time(j)), max R k ∈ R i (EAT k u ) ), where avail time(j) denotes the
earliest time at which the processor P j becomes available for executing a task, and the third term denotes
the maximum among the earliest available times of the resources requested by task T i , in which u = s
for shared mode and u = e for exclusive mode.
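For illustration only, the following Python sketch shows one way to evaluate EST(T i ) as in Definition 4; the data layout (a list of processor-available times and per-resource shared/exclusive available times) and all names are assumptions of this sketch, not part of the model.

def earliest_start_time(ready_time, avail_time, requested):
    # requested: list of (eat_shared, eat_exclusive, mode) triples, one per
    # resource in R_i; mode is 's' for shared access, 'e' for exclusive access.
    proc_term = min(avail_time)                  # earliest available processor
    res_term = 0
    for eat_s, eat_e, mode in requested:
        res_term = max(res_term, eat_s if mode == 's' else eat_e)
    return max(ready_time, proc_term, res_term)

# Task ready at t=5, processors free at 4, 9, 7, and one resource needed
# exclusively that becomes free for exclusive use only at t=12:
print(earliest_start_time(5, [4, 9, 7], [(3, 12, 'e')]))   # prints 12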
3 The Dynamic Task Scheduling Algorithm
In this section, we present our dynamic scheduling algorithm which exploits parallelism in tasks to meet
their deadlines. In the context of this paper, a dynamic scheduling algorithm has complete knowledge about
the currently active set of tasks, but not about the new tasks that may arrive while scheduling the current
set. The proposed parallelizable task scheduling algorithm is a variant of the myopic algorithm proposed in
[8]. The myopic algorithm is a heuristic search algorithm that schedules dynamically arriving real-time
tasks with resource constraints. A vertex in the search tree represents a partial schedule. The schedule
from a vertex is extended only if the vertex is strongly feasible. The strong feasibility is checked for the
first K tasks in the current task queue, which is also known as the feasibility check window. The larger the size
of the feasibility check window, the higher the scheduling cost and the greater the look-ahead nature.
The proposed scheduling algorithm is similar to the myopic algorithm except that it parallelizes a
task whenever its deadline cannot be met, and has the same scheduling cost as the myopic algorithm. The
degree of parallelization (i.e., the number of split tasks) of a task is chosen in a way that the task's
deadline is just met. For scheduling a task, the processor(s) and the resource(s) which have the minimum
earliest available time are selected. The scheduling cost of the parallelizable task scheduling algorithm
is made equal to that of the myopic algorithm by performing the feasibility check for only K' tasks, where K' ≤ K,
as compared to K in the myopic algorithm. The value of K' depends on the number of tasks parallelized
and their degrees of parallelization, i.e., the feasibility check is done till the sum of the degrees of parallelization
of these tasks reaches K. In other words, in the parallelizable task scheduling algorithm, the
number of tasks checked for feasibility is less than or equal to the size of the feasibility check window
(K). In the worst case, if none of the tasks need to be parallelized, the parallelizable task scheduling
algorithm behaves like the myopic algorithm, in which case K' = K.
algorithm for scheduling a set of currently active tasks is given below:
Parallelizable Task Scheduling(K, max-split) /* K: size of feasibility check window,
max-split: maximum degree of parallelization of a task; both are input parameters. */
begin
1. Order the tasks (in the task queue) in non-decreasing order of their deadlines and then start with an empty partial
schedule.
2. Determine whether the current vertex (schedule) is strongly feasible by performing the feasibility check for K or fewer
tasks in the feasibility check window as given below:
• Let K' be the count of the number of tasks for which the feasibility check has been done.
• Let T i be the (K'+1)-th task in the current task queue.
• Let num-split be the maximum degree of parallelization permitted for the current task T i .
• Let cost be the sum of the degrees of parallelization over all the K' tasks for which the feasibility check has been done
so far.
(a) feasible = TRUE.
(b) While (feasible is TRUE)
i. If (K − cost < num-split) then num-split = K − cost.
ii. Compute EST(T i ).
iii. Find the smallest j (1 ≤ j ≤ num-split) such that EST(T i ) + c j i ≤ d i .
iv. If (such j exists) then cost = cost + j, K' = K' + 1, and consider the next task in the window.
v. Else if (num-split < max-split) then break.
vi. Else feasible = FALSE.
3. If (feasible is TRUE) then
(a) Compute the heuristic function (H) for the first K tasks, where H(T k ) = d k + W ∗ EST(T k ).
(b) Extend the schedule by the task having the best (smallest) H value.
4. Else
(a) Backtrack to the previous search level.
(b) Extend the schedule by the task having the next best H value.
5. Move the feasibility check window by one task.
6. Repeat steps (2-5) until termination condition is met.
end.
The termination conditions are either (a) a complete feasible schedule has been found, or (b) maximum
number of backtracks or H function evaluations has been reached, or (c) no more backtracking is possible.
The heuristic function H(T k ) = d k + W ∗ EST(T k ) is an integrated heuristic which captures the deadline and
resource requirements of task T k , where W is a constant which is an input parameter. The algorithm
backtracks only when the deadline of a task cannot be met using any degree of parallelization up to
max-split. The time complexity of the proposed algorithm for scheduling n tasks is O(Kn), which is
the same as that of the myopic algorithm.
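For illustration, a Python sketch of the window feasibility check with the parallelization budget K is given below; the task record layout and the helper functions est and comp are assumptions of this sketch rather than the authors' implementation.

def check_window(tasks, K, max_split, est, comp):
    # tasks: the tasks inside the feasibility check window, in deadline order.
    # est(t) returns EST(T_i); comp(t, j) returns c_i^j (worst case time on j processors).
    # Returns (feasible, chosen): chosen maps a task id to the smallest degree of
    # parallelization j with which the task just meets its deadline.
    cost, chosen = 0, {}
    for t in tasks:
        num_split = min(max_split, K - cost)    # degree permitted for this task
        if num_split <= 0:
            break                               # window budget K exhausted
        j = next((j for j in range(1, num_split + 1)
                  if est(t) + comp(t, j) <= t['deadline']), None)
        if j is not None:
            chosen[t['id']] = j
            cost += j                           # feasibility-check "cost" used so far
        elif num_split < max_split:
            break                               # limited by the budget, not by max_split
        else:
            return False, chosen                # infeasible even with max_split split tasks
    return True, chosen

# Hypothetical two-task window, both ready at time 0, with c^1 = 12 and
# c^j = ceil(12/j) + 1 for j >= 2:
window = [{'id': 1, 'deadline': 8}, {'id': 2, 'deadline': 20}]
print(check_window(window, K=4, max_split=2,
                   est=lambda t: 0,
                   comp=lambda t, j: 12 if j == 1 else -(-12 // j) + 1))
# prints (True, {1: 2, 2: 1})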
Fig.2b is a feasible schedule produced by the parallelizable task scheduling algorithm for the task set
given in fig.2a with 4 processors and 1 resource having 2 instances. The arrival time (a i ) of all the tasks
in Fig.2a is 0. For this, the input values for K, W, number of backtracks, and max-split are taken as 4,
1, 1, and 2, respectively. Fig.2c and Fig.2d show the search tree constructed by the myopic scheduling
algorithm for the task set given in Fig.2a. The myopic algorithm is unable to produce a feasible schedule
for this task set, whereas the proposed algorithm is able to produce a feasible schedule for the same task
set. The search tree constructed by the proposed algorithm is given in Fig.2c and Fig.2e. In Fig.2b, the
tasks T 11 and T 13 are parallelized and scheduled on processors P 2 and P 4 , and P 1 and P 3 , respectively.
Fig.2a An example of real-time task set
Fig.2b Feasible schedule produced by the parallelizable task scheduling algorithm
In Fig.2c-2e, each node of the search tree is represented by two boxes: the left box shows the
earliest available time of processors for executing a new task, and the right box which has two entries
(separated by a comma) correspond to earliest available time of resource instances with one entry per
resource instance. In each entry, the value within (without) parenthesis indicates the available time of
that particular resource instance in exclusive (shared) mode. For example, entry such as 0(30),34(34)
indicates that the first instance of the resource is available for shared mode at time 0 and for exclusive
mode at time 30, and the second instance of the resource is available for shared and exclusive mode at
time 34. In Fig.2c-2e, the forward arcs correspond to extending the schedule, whereas the backward arcs
correspond to backtracking. The label T a (b) on a forward arc denotes that the task T a is scheduled on
processor P b . For example, T 13 (3, 1) denotes that the task T 13 is parallelized and scheduled on processors
P 3 and P 1 .
Fig.2c Portion of the search tree common to both the algorithms
Fig.2d Portion of the search tree specific to myopic algorithm
Fig.2e Portion of the search tree specific to parallelizable task scheduling algorithm
For illustrating the working of the myopic algorithm, consider the first vertex of Fig.2d. At that point,
tasks T 10 , T 11 , T 12 , and T 13 are in the feasibility check window. The EST of T 10 allows it to meet its deadline, i.e., T 10 is feasible.
Similarly the other three tasks are also feasible. Therefore, the current schedule is
strongly feasible. The heuristic values for these four tasks are 58, 61, 61, and 64, respectively. The best
task is T 10 and the schedule is extended by scheduling it on P 3 . The new vertex thus obtained is not
strongly feasible because T 11 is not feasible, hence the algorithm backtracks to the previous vertex, and
extends the schedule from there using the next best task T 11 .
For parallelizable task scheduling, consider the vertex after scheduling T 10 in Fig.2e. The feasibility
check window size will be 3 since only three tasks are to be scheduled. Out of these three, only T 11
(with degree of parallelization 2) and T 12 are checked for feasibility, since checking the feasibility of T 13 would exceed K.
Therefore, the schedule is extended by scheduling T 12 on P 1 . Now,
both T 11 and T 13 have the same H value. First, the schedule is extended by
T 11 by parallelizing it and scheduling its split tasks on P 2 and P 4 . Finally, T 13 is scheduled. Note that,
all the tasks are feasible in the schedule.
4 Simulation Studies
To study the effectiveness of task parallelization in meeting tasks' deadlines, we have conducted
extensive simulation studies. Here, we are interested in whether or not all the tasks in a task set can
finish before their deadlines. Therefore, the most appropriate metric is the schedulability of task sets
[8], called success ratio, which is defined as the ratio of the number of task sets found schedulable (by a
scheduling algorithm) to the number of task sets considered for scheduling. The parameters used in the
simulation studies are given in Fig.3. Schedulable task sets are generated for simulation using the
following approach.
1. Tasks (of a task set) are generated till schedule length, which is an input parameter, with no idle
time in the processors, as described in [8]. The computation time c 1
i of a task T i is chosen
randomly between MIN C and MAX C.
2. The deadline of a task T i is randomly chosen in the range SC and (1 +R) SC, where SC is the
shortest completion time of the task set generated in the previous step.
3. The resource requirements of a task are generated based on the input parameters USeP and
ShareP.
4. The computation time c j i of a task T i when executed on j processors, j ≥ 2, is taken as
c 1 i /j + 1, rounded up to the nearest integer (see the sketch below). For example, when c 1 i = 12,
the computation times c 2 i , c 3 i , and c 4 i are 7, 5,
and 4, respectively.
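A minimal Python sketch of this speedup rule, assuming the reconstruction c j i = ⌈c 1 i /j⌉ + 1 used above, is given below; it also checks that the rule satisfies the sublinear speedup assumption of Section 2.

import math

def parallel_comp_time(c1, j):
    # Worst case computation time on j processors under the assumed rule:
    # c_i^1 is given; c_i^j = ceil(c_i^1 / j) + 1 for j >= 2.
    return c1 if j == 1 else math.ceil(c1 / j) + 1

c1 = 12
print([parallel_comp_time(c1, j) for j in range(1, 5)])     # [12, 7, 5, 4]
# Sublinear speedup check: j * c^j never decreases as j grows.
print(all(j * parallel_comp_time(c1, j) >= k * parallel_comp_time(c1, k)
          for j in range(1, 5) for k in range(1, j)))       # True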
Each point in the performance curves (Figs.4-8) is the average of 5 simulation runs each with 200 task
sets. Each task set contains approximately 175 to 200 tasks by fixing the schedule length to 800 during
the task set generation. For all the simulation runs, the number of instances of every resource is taken
as 2.
Figs.4-8 show the success ratio obtained by varying R, W, UseP, num-btrk, and K, respectively. When
max-split is 1, the task is considered to be non-parallelizable and the algorithm behaves like the
myopic algorithm. Note that the scheduling costs for different values of max-split are equal. This is
achieved by making the number of tasks checked for feasibility (K') a variable, as discussed in the
previous section, i.e., when max-split=1, K' = K. From Figs.4-8, it is
interesting to note that an increase in the degree of parallelization increases the success ratio for the
speedup function used.
parameter explanation
MIN C minimum computation time of tasks, taken as 30.
MAX C maximum computation time of tasks, taken as 60.
R laxity parameter denotes the tightness of the deadline.
UseP probability that a task uses a resource.
ShareP probability that a task uses a resource in shared mode, taken as 0.5.
K size of feasibility check window.
W weightage given to EST(T i ) for H calculation.
num-btrk number of backtracks permitted in the search.
num-proc number of processors considered for simulation.
num-res number of resource types considered for simulation.
max-split maximum degree of parallelization of a task.
Fig.3 Simulation parameters
4.1 Effect of Laxity Parameter
Fig.4 shows the effect on success ratio by the laxity parameter (R). This helps in investigating the
sensitivity of task parallelization on varying laxities. From Fig.4, it is clear that lower values of
max-split are more sensitive to changes in R than higher values of max-split. For example, the success
ratio offered by max-split=1 varies from 47.2%-99.4%, a much wider variation than that in the success ratio
offered by max-split=4. This is due to the fact that tasks experience a higher degree of
parallelization (in order to meet their deadlines) when their laxities (deadlines) are tight, but the same
tasks with higher laxities rarely need parallelization since their deadlines can be met without
parallelizing them. This shows that task parallelization is more effective for tasks having tighter laxities.
4.2 Effect of W Parameter
The sensitivity of the integrated heuristic for various degrees of task parallelization is studied in Fig.5.
The effect of W for different values of max-split offers similar trend as the success ratio increases
initially with increasing W and saturates for larger values of W. Increasing W beyond 6.0 would
decrease the success ratio (which is not shown in Fig.5). This is because when W is very large,
the integrated heuristic behaves like a simple heuristic which takes care of only the availability of
processors and resources ignoring task's deadline. Similarly, when W=0, the success ratio would be
very poor as the integrated heuristic reduces to EDF which is also a simple heuristic.
Fig.4 Effect of R parameter
Fig.5 Effect of W parameter
4.3 Effect of Resource Usage
The effect of resource usage on success ratio is shown in Fig.6 by fixing R , K, num-btrk, and ShareP
values as 0.09, 7, 10, and 0.5, respectively. From Fig.6, we observe that the success ratio decreases with
increasing UseP. This is due to more resource conflicts among tasks which make the value of EST(T i ),
for task T i , decided by the availability of required resources rather than by the availability of processors
and ready time of the task T i . For lower values of resource usage (UseP), the difference between the
success ratios offered by max-split=4 and max-split=1 is smaller than at higher values of UseP.
This shows that task parallelization is more effective when the resource constraints among tasks are
high. Another study (not shown) is when UseP is fixed and ShareP is varied, in which the success ratio
increases with increasing ShareP.
4.4 Effect of Number of Backtracks
In Fig.7, the impact of number of backtracks on the success ratio is plotted for various values of
max-split. From the plots, it is interesting to note that the success ratio does not improve significantly
with increasing values of num-btrk for all values of max-split. This clearly motivates the need for
finding techniques which increase the success ratio with increasing scheduling cost by fixing the number
of backtracks. The exploitation of parallelism in a task proposed in this paper is one such technique as
demonstrated by our simulation results for different values of max-split.
Fig.6 Effect of resource usage probability
Fig.7 Effect of number of backtracks
4.5 Effect of Size of the Feasibility Check Window
Fig.8 shows the effect of varying feasibility check window (K) on success ratio for different values of
max-split. Note that for larger values of K, the algorithm has more look ahead nature, and the number
of H function evaluations in a feasibility window is more which also means an increase in scheduling
cost. Increasing the size of the feasibility check window (K) increases the success ratio for all values of
max-split. This effect is more pronounced for lower values of max-split, i.e., larger values of max-split are less
sensitive to changes in K. This indicates that more task sets can be feasibly scheduled by allowing task
parallelization than by non-parallelizable task scheduling for the same scheduling cost.
5 Conclusions
Meeting deadlines and achieving high resource utilization are the two main goals of task scheduling in
real-time systems. Both preemptive and non-preemptive algorithms are available in the literature to
satisfy such requirements. The schedulability of a preemptive algorithm is always higher than its
non-preemptive version. However, the higher schedulability of a preemptive algorithm has to be
obtained at the cost of higher scheduling overhead. Parallelizable task scheduling, considered in this
paper, is an intermediate solution which tries to meet the conflicting requirements of high
schedulability and low overhead. In this paper, we have proposed a new algorithm based on
parallelizable task model for dynamic scheduling of tasks in real-time multiprocessor systems. We have
demonstrated through simulation that task parallelization is a useful concept for achieving better
schedulability than allowing a larger number of backtracks without parallelization. The simulation studies
show that the success ratio offered by our algorithm is always higher than that of the myopic algorithm
for a wide variety of task parameters. From the simulation studies, the following inferences are drawn.
• Task parallelization is more effective when tasks have tighter laxities and when resource
constraints among tasks are high. The sensitivity of W for different values of task parallelization
offers similar trend. Increasing the size of the feasibility check window (K) increases the success
ratio for all values of max-split.
• The impact of the number of backtracks on the success ratio is less significant compared to the other
parameters. This clearly indicates the need for spending the scheduling cost on task
parallelization rather than on backtracking.
Fig.8a Effect of K with 8 processors
Fig.8b Effect of K with processors
Resource reclaiming from a schedule consisting of parallelizable real-time tasks is possible when the
actual computation times of tasks are less than their worst case computation times. The resource
reclaiming algorithms proposed in [6, 10] cannot be applied here because these algorithms are ignorant
of the fact that some of the tasks are parallelized and scheduled on more than one processor. Therefore,
the problem of resource reclaiming from parallelized tasks needs further research.
--R
"On-line hard real-time scheduling of parallel tasks on partitionable multiprocessors,"
"Approximate algorithms for the partitionable independent task scheduling problem,"
"Multiprocessor on-line scheduling of hard real-time tasks,"
"An approximate algorithm for scheduling tasks on varying partition sizes in partitionable multiprocessor systems,"
"Algorithms for scheduling imprecise computations,"
"A new approach for scheduling of parallelizable tasks in real-time multiprocessor systems,"
"Efficient scheduling algorithms for real-time multiprocessor systems,"
"Allocation and scheduling of precedence-related periodic tasks,"
"Resource reclaiming in multiprocessor real-time systems,"
"A heuristic of scheduling parallel tasks and its analysis,"
"Scheduling processes with release times, deadlines, precedence, and exclusion relations,"
"Scheduling tasks with resource requirements in hard real-time systems,"
"Parallel processing for real-time simulation: a case study,"
--TR
--CTR
Wei Sun , Chen Yu , Xavier Defago , Yuanyuan Zhang , Yasushi Inoguchi, Real-time Task Scheduling Using Extended Overloading Technique for Multiprocessor Systems, Proceedings of the 11th IEEE International Symposium on Distributed Simulation and Real-Time Applications, p.95-102, October 22-26, 2007
Mohammed I. Alghamdi , Tao Xie , Xiao Qin, PARM: a power-aware message scheduling algorithm for real-time wireless networks, Proceedings of the 1st ACM workshop on Wireless multimedia networking and performance modeling, October 13-13, 2005, Montreal, Quebec, Canada
G. Manimaran , Shashidhar Merugu , Anand Manikutty , C. Siva Ram Murthy, Integrated scheduling of tasks and messages in distributed real-time systems, Engineering of distributed control systems, Nova Science Publishers, Inc., Commack, NY, 2001
G. Manimaran , C. Siva Ram Murthy, A Fault-Tolerant Dynamic Scheduling Algorithm for Multiprocessor Real-Time Systems and Its Analysis, IEEE Transactions on Parallel and Distributed Systems, v.9 n.11, p.1137-1152, November 1998
R. Al-Omari , Arun K. Somani , G. Manimaran, Efficient overloading techniques for primary-backup scheduling in real-time systems, Journal of Parallel and Distributed Computing, v.64 n.5, p.629-648, May 2004
Wan Yeon Lee , Sung Je Hong , Jong Kim, On-line scheduling of scalable real-time tasks on multiprocessor systems, Journal of Parallel and Distributed Computing, v.63 n.12, p.1315-1324, December
Xiao Qin , Hong Jiang, A dynamic and reliability-driven scheduling algorithm for parallel real-time jobs executing on heterogeneous clusters, Journal of Parallel and Distributed Computing, v.65 n.8, p.885-900, August 2005 | real-time systems;multiprocessor;dynamic scheduling;parallelizable tasks;resource constraints |
629444 | On Supernode Transformation with Minimized Total Running Time. | AbstractWith the objective of minimizing the total execution time of a parallel program on a distributed memory parallel computer, this paper discusses how to find an optimal supernode size and optimal supernode relative side lengths of a supernode transformation (also known as tiling). We identify three parameters of supernode transformation: supernode size, relative side lengths, and cutting hyperplane directions. For algorithms with perfectly nested loops and uniform dependencies, for sufficiently large supernodes and number of processors, and for the case where multiple supernodes are mapped to a single processor, we give an order n polynomial whose real positive roots include the optimal supernode size. For two special cases, 1) two-dimensional algorithm problems and 2) n-dimensional algorithm problems, where the communication cost is dominated by the startup penalty and, therefore, can be approximated by a constant, we give a closed form expression for the optimal supernode size, which is independent of the supernode relative side lengths and cutting hyperplanes. For the case where the algorithm iteration index space and the supernodes are hyperrectangular, we give closed form expressions for the optimal supernode relative side lengths. Our experiment shows a good match of the closed form expressions with experimental data. | Introduction
Supernode partitioning is a transformation technique that groups a number of iterations
in a nested loop in order to reduce the communication startup cost. This paper
addresses the problem of finding the optimal grain size and shape of the supernode
transformation so that the total running time, which is the sum of communication
time and computation time, is minimized.
A problem in distributed memory parallel systems is the communication startup
cost, the time it takes a message to reach transmission media from the moment of its initiation.
* This research was supported in part by the National Science Foundation under Grant CCR-9502889
and by the Clare Boothe Luce Assistant Professorship from Henry Luce Foundation.
† Supported by HaL Computer Systems, 1315 Dell Ave., Campbell, CA 95008.
The communication startup cost is usually orders of magnitude greater
than the time to transmit a message across transmission media or to compute data in
a message. Supernode transformation has been studied [4, 5, 6, 7, 8, 9, 15] to reduce
the number of messages sent between processors by grouping multiple iterations into
supernodes. 1 After the supernode transformation, several iterations are grouped into
one supernode and this supernode is assigned to a processor as a unit for execution.
The data of the iterations in the same supernode, which need to be sent to another
processor, are grouped as a single message so that the number of communication
startups is reduced from the number of iterations in a supernode to one. A supernode
transformation is characterized by the hyperplanes which slice the iteration index
space into parallelepiped supernodes, the grain size of the supernode and the relative
lengths of the sides of a supernode. All the three factors mentioned above affect the
total running time. A larger grain size reduces communication startup cost more,
but may delay the computation of other processors waiting for the message. Also,
a square supernode may not be as good as a rectangular supernode with the same
grain size. In this paper, how to find an optimal grain size and an optimal relative
side length vector, or an optimal shape of a supernode is addressed.
Algorithms considered in this paper are nested loops with uniform dependences
[12]. Such an algorithm can be described by its iteration index space consisting of all
iteration index vectors of the loop nest and a dependence matrix consisting of all uniform
dependence vectors as its columns. Two communication models are considered.
In the one parameter communication model, communication cost is approximated by
a constant startup penalty and in the two parameter model, communication cost is
a function of the startup penalty and the message size. The first model can be used
for the case where the startup penalty dominates the communication cost and the
second model is for the case where the message size is too large to be ignored.
The approach in this paper is as follows. Unlike other related work where a supernode
transformation is specified by n side lengths of the parallelepiped supernode, and
the n partitioning hyperplanes, a supernode transformation in this paper is specified
by a grain size g of a supernode, the relative side length vector R which describes the
side lengths of a supernode relative to the supernode size, and n partitioning hyper-planes
described by matrix H which contains the normal vectors of the n independent
hyperplanes as rows. This approach allows us, for given partitioning hyperplanes, to
find the optimal grain size and the optimal shape separately in our formulation.
Based on our formulation, we derived a closed form analytical expression for the optimal
supernode size for the one parameter model and the two parameter model with
doubly nested loops. A closed form expression for the optimal relative length vector
is also provided for the one parameter model with constant bounded loop iteration
space.
The rest of the paper is organized as follows. Section 2 presents necessary definitions,
assumptions, terminology, and models. Section 3 discusses a general model
of the total running time and how its components depend on different parameters.
1 Supernode transformation is conceptually similar to LSGP (Locally Sequential Globally Parallel) partitioning
[4, 7, 8, 9, 11, 15], where a group of iterations is assigned to a single processing element for sequential
execution while different processing elements execute respectively assigned computations in parallel.
Section 4 briefly presents our results on how to find the optimal grain size and shape
of a supernode transformation assuming one parameter communication model. Section
5 discusses how to find the optimal supernode size assuming two parameter
communication model where communication cost is modeled as an affine function of
the amount of data to be transferred. Section 6 briefly describes related work and
the contribution of this work compared to previous work. Section 7 concludes this
paper.
2 Basic definitions, models and assumptions
In a distributed memory parallel computer, each processor has access only to its
local memory and is capable of communicating with other processors by passing
messages. The cost of sending a message can be modeled by t s + b t t , where t s is the
startup time, b is the amount of data to be transmitted, and t t is the transmission
rate, i.e., the transmission time per unit data. The computation speed of a single
processor in a distributed memory parallel computer is characterized by the time it
takes to compute a single iteration of a nested loop. This parameter is denoted by
t c .
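As a simple illustration of why grouping messages pays off under this model, the following Python sketch compares sending one message per iteration with sending one grouped message per supernode; the numeric values are illustrative only.

def message_cost(t_s, t_t, nbytes):
    # One message under the linear model: startup time plus transmission time.
    return t_s + t_t * nbytes

def grouped_vs_ungrouped(t_s, t_t, g, bytes_per_iter):
    ungrouped = g * message_cost(t_s, t_t, bytes_per_iter)  # one message per iteration
    grouped = message_cost(t_s, t_t, g * bytes_per_iter)    # one message per supernode
    return ungrouped, grouped

# Illustrative values only (microseconds): t_s = 300, t_t = 0.1 per byte,
# 8-byte data items, 16 iterations grouped into one supernode.
print(grouped_vs_ungrouped(300.0, 0.1, 16, 8))              # roughly (4812.8, 312.8)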
We consider algorithms which consist of a single nested loop with uniform dependences
[12]. Such algorithms can be described by a pair (J; D), where J is an iteration
index space and D is an n × m dependence matrix. Each column in the dependence
matrix represents a dependence vector. We assume that m ≥ n, matrix D has full
rank (which is equal to the number of loop nests n), and the determinant of the Smith
normal form of D is equal to one. As discussed in [13], if the above assumptions are
not satisfied, then the iteration index space J contains independent components and
can be partitioned into several independent sub-algorithms with the above assumptions
satisfied. Furthermore, we assume that only true loop carried dependences [14]
are included in matrix D since only those dependences cause communication, while
other types of dependences can be eliminated using known transformation techniques
(e.g. variable renaming).
In a supernode transformation, the iteration space is sliced by n independent families
of parallel equidistant hyperplanes. These hyperplanes partition the iteration
index space into n-dimensional parallelepiped supernodes. Hyperplanes can be defined
by the normal vectors orthogonal to each of the hyperplanes. The n × n matrix
containing the n normal vectors as rows is denoted by H which is of full rank because
these hyperplanes are independent. Supernodes can be defined by matrix H
and distances between adjacent parallel hyperplanes. Let l i be the distance of two
adjacent hyperplanes with normal vector H i , and let L = (l 1 , l 2 , ..., l n ) be the supernode side length vector.
The grain size, or supernode volume, denoted by g, is defined as
the number of iterations in one supernode. The supernode volume and the lengths l i
are related as follows:
g = γ l 1 l 2 · · · l n ,
where the factor γ depends on the angles between the hyperplanes. For example, when
supernodes are cubes, γ = 1. We also define the supernode relative side length vector,
R = (R 1 , R 2 , ..., R n ), where R i = l i / g^(1/n) , and clearly
R 1 R 2 · · · R n = 1/γ.
Vector R describes side lengths of a supernode relative to the supernode size. For
example, if R = (1, 1), then the supernode is a square. A supernode transformation
is completely specified by H, R, and g, and denoted by (H; R; g).
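For rectangular supernodes (orthogonal hyperplanes), the following Python sketch illustrates the correspondence between (g, R) and the side length vector L; it is an illustration of the definitions above, assuming the relation l i = R i g^(1/n).

def side_lengths(g, R):
    # l_i = R_i * g^(1/n) for rectangular supernodes (orthogonal hyperplanes).
    n = len(R)
    return [r * g ** (1.0 / n) for r in R]

def grain_size(L):
    # For rectangular supernodes, g is the product of the side lengths.
    prod = 1.0
    for l in L:
        prod *= l
    return prod

print(side_lengths(4, [1.0, 1.0]))              # [2.0, 2.0] -- a 2 x 2 square supernode
print(side_lengths(30, [2 ** -0.5, 2 ** 0.5]))  # roughly [3.87, 7.75]
print(grain_size([2.0, 2.0]))                   # 4.0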
After the supernode transformation, we obtain a new dependence structure (J s ; D s )
where each node in the supernode index space J s is a supernode. The supernode
dependence matrix D s in general is different from D. As discussed in [6], partitioning
hyperplanes defined by matrix H have to satisfy HD ≥ 0, i.e., each entry in the
product matrix is greater than or equal to zero, in order to have (J s , D s ) computable.
That is, all dependence vectors in D s are contained in a convex cone. Matrix E = H −1
is a matrix of extreme vectors of dependence vectors. In other words, each dependence
vector d i
in D s can be expressed as a non-negative linear combination of columns
in E. Also, the n columns of E are the n vectors collinear with the n sides of the
parallelepiped supernode.
For an algorithm A = (J, D), a linear schedule [12] is defined as
σ π (j) = ⌊ (π j − min{π i : i ∈ J}) / disp(π) ⌋,
where π is the linear schedule vector such that π d > 0 for each d ∈ D, and
disp(π) = min{π d : d ∈ D}. A linear schedule assigns each node j ∈ J an execution step
with dependence relations respected. The length of a linear schedule is defined as
max{σ π (j 1 ) − σ π (j 2 ) : j 1 , j 2 ∈ J}. Note that j 1 and j 2
for which σ π (j 1 ) − σ π (j 2 ) is maximum are always extreme points
in the iteration index space.
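The following brute-force Python sketch illustrates these definitions on a small rectangular index space; it is not part of the scheduling theory, just an illustration, and it ignores the constant offset in σ π since only differences matter for the length.

from itertools import product

def disp(pi, D):
    # Displacement of schedule vector pi: the minimum of pi . d over d in D.
    return min(sum(p * di for p, di in zip(pi, d)) for d in D)

def schedule_length(pi, J, D):
    # Length of the linear schedule: the largest difference of schedule steps.
    dots = [sum(p * ji for p, ji in zip(pi, j)) for j in J]
    return (max(dots) - min(dots)) // disp(pi, D)

# A small 4 x 6 rectangular index space with dependences (1,0), (0,1), (1,1):
J = list(product(range(4), range(6)))
D = [(1, 0), (0, 1), (1, 1)]
print(schedule_length((1, 1), J, D))   # (3 + 5) / 1 = 8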
The execution of an algorithm (J; D) is as follows. We apply a supernode transformation
(H, R, g) and obtain (J s , D s ). An optimal linear schedule π can be found for
(J s , D s ). The execution based on the linear schedule alternates between computation
phases and communication phases. That is, in step i, we assign supernodes
with the same σ π value i to available processors. After each processor finishes all the
computations of a supernode, processors communicate to pass messages. After the
communication is done, we go to step i+1. The total amount of data to be transferred
in a communication phase by a single processor is denoted by V (g; R). Hence, the
total running time of an algorithm depends on all of the following: (J, D), H, g, R, π,
t c , t s , and t t . Let T be the total running time. The problem of finding an optimal
supernode transformation is an optimization problem of finding parameters H,
g, R, and π, such that the total running time, T((J, D), H, g, R, π, t c , t s , t t ),
Figure
1. The iteration index space before supernode transformations.
is minimized. This paper addresses the problems of how to find an optimal grain
size g and optimal supernode relative length vector R. How to find an optimal linear
schedule for (J s ; D s ) can be found in [1, 12]. How to find an optimal H in general
remains open, and discussed in [6, 7].
Once the supernode partitioning parameters are chosen, the optimal number of
processors in a system can be determined as the maximum number of independent
supernodes in a computation phase, which is the maximum number of supernodes to
which the linear schedule σ π assigns the same value:
max i |{ j ∈ J s : σ π (j) = i }|,
where i is the constant assigned by the linear schedule σ π to all supernodes in the
same phase, and J s is the supernode index space.
Example 2.1 To illustrate the notions introduced above, we show an example
of supernode transformation applied to an algorithm and how it affects algorithm's
total running time. Let's assume t c = 10 μs, t s = 300 μs, and t t = 0.1 μs. Consider
algorithm (J, D) where J is a 100 × 200 iteration index space.
The algorithm consists of two loops with three dependence vectors:
d 1 = (1, 0), d 2 = (0, 1), and d 3 = (1, 1). The optimal linear schedule vector for this algorithm is π = (1, 1). The length
of the schedule is 298.
Figure 1 shows the iteration index
space with linear schedule wave fronts. If a supernode consists of only one iteration,
then, in the execution of the algorithm, there are 299 computation phases and 298
communication phases. One computation phase takes t c = 10 μs. Assuming that each
iteration produces one eight byte data item, and that there are 50 bytes of header
overhead, communication time takes t s + 58 t t = 305.8 μs per phase. The
total running time is T 1 = 299 × 10 μs + 298 × 305.8 μs ≈ 94.1 ms, with 100 processors.
The sequential running time is 100 × 200 × t c = 200 ms.
Figure 2. The supernode index space where each supernode is a square containing
four iterations.
Consider now a supernode transformation applied to the algorithm. Let the hyperplane
matrix H be the 2 × 2 identity matrix.
Matrix H defines two families of lines parallel to the two axes. For this matrix H,
the supernode volume is simply the product of the side lengths because the two lines are orthogonal. Let L = (2, 2) be
the length vector; then each supernode is a square containing four iterations.
Figure 1 shows one supernode outlined with a dashed square. The supernode index
space and dependence matrix are then J s , a 50 × 100 supernode index space,
which is shown in Figure 2, and D s = D. The optimal linear schedule vector for
this (J s , D s ) is π = (1, 1) and the schedule length is 148.
Figure 2 shows the supernode index space with the linear schedule wave fronts. The
computations of one supernode take g t c = 40 μs. Communication in one phase takes
300 + 0.1 × 66 = 306.6 μs, assuming a processor has to send two data items to the
neighboring processor. The total running time is T 2 = 149 × 40 μs + 148 × 306.6 μs ≈ 51.3 ms,
with 50 processors. In this simple example the speedup between T 1 and T 2 is close to 2.
Note that supernode transformation parameters are chosen arbitrarily and are not
optimal. Later, it is shown that the total running time can be improved further by
an optimal solution.
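The bookkeeping of Example 2.1 can be reproduced with a short Python sketch such as the one below, which evaluates the total running time for square l × l supernodes on the 100 × 200 index space; it extends the example's assumption that a processor sends its boundary data items to the neighboring processor in one message per phase, ignores partial supernodes at the index space boundary, and uses the parameter values assumed above.

def total_time(l, N1=100, N2=200, t_c=10.0, t_s=300.0, t_t=0.1,
               item_bytes=8, header_bytes=50):
    # Square l x l supernodes on an N1 x N2 index space, diagonal wavefront schedule.
    n1, n2 = -(-N1 // l), -(-N2 // l)          # supernode grid dimensions (ceiling division)
    comm_phases = (n1 - 1) + (n2 - 1)          # linear schedule length
    comp_phases = comm_phases + 1
    comp = l * l * t_c                         # one supernode per processor per phase
    comm = t_s + t_t * (l * item_bytes + header_bytes)  # one boundary message per phase
    return comp_phases * comp + comm_phases * comm

for l in (1, 2, 4, 6, 8):
    print(l, round(total_time(l) / 1000.0, 1), 'ms')   # 1 and 2 reproduce T_1 and T_2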
3 The total running time
This section discusses a total running time model and how its components depend
on the grain size and shape of a supernode transformation.
Figure 3. Dependences between the total running time components and the supernode
size and shape.
The total running time is a sum of the computation time and the communication
time which are multiples of the number of phases in the execution. The linear
schedule length, as defined in the previous section, corresponds to the number of communication
phases in the execution. The number of computation phases is, usually,
one more than the number of communication phases. Supernode transformations often
generate supernodes containing fewer iterations at the boundary of index space.
Thus, the first and/or the last computation phases are often shorter than other computation
phases. For this reason, we will assume that the number of computation
phases and the number of communication phases are equal, and are equal to the
linear schedule length, denoted by P . The total running time is then the sum of the
computation time T comp and communication time T comm in one phase, multiplied by
the number of phases P: T = (T comp + T comm ) P.
Figure
3 shows how the components in the total running time depend on the
supernode grain size and shape. Computation time T comp depends on supernode
size g only. Communication time T comm in a single communication phase depends
on c, the number of different neighbors which a processor has to send a message
to, and V (g; R), the total amount of data transmitted by a single processor in a
communication phase. The number of messages c depends on H, the algorithm,
and how supernodes are assigned to processors. V , in general, depends on both the
supernode size and shape. The scheduling length P is a function of the schedule vector
π, p (a point in J s such that σ π (p) is maximum), q (a point in J s such that σ π (q) is minimum),
and the distance between p and q. As proven later in this section, π,
p and q depend on the shape only. In other words, for two supernode transformations
that differ only in supernode size, π, p, and q (as relative points of the supernode index space)
should be the same with possibly different distances between p and q.
Consider algorithm A s = (J s , D s ) with supernode index space J s and dependence
matrix D s obtained by applying a supernode transformation (H, R, g) to algorithm
A = (J, D). According to [6], in a valid supernode transformation with hyperplanes
H, all dependence vectors d 2 D should be contained in the convex cone of the n
extreme directions forming the parallelepiped supernode. Hence, if the supernode
grain size is reasonably large, and the components of d 2 D are reasonably small,
then all dependence vectors d 2 D, if originating at the extreme point of the convex
cone of the parallelepiped supernode, should be contained inside the parallelepiped
supernode. Therefore, it is reasonable to assume in this paper that the components
of the dependence matrix D s are 0, 1, or −1. The following lemma gives sufficient
conditions for this assumption to be true:
Lemma 3.1 Components of dependence matrix D s , in the transformed algorithm,
take values from {0, 1, −1} if l j ≥ (Hd) j for each d ∈ D and each j, 1 ≤ j ≤ n, where (Hd) j is
the j-th component of vector Hd.
Consider two supernode transformations, (H, R, g 1 ) and (H, R, g), with the same
H and R, and different supernode sizes, g 1 and g. Let J s1 and J s be the corresponding
supernode index spaces. Lemma 3.2 shows how an index space changes as the
supernode size changes.
Lemma 3.2 Let J s1 be the supernode index set with supernode size g 1 and J s be
the supernode index set with supernode size g. Then, ignoring rounding at the index space boundary,
J s = { (g 1 /g)^(1/n) j : j ∈ J s1 },
where n is the number of loop nests in the original algorithm.
The dependence matrix does not change as supernode size changes from g 1 to g.
This follows from the assumption that components of dependence vectors in the transformed
algorithm take values from {0, 1, −1} and from the fact that the supernode shape
defined by H and R does not change. The following lemma shows that an optimal
linear schedule does not change as supernode size g changes.
Lemma 3.3 Optimal linear schedule vectors for two supernode transformations
which differ only in supernode size, are identical.
The proofs of the above three lemmas can be found in [2].
4 One parameter communication model
In this section we briefly summarize how to find the optimal grain size g and relative
length vector R when the one parameter communication model is used. This model
applies to the cases where message startup time is much larger than data transmission
time, and data transmission can be overlapped by other useful processing. In this
case, the total running time becomes T = (g t c + t s ) P. Proofs of theorems and detailed
derivation can be found in [2].
Theorem 4.1 For an algorithm (J, D) and supernode transformation with H and
R, the optimal supernode size is:
g o = t s / ((n − 1) t c ).
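The closed form can be checked numerically; the following Python sketch compares it with a brute-force scan of the (normalized) one parameter running time model, using the machine parameters assumed in Example 2.1.

def one_param_time(g, n, t_c, t_s):
    # One parameter model up to the constant factor P_g1 * g_1^(1/n):
    # T(g) is proportional to (g * t_c + t_s) * g^(-1/n).
    return (g * t_c + t_s) * g ** (-1.0 / n)

def optimal_grain(n, t_c, t_s):
    return t_s / ((n - 1) * t_c)

n, t_c, t_s = 2, 10.0, 300.0
print(optimal_grain(n, t_c, t_s))                                        # 30.0
print(min(range(1, 201), key=lambda g: one_param_time(g, n, t_c, t_s)))  # 30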
The shape of supernodes is defined by two supernode transformation parameters:
H and R. For a given H and an optimal grain size g o , the problem of finding an
optimal R for general cases is formulated as a nonlinear programming problem [2].
The optimal relative length vector R and optimal linear schedule vector are given in
the following for a special case where index set J is constant-bounded (loop bounds
are constant), and the partitioning hyperplane matrix H = I, the identity
matrix, i.e., both the index space and the supernodes are hyperrectangles. Also, let
1 be a vector whose components are all one.
Theorem 4.2 Consider algorithm (J, D) where J = {(j 1 , ..., j n ) : 1 ≤ j i ≤ N i , 1 ≤ i ≤ n},
supernode transformation (H, R, g) and transformed algorithm (J s , D s ). If H =
I, the identity matrix, and I ⊆ D s , then an optimal linear schedule vector, π, and
supernode relative length vector, R o = (R 1 , ..., R n ), are given by:
π = 1 = (1, 1, ..., 1) and R i = N i / (N 1 N 2 · · · N n )^(1/n) , for 1 ≤ i ≤ n,
or, equivalently, the optimal supernode side lengths l i are proportional to N i .
The above theorem implies that the optimal supernode shape is similar to the
shape of the original index set J so that the resulting supernode index set J s is a
hypercube with equal sides.
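A small Python sketch of this computation, under the reconstruction R i = N i /(N 1 · · · N n )^(1/n) given above, is shown below; it is an illustration only.

def optimal_relative_lengths(N):
    # R_i = N_i / (N_1 * ... * N_n)^(1/n): the supernode is similar in shape to
    # the index space, so the supernode index space becomes a hypercube.
    n = len(N)
    prod = 1.0
    for Ni in N:
        prod *= Ni
    geo = prod ** (1.0 / n)
    return [Ni / geo for Ni in N]

print(optimal_relative_lengths([100, 200]))    # about [0.707, 1.414], i.e. (1/sqrt(2), sqrt(2))
print(optimal_relative_lengths([60, 60, 60]))  # [1.0, 1.0, 1.0] for a cubic index space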
As derived in [2], the optimal grain size for the algorithm in Example 2.1 is
g o = 30 and the optimal relative length vector is R o = (1/√2, √2).
Table 1 shows how the total
running time varies for different supernode grain sizes with a fixed square supernode
shape. The total running time is the shortest at the optimal grain size. Further
improvement in the total running time is achieved when an optimal relative side
length vector is used, as shown in Table 2 where the total running time is computed
for different supernode relative vectors with the supernode size close to the optimal.
Note that the values for supernode size and supernode side lengths computed
based on Theorems 4.1 and 4.2 may not be integral. We should choose approximate
integral values for supernode side lengths, L, such that they are close to the optimal
values and the volume of the resulting supernode is close to the optimal grain size.
A simple heuristic is to use approximate values for L such that the volume of the
resulting supernode is greater than or equal to g o , because the total running time
increases faster for values of g < g o and slower for values g > g o . Alternatively, the
total running time can be evaluated for different approximate values and the best
approximation should be used.
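The following Python sketch illustrates this rounding heuristic; the candidate enumeration (floor/ceiling of each optimal side length) and the tie-breaking by the modeled running time are choices of this sketch, not prescribed by the analysis.

from itertools import product

def integral_candidates(L_opt):
    # All combinations of the floor and ceiling of each optimal (real) side length.
    choices = [(max(1, int(l)), int(l) + 1) for l in L_opt]
    return sorted(set(product(*choices)))

def pick_side_lengths(L_opt, running_time):
    # Evaluate the running time model for each candidate and keep the best one.
    return min(integral_candidates(L_opt), key=running_time)

# Optimal real-valued lengths of roughly (3.87, 7.75) for g_o = 30:
print(integral_candidates((3.87, 7.75)))   # [(3, 7), (3, 8), (4, 7), (4, 8)]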
Table 1. Total running time for different supernode sizes and square supernode
shape (or close to square).
Table 2. Total running time for different supernode shapes and supernode sizes
close to 30.
5 Two parameter communication model
This section discusses how to find the optimal grain size and shape when the two
parameter communication model is used. In this model, the communication cost is
modeled by an affine function of the amount of data to be transferred by a single
processor in a communication phase.
The amount of data to be transferred V (g; R), in general, is a complicated function
of both the supernode size and supernode shape [7]. To simplify the problem, we
consider the amount of data to be transferred as a function of the supernode size only.
Intuitively, neighboring supernodes may need data from the surface of a supernode.
In general, the amount of data should be proportional to the area of surfaces of all
dimensionalities of a supernode. Thus we use the following expression as the amount
of data to be transferred:
V(g) = α n−1 g^((n−1)/n) + α n−2 g^((n−2)/n) + · · · + α 1 g^(1/n) + α 0 ,
where the α i 's are constants. Hence the amount of data to be transferred depends on the
supernode size only and the total running time of supernode transformation (H, R, g)
is:
T = (g t c + t s + t t V(g)) P.
According to Lemmas 3.1, 3.2 and 3.3 in Section 3, the number of phases is P = P g1 (g 1 /g)^(1/n),
where P g1 is the scheduling length of supernode transformation (H, R, g 1 ).
To find the optimal supernode size, we solve the equation ∂T/∂g = 0:
∂T/∂g = P g1 g 1 ^(1/n) [ ((n−1)/n) t c g^(−1/n) − (1/n) t s g^(−1/n − 1) + t t Σ k=0..n−1 ((k−1)/n) α k g^((k−1)/n − 1) ] = 0.
Multiplying both sides by n g^(1/n + 1) / (P g1 g 1 ^(1/n)) gives
(n − 1) t c g − t s + t t Σ k=0..n−1 (k − 1) α k g^(k/n) = 0.
Substituting x = g^(1/n) , we obtain the order n polynomial
(n − 1) t c x^n + t t Σ k=0..n−1 (k − 1) α k x^k − t s = 0.     (6)
A real positive root of equation (6) to the power n will give the optimal supernode
size.
In the case of a two dimensional algorithm, n = 2, the optimal supernode size becomes:
g o = (t s + α 0 t t ) / t c .
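For general n, the polynomial derived above can be solved numerically; the following Python sketch builds its coefficients and returns the candidate grain sizes from its real positive roots (each candidate should then be checked in the running time model). The α k values in the demonstration are illustrative assumptions.

import numpy as np

def candidate_grain_sizes(n, t_c, t_s, t_t, alpha):
    # Polynomial in x = g^(1/n) from the derivation above:
    # (n-1)*t_c*x^n + t_t * sum_{k=0}^{n-1} (k-1)*alpha_k*x^k - t_s = 0
    coeffs = [0.0] * (n + 1)                   # numpy expects highest order first
    coeffs[0] = (n - 1) * t_c                  # x^n term
    for k in range(n):
        coeffs[n - k] += t_t * (k - 1) * alpha[k]
    coeffs[n] -= t_s                           # constant term
    roots = np.roots(coeffs)
    return [r.real ** n for r in roots if abs(r.imag) < 1e-9 and r.real > 0]

# n = 2 with alpha = (alpha_0, alpha_1); the closed form is (t_s + alpha_0*t_t)/t_c.
print(candidate_grain_sizes(2, 10.0, 300.0, 0.1, [50.0, 16.0]))  # about [30.5]
print((300.0 + 50.0 * 0.1) / 10.0)                               # 30.5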
Example 5.1 For the algorithm in Example 2.1, assuming V
g,
with the optimal supernode size is g
Taking an approximate supernode size, we get a supernode computation time
of 309 μs and the corresponding
total running time T, and we need 20 processors.
Table
3 shows the total running time computed for different supernode sizes. It shows
that the total running time is shorter for supernode sizes closer to the optimal value.
Since we assumed that the amount of data to be transferred depends on the supernode
size only, an optimal supernode shape can be computed using the nonlinear program
discussed for the case of constant communication time. The nonlinear program and
its derivation are given in [2].
Table 3. Total running time for different supernode sizes and square supernode
shape (or close to square).
6 Related work
In this section we give a brief overview of previous related work. Irigoin and Triolet
[4] proposed the supernode partitioning technique for multiprocessors in 1988. The
idea was to combine multiple loop iterations in order to provide vector statements,
parallel tasks and data reference locality. Ramanujam and Sadayappan [6] studied
tiling multidimensional iteration spaces for multiprocessors. They showed the equivalence
between the problem of finding a partitioning hyperplane matrix H, and the
problem of finding a cone for a given set of dependence vectors, i.e., finding a matrix
of extreme vectors E. They presented an approach to determining partitioning
hyperplanes to minimize communication volume. They also discussed a method for
finding an optimal supernode size. Reference [7] discusses the choice of cutting hyperplanes
and supernode shape with the goal of minimizing the communication volume
in a scalable environment. It includes a very good description of the tiling technique.
In [5], the optimal tile size is studied under a different model and assumptions. It is
assumed that an N 1 × · · · × N n hypercube index space is mapped to a P 1 × · · · × P n
processor space and the optimal side lengths of the hypercube tile are given as N i /P i
for a certain kind of dependence structure. In [8], an approach to optimizing tile size
and shape in two dimensional algorithms is presented, based on the space-time mapping used in
systolic synthesis. In [2], we give a detailed analysis of optimal supernode size and
shape in a model with constant communication time.
Compared to the related work, our optimization criterion is to minimize the total
running time, rather than communication volume or ratio between communication
and computation volume; further, we used a different approach where we specify a
supernode transformation by a grain size of supernodes, the relative side length vector
R and n partitioning hyperplanes so that the three variables become independent
(their optimal values are interdependent in general though).
Hence our method can be applied to find the optimal grain size for any uniform
dependence algorithm with any partitioning hyperplanes. After we have the optimal
grain size, we can determine the optimal shape and the partitioning hyperplanes.
7 Conclusion
In this paper, how to find the optimal supernode size and shape is studied, in the
context of supernode transformations, with the goal of minimizing the total running
time. A general model of the total running time is described. We derived closed form
analytical expressions for the optimal supernode size for the one parameter model
and the two parameter model with doubly nested loops. A closed form expression for
the optimal relative length vector is also provided for the one parameter model with
constant bounded loop iteration space. A nonlinear program formulation which can
be solved by numeric methods is provided for general cases. The results can be used
by a parallelizing compiler for a distributed computer system to decide the grain size
and shape when breaking a task into subtasks.
--R
"Linear Scheduling is Close to Optimality,"
"Modeling Optimal Granularity When Adapting Systolic Algorithms to Transputer Based Supercomputers,"
"Supernode Partitioning,"
"Optimal Tile Size Adjustment in Compiling General DOACROSS Loop Nests,"
"Tiling Multidimensional Iteration Spaces for Multicom- puters,"
"(Pen)-Ultimate Tiling"
"Optimal Tiling,"
"Automatic Blocking of Nested Loops"
"Evaluating Compiler Optimizations for Fortran D,"
New Jersey
"Time Optimal Linear Schedules for Algorithms With Uniform Dependencies,"
"Independent Partitioning of Algorithms with Uniform Depen- dencies,"
Supercompilers for Parallel and Vector Computers
"More Iteration Space Tiling,"
--TR
--CTR
Panayiotis Tsanakas , Nectarios Koziris , George Papakonstantinou, Chain Grouping: A Method for Partitioning Loops onto Mesh-Connected Processor Arrays, IEEE Transactions on Parallel and Distributed Systems, v.11 n.9, p.941-955, September 2000
Jingling Xue , Wentong Cai, Time-minimal tiling when rise is larger than zero, Parallel Computing, v.28 n.6, p.915-939, June 2002
Maria Athanasaki , Aristidis Sotiropoulos , Georgios Tsoukalas , Nectarios Koziris, Pipelined scheduling of tiled nested loops onto clusters of SMPs using memory mapped network interfaces, Proceedings of the 2002 ACM/IEEE conference on Supercomputing, p.1-13, November 16, 2002, Baltimore, Maryland
N. Koziris , A. Sotiropoulos , G. Goumas, A pipelined schedule to minimize completion time for loop tiling with computation and communication overlapping, Journal of Parallel and Distributed Computing, v.63 n.11, p.1138-1151, November
Georgios Goumas , Nikolaos Drosinos , Maria Athanasaki , Nectarios Koziris, Automatic parallel code generation for tiled nested loops, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
R. Andonov , S. Balev , S. Rajopadhye , N. Yanev, Optimal semi-oblique tiling, Proceedings of the thirteenth annual ACM symposium on Parallel algorithms and architectures, p.153-162, July 2001, Crete Island, Greece
Rashmi Bajaj , Dharma P. Agrawal, Improving Scheduling of Tasks in a Heterogeneous Environment, IEEE Transactions on Parallel and Distributed Systems, v.15 n.2, p.107-118, February 2004
Georgios Goumas , Nikolaos Drosinos , Maria Athanasaki , Nectarios Koziris, Message-passing code generation for non-rectangular tiling transformations, Parallel Computing, v.32 n.10, p.711-732, November, 2006
Edin Hodzic , Weijia Shang, On Time Optimal Supernode Shape, IEEE Transactions on Parallel and Distributed Systems, v.13 n.12, p.1220-1233, December 2002
Maria Athanasaki , Aristidis Sotiropoulos , Georgios Tsoukalas , Nectarios Koziris , Panayiotis Tsanakas, Hyperplane Grouping and Pipelined Schedules: How to Execute Tiled Loops Fast on Clusters of SMPs, The Journal of Supercomputing, v.33 n.3, p.197-226, September 2005
Peizong Lee , Zvi Meir Kedem, Automatic data and computation decomposition on distributed memory parallel computers, ACM Transactions on Programming Languages and Systems (TOPLAS), v.24 n.1, p.1-50, January 2002 | distributed memory multicomputer;supernode partitioning;minimizing running time;parallelizing compilers;tiling |
629452 | Power-Aware Localized Routing in Wireless Networks. | AbstractRecently, a cost aware metric for wireless networks based on remaining battery power at nodes was proposed for shortest-cost routing algorithms, assuming constant transmission power. Power-aware metrics, where transmission power depends on distance between nodes and corresponding shortest power algorithms were also recently proposed. We define a new power-cost metric based on the combination of both node's lifetime and distance-based power metrics. We investigate some properties of power adjusted transmissions and show that, if additional nodes can be placed at desired locations between two nodes at distance d, the transmission power can be made linear in d as opposed to d^\alpha dependence for \alpha\geq 2. This provides basis for power, cost, and power-cost localized routing algorithms where nodes make routing decisions solely on the basis of location of their neighbors and destination. The power-aware routing algorithm attempts to minimize the total power needed to route a message between a source and a destination. The cost-aware routing algorithm is aimed at extending the battery's worst-case lifetime at each node. The combined power-cost localized routing algorithm attempts to minimize the total power needed and to avoid nodes with a short battery's remaining lifetime. We prove that the proposed localized (where each node makes routing decisions based solely on the location of itself, its neighbors, and destination) power, cost, and power-cost efficient routing algorithms are loop-free and show their efficiency by experiments. | Introduction
In this paper we consider the routing task, in which a message is to be sent from a source node
to a destination node (in a sensor or ad hoc wireless network). Due to propagation path loss, the
transmission radii are limited. Thus, routes between two hosts in the network may consist of hops
through other hosts in the network. The nodes in the network may be static (e.g. thrown from an
aircraft to a remote terrain or a toxic environment), static most of the time (e.g. books, projectors,
furniture) or moving (vehicles, people, small robotic devices). Wireless networks of sensors are
likely to be widely deployed in the near future because they greatly extend our ability to monitor
and control the physical environment from remote locations and improve our accuracy of
information obtained via collaboration among sensor nodes and online information processing at
those nodes. Networking these sensors (empowering them with the ability to coordinate amongst
themselves on a larger sensing task) will revolutionize information gathering and processing in
many situations. Sensor networks have been recently studied in [EGHK, HCB, HKB, KKP]. A
similar wireless network that received significant attention in recent years is ad hoc network [IETF,
MC]. Mobile ad hoc networks consist of wireless hosts that communicate with each other in the
absence of a fixed infrastructure. Some examples of the possible uses of ad hoc networking include
soldiers on the battlefield, emergency disaster relief personnel, and networks of laptops.
Macker and Corson [MC] listed qualitative and quantitative independent metrics for judging the
performance of routing protocols. Desirable qualitative properties [MC] include: distributed
operation, loop-freedom, demand-based operation and 'sleep' period operation, while hop count and
delivery rates are among quantitative metrics. We shall further elaborate on these properties and
metrics, in order to address the issue of routing in wireless networks while trying to minimize the
energy consumption and/or reduce the demands on nodes that have significantly depleted batteries.
This is an important problem since battery power at each node is limited. Our final goal is to design
routing protocols with the following properties.
a) Minimize energy required per routing task. Hop count was traditionally used to measure
energy requirement of a routing task, thus using constant metric per hop. However, if nodes can
adjust their transmission power (knowing the location of their neighbors) then the constant metric
can be replaced by a power metric that depends on distance between nodes [E, RM, HCB]. The
distance between neighboring nodes can be estimated on the basis of incoming signal strengths (if
some control messages are sent using fixed power). Relative coordinates of neighboring nodes can
be obtained by exchanging such information between neighbors [CHH]. Alternatively, the location
of nodes may be available directly by communicating with a satellite, using GPS (Global
Positioning System), if nodes are equipped with a small low power GPS receiver. We will use
location information in making routing decisions as well, to minimize energy per routing task.
b) Loop-freedom. The proposed routing protocols should be inherently loop-free, to avoid
timeout or memorizing past traffic as cumbersome exit strategies.
c) Maximize the number of routing tasks that network can perform. Some nodes participate in
routing packets for many source-destination pairs, and the increased energy consumption may
result in their failure. Thus pure power consumption metric may be misguided in the long term
[SWR]. A longer path that passes through nodes that have plenty of energy may be a better solution
[SWR]. Alternatively, some nodes in the sensor or ad hoc network may be temporarily inactive,
and power consumption metric may be applied on active nodes.
d) Minimize communication overhead. Due to limited battery power, the communication
overhead must be minimized if the number of routing tasks is to be maximized. Proactive methods that
maintain routing tables with up-to-date routing information or global network information at each
node are certainly an unsatisfactory solution, especially when node mobility is high with respect to
data traffic. For instance, shortest path based solutions are too sensitive to small changes in local
topology and activity status (the latter does not even involve node movement).
e) Avoid memorizing past traffic or routes. Solutions that require nodes to memorize routes or
past traffic are sensitive to node queue size, changes in node activity and node mobility while
routing is ongoing (e.g. monitoring the environment). Flexibility in selecting routes is thus preferred.
f) Localized algorithms. Localized algorithms [EGHK] are distributed algorithms that resemble
greedy algorithms, where simple local behavior achieves a desired global objective. In a localized
routing algorithm, each node decides to which neighbor to forward the message based
solely on the location of itself, its neighboring nodes, and the destination. While neighboring nodes
may update each other's location whenever an edge is broken or created, the accuracy of the destination
location is a serious problem. In some cases, such as monitoring the environment by sensor networks,
the destination is a fixed node known to all nodes (i.e. a monitoring center). Our proposed algorithms
are directly applicable in such environments. All non-localized routing algorithms proposed in the
literature are variations of the shortest weighted path algorithm (e.g. [CN, LL, RM, SWR]).
g) Single-path routing algorithms. The task of finding and maintaining routes in mobile
networks is nontrivial since host mobility causes frequent unpredictable topological changes. Most
previously proposed position based routing algorithms (e.g. [BCSW, KV]) for wireless ad hoc
networks were based on forwarding the actual message along multiple paths toward an area where
destination is hopefully located, hoping to achieve robustness. However, we have shown in our
previous work that single-path strategies may be even more robust (for instance, they can guarantee
delivery [BMSU]) and with less communication overhead. The significant communication
overhead can be avoided if a variant of source-initiated on-demand routing strategy [BMJHJ, RT]
is applied. In this strategy, the source node issues several search 'tickets' (each ticket is a 'short'
message containing the sender's id and location, the destination's id and best known location, the time
when that location was reported, and a constant amount of additional information) that will look for
the exact position of the destination node. When the first ticket arrives at the destination node D, D will
report back to the source with a brief message containing its exact location, possibly creating a
route for the source. The source node then sends the full data message (a 'long' message) toward the exact
location of the destination. The efficiency of the destination search depends on the corresponding location
update scheme. A quorum based location update scheme is being developed in [S2]. Other schemes
may be used, with various trade-offs between the success and flooding rates (including an
occasional flooding). If the routing problem is divided as described, the mobility issue is
algorithmically separated from the routing issue, which allows us to consider (in this paper) only
the case of static networks with known destination in our algorithms and experiments. The choice is
justified whenever the destination does not move significantly between its detection and message
delivery, and information about neighboring nodes is regularly maintained. Yet another routing
method may forward message toward imprecise destination location, hoping that closer nodes will
locate destination more accurately.
h) Maximize delivery rate. Our localized algorithms achieve very high delivery rates for dense
networks, while further improvements are needed for sparse networks. We have designed power,
cost, and power-cost routing algorithms that guarantee delivery, which is an extension to be
reported elsewhere [SD].
The final important goal of a routing algorithm is to handle node mobility with proper location
update schemes. This issue seems to be the most complex of all discussed here, as argued in an
upcoming report [S].
Ad hoc and sensor networks are best modeled by minpower graphs constructed in the
following way. Each node A has its transmission range t(A). Two nodes A and B in the network are
neighbors (and thus joined by an edge) if the Euclidean distance between their coordinates in the
network is less than the minimum between their transmission radii (i.e. d(A,B) < min {t(A), t(B)}).
If all transmission ranges are equal, the corresponding graph is known as the unit graph. The
minpower and unit graphs are valid models when there are no obstacles in the signal path. Ad hoc
and sensor networks with obstacles can be modeled by subgraphs of minpower or unit graphs. The
properties of power metrics, the proposed algorithms and their loop free properties in this paper are
valid for arbitrary graphs. We have used, however, only unit graphs in our experiments.
A number of protocols for achieving efficient routing have been recently proposed. They differ
in the approach used for searching a new route and/or modifying a known route, when hosts move.
The surveys of these protocols, that do not use geographic location in the routing decisions, are
given in [BMJHJ, IETF, RS, RT]. The power awareness in these protocols is limited to the amount
of control messages sent and degree of message flooding. While the computational power of the
devices used in the network is rapidly increasing, the lifetime of batteries is not expected to
improve much in the future. We see a clear need for improvement in power consumption in
existing MAC protocols and routing algorithms [SWR].
In the next section, we shall review known power aware metrics and routing algorithms. In
section 3, existing routing protocols that use geographic location or consider power in their route
decisions are reviewed. Section 4 discusses the effect of power awareness on the routing decisions
in GPS based algorithms. Section 5 proposes three distributed (localized) algorithms aimed at
extending node and/or network life. In section 6, we prove that these algorithms are loop-free.
Their performance evaluation is given in sections 7 and 8.
2. Existing power aware metrics and routing algorithms
In most routing protocols the paths are computed based on minimizing hop count or delay.
When transmission power of nodes is adjustable, hop count may be replaced by power
consumption metric. Some nodes participate in routing packets for many source-destination pairs,
and the increased energy consumption may result in their failure. A longer path that passes through
nodes that have plenty of energy may be a better solution [SWR].
The algorithm [SWR] proposed to use a function f(A) to denote node A's reluctance to forward
packets, and to choose a path that minimizes the sum of f(A) for nodes on the path. This routing
protocol [SWR] addresses the issue of energy critical nodes. As a particular choice for f, [SWR]
proposes f(A)=1/g(A) where g(A) denotes the remaining lifetime (g(A) is normalized to be in the
interval [0,1]). Thus reluctance grows significantly when lifetime approaches 0. The other metric
used in [SWR] is aimed at minimizing the total energy consumed per packet. However, [SWR]
merely observes that the routes selected when using this metric will be identical to routes selected
by shortest hop count routing, since the energy consumed in transmitting (and receiving) one
packet over one hop is considered constant. For each of the two proposed power consumption
metrics (cost and hop count), [SWR] assigns weights to nodes or edges, and then refers to non-localized
Dijkstra's algorithm for computing shortest weighted path between source and
destination. We also observed that the validation of power aware metrics in [SWR] was done on
random graphs where each pair of nodes is joined by an edge with a fixed probability p.
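As an illustration of this shortest weighted path approach, the sketch below charges the reluctance f(A) = 1/g(A) of each node to the edges leading into it and runs Dijkstra's algorithm; the graph representation and function names are assumptions made for the example.

import heapq

def sp_cost_route(neighbors, g, source, dest):
    # neighbors: dict node -> iterable of neighboring nodes
    # g: dict node -> remaining lifetime, assumed here to lie in (0, 1]
    # Edge (u, v) is charged the reluctance f(v) = 1/g(v) of its endpoint v.
    dist = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, u = heapq.heappop(heap)
        if u == dest:
            break
        if d > dist.get(u, float("inf")):
            continue
        for v in neighbors[u]:
            nd = d + 1.0 / g[v]          # node cost transferred to the incoming edge
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(heap, (nd, v))
    path, node = [], dest
    while node != source:                # assumes dest is reachable from source
        path.append(node)
        node = prev[node]
    return [source] + path[::-1]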
Rodoplu and Meng [RM] proposed a general model in which the power consumption between
two nodes at distance d is u(d) = d^alpha + c for some constants alpha and c, and describe several properties of
power transmission that are used to find neighbors for which direct transmission is the best choice
in terms of power consumption. In their experiments, they adopted the model with u(d) = d^4 + 2*10^8,
which will be referred to as RM-model. They discuss that large-scale variations (modeled by
lognormal shadowing model) can be incorporated into the path loss formula, and that small-scale
variations (modeled by a Rayleigh distribution) may be handled by diversity techniques and
combiners at the physical layer. Rodoplu and Meng [RM] described a power aware routing
algorithm which runs in two phases. In the first phase, each node searches for its neighbors and
selects these neighbors for which direct transmission requires less power than if an intermediate
node is used to retransmit the message. This defines so called enclosure graph. In the second phase,
each possible destination runs distributed loop-free variant of non-localized Bellman-Ford shortest
path algorithm and computes shortest path for each possible source. The same algorithm is run
from each possible destination. The algorithm is thus proactive, resulting in significant overhead
for low data traffic volumes. We observe that, since the energy required to transmit from node A to
node B is the same as energy needed to transmit from node B to node A, the same algorithm may be
applied from each possible source, and used to discover the best possible route to each destination
node. Alternatively, it may be used to find the location of destination and the best route to it. Such
on-demand variant is a competitive routing protocol but requires path memorization, and may not
be energy efficient since a single transmission at larger radius may reach more nodes at once.
Ettus [E] showed that minimum consumed energy routing reduces latency and power
consumption for wireless networks utilizing CDMA, compared to minimum transmitted energy
algorithm (shortest path algorithm was used in experiments). Heizelman, Chandrakasan and
Balakrishnan [HCB] used signal attenuation to design an energy efficient routing protocol for
wireless microsensor networks, where destination is fixed and known to all nodes. They propose to
utilize a 2-level hierarchy of forwarding nodes, where sensors form clusters and elect a random
clusterhead. The clusterhead forwards transmissions from each sensor within its own cluster. This
scheme is shown to save energy under some conditions. However, clustering requires significant
communication overhead, routing algorithm is not localized, and the destination is not necessarily
fixed. Nevertheless, their simple radio model and metric is adopted in our paper, as follows.
In the simple radio model [HCB], the radio dissipates E_elec = 50 nJ/bit to run the transceiver
circuitry. Both sender and receiver node consume E_elec to transmit one bit between them. Assuming
d^2 energy loss, where d is the distance between nodes, the transmit amplifier at the sender node
consumes a further E_amp*d^2, where E_amp = 100 pJ/bit/m^2. Thus, to transmit a one-bit message at distance
d, the radio expends E_elec + E_amp*d^2, and to receive the message, the radio expends E_elec. In order to
normalize the constants, divide both expressions by E_amp, so that the radio expends T = E + d^2 for
transmission and P = E for reception, where E = E_elec/E_amp = 500 m^2. Note that T/P = 1 + d^2/E and
T + P = 2E + d^2. Therefore, in this model, referred to as the HCB-model, the power needed for
transmission and reception at distance d is u(d) = 2E + d^2.
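The two power formulas used throughout the rest of the paper can be written as simple functions; the sketch below merely restates u(d) = 2E + d^2 (HCB-model) and u(d) = d^4 + 2*10^8 (RM-model).

E = 500.0  # E_elec / E_amp, in m^2, for the HCB-model

def u_hcb(d):
    # Power to transmit and receive one bit over distance d (HCB-model).
    return 2 * E + d ** 2

def u_rm(d):
    # Power consumption between two nodes at distance d (RM-model).
    return d ** 4 + 2e8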
Chang and Tassiulas [CT1, CT2] independently proposed combining power and cost into a
single metric. Preliminary versions of this paper were published as technical report [SL-tr] and
presented at a conference [SL2]. In [CT1] they experimented with a metric that combines e_ij, the
energy for transmission on link ij with length d, with the remaining battery power at node i, using two
small constants l and c. In [CT2] they proposed a general metric of the form e_ij^a * E_i^(-b) * (E'_i)^c,
where E_i is the remaining and E'_i the initial energy at node i, and a, b and c are constants. They consider routing tasks with
fixed source-destination pairs, one-to-one [CT1] and one-to-many [CT2] cases. The power needed
for reception is not considered. Distributed non-localized Bellman-Ford shortest weighted path
algorithm is used. Their experiments indicate (a,b,c) = (1,50,50) as values that are close to the
optimal ones. Network lifetime is maximized when traffic is balanced among the nodes in proportion to their
energy reserves, instead of routing to minimize the absolute consumed power [CT1, CT2].
3. Existing GPS based routing methods
Most existing routing algorithms do not consider the power consumption in their routing
decisions. They are reviewed here in order to compare experimentally their power savings
performance with newly proposed algorithms. All described routing algorithms are localized,
demand-based and adapt well to 'sleep' period operation. Several GPS based methods were
proposed in 1984-86 by using the notion of progress. Define progress as the distance between the
transmitting node and receiving node projected onto a line drawn from transmitter toward the final
destination. A neighbor is in forward direction if the progress is positive; otherwise it is said to be
in backward direction. In the random progress method [NK], packets destined toward D are routed
with equal probability towards one neighboring node that has positive progress. In the NFP method
[HL], packet is sent to the nearest neighboring node with forward progress. Takagi and Kleinrock
proposed MFR (most forward within radius) routing algorithm, where packet is sent to the
neighbor with the greatest progress. The method is modified in [HL] by proposing to adjust the
transmission power to the distance between the two nodes. Finn [F] proposed a variant of random
progress method, called Cartesian routing, which 'allows choosing any successor node which makes
progress toward the packet's destination' [F]. The best choice depends on the complete topological
knowledge. Finn [F] adopted the greedy principle in his simulation: choose the successor node that
makes the best progress toward the destination. When no node is closer to the destination than
current node C, the algorithm performs a sophisticated procedure that does not guarantee delivery.
Recently, three articles [BCSW, KV, KSU] independently reported variations of fully
distributed routing protocols based on direction of destination. In the compass routing (or DIR)
method proposed by Kranakis, Singh and Urrutia [KSU], the source or intermediate node A uses
the location information for the destination D to calculate its direction. The location of one hop
neighbors of A is used to determine for which of them, say C, is the direction AC closest to the
direction of AD. The message m is forwarded to C. The process repeats until the destination is,
hopefully, reached. A counterexample showing that undetected loops can be created in directional
based methods is given in [SL]. The method is therefore not loop-free.
GEDIR routing algorithm [SL] is a variant of greedy routing algorithm [F] with a 'delayed'
failure criterion. GEDIR, MFR, and compass routing algorithms fail to deliver message if the best
choice for a node currently holding message is to return it to the previous node [SL]. Such criterion
reduced failure rate and provided fair comparison in our experiments. GEDIR and MFR algorithms
are inherently loop-free [SL]. The proofs are based on the observation that distances (dot products,
respectively) of nodes toward destination are decreasing. A routing algorithm that guarantees
delivery by finding a simple path between source and destination is described in [BMSU].
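The basic selection rules can be summarized in a few lines; the sketch below (with assumed data structures) picks the next hop for GEDIR (minimum remaining distance), MFR (maximum progress, measured by projection onto the line toward the destination) and DIR/compass routing (smallest angle toward the destination).

import math

def next_hop(method, current, neighbors, pos, dest):
    # pos: dict node -> (x, y); neighbors: list of neighbors of 'current'
    cx, cy = pos[current]
    dx, dy = pos[dest][0] - cx, pos[dest][1] - cy
    def dist_to_dest(n):
        return math.dist(pos[n], pos[dest])
    def progress(n):                      # projection of current->n onto current->dest
        nx, ny = pos[n][0] - cx, pos[n][1] - cy
        return (nx * dx + ny * dy) / math.hypot(dx, dy)
    def angle(n):                         # angle between the directions to n and to dest
        nx, ny = pos[n][0] - cx, pos[n][1] - cy
        cos_a = (nx * dx + ny * dy) / (math.hypot(nx, ny) * math.hypot(dx, dy))
        return math.acos(max(-1.0, min(1.0, cos_a)))
    if method == "GEDIR":
        return min(neighbors, key=dist_to_dest)
    if method == "MFR":
        return max(neighbors, key=progress)
    if method == "DIR":
        return min(neighbors, key=angle)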
The 2-hop variants of three basic routing algorithms were proposed in [SL]. The delivery rate of
GEDIR, compass routing (or DIR) or MFR algorithms can be improved if each node is aware of its
2-hop neighbors (neighbors of its neighbors). The node A currently holding the message may then
choose the node closest to the destination among all 1-hop and 2-hop neighbors, and forward the
message to its neighbor that is connected to the choice. In case of ties (that is, more than one
neighbor connected to the closest 2-hop neighbor), choose the one that is closest to destination.
This review did not include various flooding based or multiple paths routing algorithms or
methods for sending control messages to update positions [BCSW, KV, S2]. Our primary interest
in this paper is to examine power consumption under assumption that nodes have accurate
information about the location of their neighbors and destination node (e.g. static networks, source-initiated
on-demand routing, or networks with superb location update scheme).
4. Some properties of power adjusted transmissions
In this section we shall study the optimality of power adjusted transmissions, using a simple and
general radio model. We shall further generalize the model of [RM] (by adding a linear factor) and
assume that the power needed for the transmission and reception of a signal is u(d) = a*d^alpha + b*d + c.
The constant factor c in this expression for total energy consumption may also include the energy
consumed in computer processing and encoding/decoding at each station. Next, the leading
coefficient a can be adjusted to the physical environment, the unit of length considered, the unit size of a
signal (a bit, byte, or frame, for example), etc. In the RM-model alpha=4, a=1, b=0, c=2*10^8, while in
the HCB-model alpha=2, a=1, b=0, c=2E. These two models were used in our experiments.
Suppose that the sender S is able to transmit the packet directly to the destination D. Let us
examine whether energy can be saved by sending the packet to an intermediate node A between the
nodes and forwarding the packet from A to D. Let |SD|=d, |SA|=x, |AD|=d-x.
Lemma 1. If d > (c/(a(1-2^(1-alpha))))^(1/alpha) then there exists an intermediate node A between source S and
destination D so that the retransmission will save energy. The greatest power saving is obtained
when A is in the middle of SD.
Proof. The power needed to send the message directly from S to D is u(d) = a*d^alpha + b*d + c, while the
power needed to send via A is (a*x^alpha + b*x + c) + (a*(d-x)^alpha + b*(d-x) + c). The inequality
(a*x^alpha + b*x + c) + (a*(d-x)^alpha + b*(d-x) + c) < a*d^alpha + b*d + c is satisfied for g(x) = a(x^alpha + (d-x)^alpha - d^alpha) + c < 0. The
minimum for g(x) is obtained for g'(x) = 0, i.e. a(alpha*x^(alpha-1) - alpha*(d-x)^(alpha-1)) = 0. Thus x = d - x,
or x = d/2. The minimum is < 0 if g(d/2) < 0, i.e. a(2(d/2)^alpha - d^alpha) + c < 0, or a*d^alpha(2^(1-alpha) - 1) + c < 0,
which is satisfied for c < a*d^alpha(1-2^(1-alpha)), or d^alpha > c/(a(1-2^(1-alpha))), and the lemma follows. Note that this
inequality has a solution in d if and only if alpha > 1.
Lemma 2. If d > (c/(a(1-2^(1-alpha))))^(1/alpha) then the greatest power savings are obtained when the interval
SD is divided into n > 1 equal subintervals, where n is the nearest integer to d(a(alpha-1)/c)^(1/alpha). The
minimal power is then b*d + c*n*alpha/(alpha-1).
Proof. Let SD be divided into intervals of lengths x_1, x_2, ..., x_n such that d = x_1 + x_2 + ... + x_n. The
energy needed for transmissions using these intervals is (a*x_1^alpha + b*x_1 + c) + (a*x_2^alpha + b*x_2 + c) + ... +
(a*x_n^alpha + b*x_n + c) = a(x_1^alpha + x_2^alpha + ... + x_n^alpha) + b*d + c*n. For fixed x_i + x_j, the expression x_i^alpha +
x_j^alpha is minimal when x_i = x_j (for alpha > 1). Therefore the energy is minimal when x_1 = x_2 = ... = x_n = d/n,
and is equal to f(n) = c*n + a*d^alpha*n^(1-alpha) + b*d. This expression has the minimum when f'(n) = 0, or c + a(1-alpha)*n^(-alpha)*d^alpha = 0,
i.e. c = a(alpha-1)*n^(-alpha)*d^alpha, n^alpha = a(alpha-1)*d^alpha/c, n = d(a(alpha-1)/c)^(1/alpha) (n is rounded to the nearest integer).
Assuming that we can set additional nodes in arbitrary positions between the source and
destination, the following theorem gives power optimal packet transmissions.
Theorem 1. Let d be the distance between the source and the destination. The power needed for
direct transmission is u(d) = a*d^alpha + b*d + c, which is optimal if d <= (c/(a(1-2^(1-alpha))))^(1/alpha). Otherwise (that is,
when d > (c/(a(1-2^(1-alpha))))^(1/alpha)), n-1 equally spaced nodes can be selected for retransmissions, where
n = d(a(alpha-1)/c)^(1/alpha) (rounded to the nearest integer), producing a minimal power consumption of about
v(d) = b*d + c*n*alpha/(alpha-1).
Corollary 1. Let alpha = 2. The power needed for direct transmission is u(d) = a*d^2 + b*d + c, which is
optimal if d <= (2c/a)^(1/2). Otherwise (that is, when d > (2c/a)^(1/2)), n-1 equally spaced nodes can be
selected for retransmissions, where n = d(a/c)^(1/2) (rounded to the nearest integer), producing a minimal
power consumption of about v(d) = 2d(ac)^(1/2) + b*d.
Theorem 1 announces the possibility of converting the polynomial function in d (with exponent alpha)
for power consumption (in the case of direct transmission from sender to destination) into a linear function
in d by retransmitting the packet via some intermediate nodes that might be available.
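Theorem 1 can be illustrated by a short computation; the sketch below, in which alpha, a, b and c are taken as parameters, decides whether direct transmission is optimal and otherwise returns the number of equal subintervals and the resulting power.

def optimal_relaying(d, alpha, a, b, c):
    # Returns (number of subintervals n, total power) following Theorem 1.
    threshold = (c / (a * (1 - 2 ** (1 - alpha)))) ** (1 / alpha)
    if d <= threshold:
        return 1, a * d ** alpha + b * d + c                 # direct transmission
    n = max(2, round(d * (a * (alpha - 1) / c) ** (1 / alpha)))  # at least one intermediate node
    return n, n * (a * (d / n) ** alpha + b * (d / n) + c)

# Example for the HCB-model (alpha=2, a=1, b=0, c=2E with E=500):
print(optimal_relaying(200, 2, 1, 0, 1000))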
5. Power saving routing algorithms
If nodes have information about the position and activity of all other nodes, then an optimal power
saving algorithm, which minimizes the total energy per packet, can be obtained by applying
Dijkstra's single source shortest weighted path algorithm, where each edge has weight u(d) = a*d^alpha +
b*d + c, where d is the length of the edge. This will be referred to as the SP-power algorithm.
We shall now describe a corresponding localized routing algorithm. The source (or an
intermediate node) should select one of its neighbors to forward the packet toward the destination, with
the goal of reducing the total power needed for the packet transmission. Let A be a neighbor of B,
and let r = |AB|, d = |BD|, s = |AD|. The power needed for transmission from B to A is u(r) = a*r^alpha + b*r
+ c, while the power needed for the rest of the routing algorithm is not known. Assuming a uniformly
distributed network, we shall make a fair assumption that the power consumption for the rest of the
routing algorithm is equal to the optimal one (see Theorem 1). That is, the power needed for
transmitting the message from A to D is estimated to be v(s) = b*s + c*s*alpha/(alpha-1)*(a(alpha-1)/c)^(1/alpha).
For alpha = 2, v(s) = 2s(ac)^(1/2) + b*s. This is, of course, an unrealistic assumption. However, it is fair to all
nodes. A more realistic assumption might be to multiply the optimal power consumption by a factor
t, which is a constant that depends on the network.
The localized power efficient routing algorithm can be described as follows. Each node B
(source or intermediate node) will select one of its neighbors A which will minimize
p(B,A) = u(r) + v(s) = a*r^alpha + b*r + c + b*s + c*s*alpha/(alpha-1)*(a(alpha-1)/c)^(1/alpha). For alpha = 2 it becomes
p(B,A) = a*r^2 + b*r + c + 2s(ac)^(1/2) + b*s. If the destination D is a neighbor of B then compare the
expression with the corresponding one, u(d) = a*d^alpha + b*d + c, needed for direct transmission (s = 0 for
A = D, and D can be treated as any other neighbor).
reached, if possible. A generalized power efficient routing algorithm may attempt to minimize
p(B,A)=u(r)+tv(s), where t is a network parameter.
In the basic (experimental) version of the algorithms, the transmission stops if message is to be
returned to a neighbor it came from (otherwise, a detectable loop is created). The power-efficient
routing algorithm may be formalized as follows.
Repeat
Let A be neighbor of B that minimizes p(B,A)=u(r)+ tv(s);
Send message to A
until A=D (* destination reached *) or A=B (* delivery failed *)
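A sketch of this forwarding decision is given below; the coordinate-based distances, the default t = 1, and the restriction to neighbors closer to the destination (the 'power' variant used in Section 7) are assumptions of this illustration.

import math

def power_next_hop(B, neighbors, pos, D, alpha, a, b, c, t=1.0):
    # u(r): power for the first hop; v(s): estimate for the rest of the route (Theorem 1).
    def u(r):
        return a * r ** alpha + b * r + c
    def v(s):
        return b * s + c * s * alpha / (alpha - 1) * (a * (alpha - 1) / c) ** (1 / alpha)
    d = math.dist(pos[B], pos[D])
    # 'power' variant: only neighbors closer to D are considered; D itself (s = 0) qualifies.
    candidates = [A for A in neighbors if math.dist(pos[A], pos[D]) < d]
    if not candidates:
        return None                                   # delivery failed
    return min(candidates,
               key=lambda A: u(math.dist(pos[B], pos[A])) + t * v(math.dist(pos[A], pos[D])))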
Let us now consider the second metric proposed in [SWR], measuring the nodes' lifetime.
Recall that the cost of each node is equal to f(A) = 1/g(A), where g(A) denotes the remaining lifetime
(g(A) is normalized to be in the interval [0,1]). [SWR] proposed a shortest weighted path algorithm
based on this node cost. It is referred to as the SP-cost algorithm in experimental data in Table 2.
The algorithm uses the cost to select the path, but the actual power is charged to nodes.
The localized version of this algorithm, assuming constant power for each transmission, can be
designed as follows. The cost c(A) of a route from B to D via neighboring node A is the sum of the
cost f(A) =1/g(A) of node A and the estimated cost of route from A to D. The cost f(A) of each
neighbor A of node B currently holding the packet is known to B. What is the cost of other nodes on
the remaining path? We assume that this cost is proportional to the number of hops between A and
D. The number of hops, in turn, is proportional to the distance s=|AD|, and inversely proportional
to radius R. Thus the cost is ts/R, where factor t is to be investigated separately. Its best choice
might even be determined by experiments. We have considered the following choices for factor t:
i) t is a constant number, which may depend on network conditions,
ii) t= f(A) (that is, assuming that remaining nodes have equal cost as A itself),
iii) t= f'(A), where f'(A) is the average value of f(X) for A and all neighbors X of A,
iv) t=1/g'(A), where g'(A) is the average value of g(X) for A and all neighbors X of A.
Note that t=t(A) depends on A. The cost c(A) of a route from S to D via neighboring node A is
estimated to be c(A) = f(A) + ts/R, for the appropriate choice of t. We also suggest investigating the
product of the two contributing elements instead of their sum, that is, the cost definition c(A) = f(A)ts/R.
The localized cost efficient routing algorithm can be described as follows. If destination is one
of neighbors of node B currently holding the packet then the packet will be delivered to D.
Otherwise, B will select one of its neighbors A which will minimize c(A). The algorithm proceeds
until the destination is reached, if possible, or until a node selects the neighbor the message came
from as its best option to forward the message. The algorithm can be coded as follows.
Repeat
Let A be neighbor of B that minimizes c(A);
If D is neighbor of B
then send to D else send to A
until D is reached or A=B;
The versions of this cost routing algorithm that use choices ii) and iii) for t (t=f(A) and t=f'(A),
respectively), will be referred to as cost-ii and cost-iii algorithms in our experiments.
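The selection step can be sketched as follows; the estimate c(A) = f(A) + t*s/R with t = f(A) (cost-ii) or t = f'(A) (cost-iii) follows the definitions above, while the data structures are assumptions of the example.

import math

def cost_next_hop(B, neighbors, pos, D, g, R, nbr_of, variant="cost-iii"):
    # g: dict node -> remaining lifetime in (0, 1]; f(A) = 1/g(A); R = transmission radius.
    # nbr_of: dict node -> list of that node's neighbors (needed for cost-iii).
    def f(A):
        return 1.0 / g[A]
    def t(A):
        if variant == "cost-ii":
            return f(A)
        group = [A] + list(nbr_of[A])     # cost-iii: average of f over A and its neighbors
        return sum(f(X) for X in group) / len(group)
    if D in neighbors:
        return D
    def c_est(A):
        s = math.dist(pos[A], pos[D])
        return f(A) + t(A) * s / R
    return min(neighbors, key=c_est)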
We may incorporate both power and cost considerations into a single routing algorithm. A new
power-cost metric is first introduced here. What is the power-cost of sending a message from node
B to the neighboring node A? We propose two different ways to combine power and cost metrics into
a single power-cost metric, based on the product and sum of two metrics, respectively. If the
product is used, then the power-cost of sending a message from B to a neighbor A is equal to power-
cost(B,A) = f(A)u(r) (where |AB| = r). The sum, on the other hand, leads to a new metric power-
cost(B,A) = a*u(r) + b*f(A), for suitably selected values of a and b. For example, the sender node S may
fix a=f'(S) and b=u(r'), where r' is the average length of all edges going out of S. The values a and
b are (in this version) determined by S and used, without change, by other nodes B on the same
route. The corresponding shortest path algorithms can find the optimal power-cost by applying
single source shortest weighted path Dijkstra's algorithm (the node cost is transferred to the edge
leading to the node). The algorithm will be referred to as the SP-Power*Cost and SP-Power+Cost
algorithms, respectively, in Table 2.
The power-cost efficient routing algorithm may be described as follows. Let A be the neighbor
of B (the node currently holding the message) that minimizes pc(B,A) = power-cost(B,A) + v(s)t(A)
(where s = 0 for A = D, if D is a neighbor of B). The algorithm is named power-cost0 in Table 2 when
power-cost(B,A) = f(A)u(r), and power-cost1 when power-cost(B,A) = f'(S)u(r) + u(r')f(A). The packet
is delivered to A. Thus the packet is not necessarily delivered to D, when D is a neighbor of B. The
algorithm proceeds until the destination is reached, if possible, and may be coded as follows.
Repeat
Let A be neighbor of B that minimizes pc(B,A) = power-cost(B,A) + v(s)t(A);
Send message to A
until A=D (* destination reached *) or A=B (* delivery failed *);
The algorithm may be modified in several ways. The second term may be multiplied by a factor
that depends on network conditions. We tested also the version, called power-cost2, that minimizes
pc(B,A)=f(A)(u(r)+v(s)), and an algorithm, called power-costP, that switches selection criteria from
power-cost to power metric only whenever destination D is a neighbor of current node A.
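A sketch of the combined selection is shown below; the power-cost0 and power-cost1 first-hop metrics follow the definitions above, while the way the remaining route is estimated here (v(s) weighted by the neighbor's cost f(A)) is an assumption of this illustration rather than a definition taken from the text.

import math

def power_cost_next_hop(B, neighbors, pos, D, g, u, v, f_avg_S, u_avg_S, variant="power-cost0"):
    # g: remaining lifetimes, f(A) = 1/g(A); u(r) and v(s) as in the power algorithm.
    # f_avg_S = f'(S) and u_avg_S = u(r') are fixed once by the source S (power-cost1 variant).
    def f(A):
        return 1.0 / g[A]
    def first_hop(A):
        r = math.dist(pos[B], pos[A])
        if variant == "power-cost1":
            return f_avg_S * u(r) + u_avg_S * f(A)
        return f(A) * u(r)                      # power-cost0
    def pc(A):
        s = 0.0 if A == D else math.dist(pos[A], pos[D])
        return first_hop(A) + f(A) * v(s)       # assumed estimate for the rest of the route
    return min(neighbors, key=pc)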
6. Loop-free property
Theorem 2. The localized power efficient routing algorithm is loop-free.
Proof. Suppose that, on the contrary, there exists a loop in the algorithm. Let A_1, A_2, ..., A_n
be the nodes in the loop, so that A_1 sends the message to A_2, A_2 sends the message to A_3, ..., A_{n-1}
sends the message to A_n and A_n sends the message to A_1 (see Fig. 1). Let s_1, s_2, ..., s_n be the
distances of A_1, A_2, ..., A_n from D, respectively, and let |A_nA_1| = r_1, |A_1A_2| = r_2, |A_2A_3| = r_3, ...,
|A_{n-1}A_n| = r_n. Let u(r) = a*r^alpha + b*r + c and v(s) = c*s*alpha/(alpha-1)*(a(alpha-1)/c)^(1/alpha) + b*s.
According to the choice of neighbors, it follows that u(r_1) + v(s_1) < u(r_n) + v(s_{n-1}),
since the node A_n selects A_1, not A_{n-1}, to forward the message. Similarly u(r_2) + v(s_2) < u(r_1) + v(s_n),
since A_1 selects A_2 rather than A_n. Next, u(r_3) + v(s_3) < u(r_2) + v(s_1), ..., u(r_n) + v(s_n) < u(r_{n-1}) + v(s_{n-2}).
By summing left and right sides we obtain u(r_1) + ... + u(r_n) + v(s_1) + ... + v(s_n) <
u(r_1) + ... + u(r_n) + v(s_1) + ... + v(s_n), which is a contradiction since both sides contain
the same elements. Thus the algorithm is loop-free.
Figure 1. Power efficient routing algorithm is loop-free
In order to provide for loop-free method, we assume that (for this and other mentioned
methods below), in case of ties for the choice of neighbors, if one of choices is the previous node,
the algorithm will select that node (that is, it will stop or flood the message). Note that the above
proof may be applied (by replacing '+' with '*') to an algorithm that will minimize p(B,A) = u(r)*tv(s).
Theorem 3. Localized cost efficient algorithms are loop-free.
Proof. Note that the cost c(A) of sending a message from B to A is only a function of A (that
is, t = t(A)), and is independent of B. In the previous proof, assume u(r_i) = 0 for all nodes, and let
v(s_i) = c(A_i) for each i. The proof then becomes the same as in the previous theorem. The proof is
valid for both formulas c(A) = f(A) + ts/R and c(A) = f(A)ts/R. Note that the proof assumes that the cost
of each node is not updated (that is, communicated to the neighbors) while the routing algorithm is
in progress. It is possible to show that, on the other hand, if nodes inform their neighbors about new
cost after every transmitted message, a loop (e.g. triangle) can be formed.
Theorem 4. Localized power-cost efficient algorithms are loop-free for the metrics power-
cost(B,A) = a*u(r) + b*f(A) (where a and b are arbitrary constants), and pc(B,A) = power-cost(B,A) + v(s)t(A)
(where t(A) is determined by one of formulas i-iv).
Proof. The proof is again by contradiction, similar to the proofs of the previous theorems.
Suppose that there exists a loop A_1, A_2, ..., A_n in the algorithm (see Fig. 1). Let
pc_1, pc_2, ..., pc_n be the power-costs of sending the message to nodes A_1, A_2, ..., A_n, respectively,
from the previous node in the loop. According to the choice of neighbors in Fig. 1 it follows that
pc_1 < pc(A_n, A_{n-1}), since the node A_n selects A_1, not A_{n-1}, to forward the message. Similarly
pc_2 < pc(A_1, A_n), pc_3 < pc(A_2, A_1), ..., pc_n < pc(A_{n-1}, A_{n-2}). By summing left
and right sides we obtain pc_1 + pc_2 + ... + pc_n < pc(A_n, A_{n-1}) + pc(A_1, A_n) + ... + pc(A_{n-1}, A_{n-2}).
This inequality is equivalent to one in which each term a*u(r_i), b*f(A_i) and v(s_i)t(A_i) appears exactly once on both sides, which is a
contradiction since both sides contain the same elements. Thus the algorithm is loop-free. Note that
the proof also assumes that the cost of each node is not updated (that is, communicated to the
neighbors) while the routing algorithm is in progress. Note that this proof does not work for the
formula power-cost(B,A)=f(A)u(r), which does not mean that the corresponding power-cost routing
algorithm is not loop-free.
7. Performance evaluation of power efficient routing algorithm
The experiments are carried out using (static) random unit graphs. Each of n nodes is chosen by
selecting its x and y coordinates at random in the interval [0,m). In order to control the average node
degree k (that is, the average number of neighbors), we sort all n(n-1)/2 (potential) edges in the
network by their length, in increasing order. The radius R that corresponds to chosen value of k is
equal to the length of nk/2-th edge in the sorted order. Generated graphs which were disconnected
are ignored. We have fixed the number of nodes to n=100, and average node degree k to 10. We
have selected higher connectivity for our experiments in order to provide for better delivery rates
and hop counts and concentrate our study on power conserving effects.
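The graph generation can be sketched as follows (the function name and return values are assumptions of the example); the radius R is chosen as the length of the (nk/2)-th shortest potential edge so that the average degree is approximately k, and disconnected graphs would be discarded.

import math, random

def random_unit_graph(n, m, k):
    # n nodes placed uniformly at random in an m x m square; average degree approximately k.
    pos = {v: (random.uniform(0, m), random.uniform(0, m)) for v in range(n)}
    lengths = sorted(math.dist(pos[u], pos[v]) for u in range(n) for v in range(u + 1, n))
    R = lengths[n * k // 2 - 1]               # length of the (nk/2)-th shortest potential edge
    nbr = {v: [w for w in range(n) if w != v and math.dist(pos[v], pos[w]) <= R]
           for v in range(n)}
    return pos, nbr, R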
The choice of route for DIR (compass routing), MFR and GEDIR methods in [SL], and their
mutual comparison, did not depend on the size m of square containing all the points. However, in
case of power consumption, the actual distances greatly impact the behavior of algorithms. More
precisely, the path selection (and the energy for routing) in our power saving algorithm depends on
the actual size of the square. We compared all methods for squares of sizes m=10, 100, 200, 500,
1000, 2000, 5000 for both HCB- and RM-models. The results are averages over 20 graphs with 100
routing pairs in each chosen at random.
In our comparisons, the power consumption (cost, power-cost, respectively) in all compared
methods was measured by assigning the appropriate weights to each edge. Our comparison for the
category of power (only) consumption involved the following GPS based distributed algorithms:
NFP, random progress, MFR, DIR, GEDIR, NC, the proposed localized power efficient routing
algorithm (with t=1), and the benchmark shortest (weighted) path algorithm (SP).
We have introduced a new routing method, called NC (nearest closer), in which node A,
currently holding the message, forwards it to its nearest node among neighboring nodes which are
closer to destination D than A. This method is an alternative to the NFP method which was
experimentally observed to have very low success rate (under 15% in our case). The reason for low
success rate seems to be the existence of many acute triangles ABD (see Fig. 2) so that A and B are
closest to each other, and therefore selected by NFP method which then fails at such nodes.
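The NC rule itself is short; the sketch below (with assumed data structures) selects, among the neighbors closer to the destination, the one nearest to the current node.

import math

def nc_next_hop(A, neighbors, pos, D):
    d = math.dist(pos[A], pos[D])
    closer = [B for B in neighbors if math.dist(pos[B], pos[D]) < d]
    if not closer:
        return None                           # NC fails at this node
    return min(closer, key=lambda B: math.dist(pos[A], pos[B]))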
The proposed power efficient method, which will be referred as power1 method, was also
experimentally shown to have very low success rate for large m. The power efficient algorithm is
therefore modified to increase its success rate. Only neighbors that are closer to destination than the
current node are considered, and this variant will be called the power method. The success rates of
power and power1 methods are almost the same for m <= 200. While the success rate of power
method remains at 95% level, the success rate for power1 drops to 59%, 11%, 4% and 2% only for
remaining sizes of m (numbers refer to HCB-model, and are similar for the other model). Consider
a scenario in which power1 fails (see Fig. 3, where |AD| < |BD| < |CD|). Node A sends message to
closest neighbor B. Since A is very close to B but C is not, power formula applied at B selects A to
send message back, and a loop is created.
Figure 2. NFP method fails
Figure 3. Power1 frequently fails
We included 2-hop GEDIR, DIR, MFR and NC methods in our experiments. 2-hop NC method
is defined as follows. Each current node C finds the neighboring node A whose 1-hop nearest
(closer to destination D) neighbor B has the shortest distance (between A and B). If no such node
exists (i.e. none of the neighboring nodes of C has a forward neighbor), then take the neighbor node E
whose backward nearest neighbor F has smallest distance (between E and F).
The delivery rates for 1-GEDIR, 1-DIR, and 1-MFR methods in our experiments were about
97%, 1-NC had 95% success, 2-GEDIR (that is, 2-hop GEDIR) and 2-MFR about 99%, 2-DIR
about 91%, 2-NC and random methods about 98%, and power method 95% success rate (for both
HCB- and RM-models). While all other methods choose the same path independently of m and of the
power formula applied, the power method does not, and its almost constant and good delivery rate is
a very encouraging result. The hop counts for non-power based methods were 3.8, 4.2, 3.9, 8.0, 3.8,
3.9, 4.1, 5.2, and 6.4, respectively (in above order). Hop counts for power method were 3.8, 3.8,
3.8, 3.8, 6.3, 9.0 and 9.7 for RM-model, and 3.8, 3.8, 4.0, 6.6, 8.3, 9.1, 9.6 for HCB-model, in
respective order of m. Clearly, with increased energy consumption per distance, power method
reacted by choosing closer neighbors, resulting in higher hop counts.
Figure 4. GEDIR consumes less power than MFR
Let us show the average case superiority of GEDIR method over MFR method and superiority
of DIR routing over random progress method. Let A and B be the nodes selected by the GEDIR and
MFR methods, respectively, when packet is to be forwarded from node S (see Fig. 4). Suppose that
B is different from A (otherwise the energy consumption at that step is the same). Therefore
|AD|<|BD|. Node B cannot be selected within triangle SAA' where A' is the projection of node A on
direction SD, since B has more progress than A (here we assume, for simplicity, that A and B are on
the same side of line SD). However, the angle SAB is then obtuse, and |SB|>|SA|. From |SB| > |SA|
and |BD| > |AD|, it follows that the packet requires more energy if forwarded to B instead of A.
Suppose now that A and B are selected neighbors in case of DIR and random progress routing
algorithms (we shall use the same Fig. 4). Since the lengths |SA| or |SB| are not considered when
selecting the neighbors, on the average we may assume that |SA|=|SB|. However, the direction of A
is closer to the direction of the destination (that is, the angle ASD is smaller than the angle BSD) and
thus A is closer to D than B.
method/size    10     100    200    500     1000     2000     5000
SP-Power       3577   4356   6772   20256   62972    229455   1404710
Power          3619   4457   6951   21331   69187    261832   1647964
GEDIR          3619   4460   7076   24823   89120    344792   2152891
Random         5962   7099   10626  34382   121002   465574   2896988
Table 1. Power consumption of routing algorithms
Table 1 shows the average power consumption (rounded to the nearest integer) per routing task, over tasks that
were successful by all methods, which occurs in about 85% of cases. It is calculated as the ratio of
total power consumption (for each method) for these tasks over the total number of such deliveries.
The quadratic HCB-model formula is used (the results for the RM-model were similar).
The power consumption for GEDIR algorithm is smaller than the one for DIR routing method
for small values of square size m. The reason is that the smaller hop count is decisive when no
retransmission is desirable. However, for larger m, DIR routing performs better, since the greatest
advance is not necessarily best choice, and the closer direction, possibly with smaller advance, is
advantageous. The NC method is inferior to GEDIR or DIR for smaller values of m, because the
greatest possible advance is the better choice for neighbor than the nearest node closer to
destination. However, for larger values of m, NC outperforms significantly both, since it simulates
retransmissions in the best possible way. 2-hop methods failed to produce power savings over
corresponding 1-hop methods, and were eliminated in our further investigations.
As expected, the proposed distributed power efficient routing algorithm outperforms all known
GPS based algorithms for all ranges of m. For small m, it is minor improvement over GEDIR or
DIR algorithms. However, for large m, the difference becomes very significant, since nearest rather
than furthest progress neighbors are preferred. For large m, the only competitor is NC algorithm.
The overhead (percentage of additional energy per routing task) of power efficient algorithm
with respect to optimal SP-power one is 1.2%, 2.3%, 2.6%, 5.3%, 9.9%, 14.1% and 17.3% for the
considered values of m, respectively. Therefore, localized power efficient routing algorithm, when
successful, closely matches the performance of non-localized shortest-power path algorithm. We
have also experimented with different values of the parameter t; a trade-off between success rates and
power savings is obtained. Thus the best choice of t remains an issue for further investigation.
8. Performance evaluation of cost and power-cost efficient routing algorithms
The experiments that evaluate cost and power-cost routing algorithms are designed as follows.
Random unit connected graphs are generated as in the previous section. An iteration is a routing
task specified by the random choice of source and destination nodes. A power failure occurs if a
node has insufficient remaining power to send a message according to given method. Iterations are
run until the first power failure at a node occurs (at which point the corresponding method 'dies').
Each node is initially assigned an energy level at random in the interval [minpow, maxpow], where
parameters depend on m. After sending a message from node A to node B, the energy that remained
at A (B) is reduced by the power needed to transmit (receive) the message, respectively. The
experiment is performed on 20 graphs for each method, for each of HCB- and RM-model formulas.
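The experiment loop can be sketched as follows; the routing callback, the energy bookkeeping and the failure test are written under the assumptions stated above (reception cost charged to the receiver, iterations counted until the first node cannot afford a transmission).

import math, random

def lifetime_experiment(nodes, pos, route_fn, u, rx, minpow, maxpow):
    # route_fn(src, dst, energy) returns the routed path as a node list, or None on failure;
    # u(r) is the power to transmit over distance r, rx the power charged to a receiver.
    energy = {v: random.uniform(minpow, maxpow) for v in nodes}
    iterations = 0
    while True:
        src, dst = random.sample(list(nodes), 2)
        path = route_fn(src, dst, energy)
        if path is not None:
            for sender, receiver in zip(path, path[1:]):
                r = math.dist(pos[sender], pos[receiver])
                if energy[sender] < u(r):
                    return iterations            # first power failure: the method 'dies'
                energy[sender] -= u(r)
                energy[receiver] -= rx
        iterations += 1                          # failed deliveries still count as iterations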
The success rates for unrestricted versions of cost and power-cost algorithms (where all
neighbors were considered) were again low in our experiments. For example, the success rate of
cost-iii method drops from 64% to 55% with increasing m, while power*cost method drops from
77% to 14% (data for other variants are similar; HCB-model is again used, while the other model
had very similar data). Consequently, these methods were deemed not viable. The success rate for
restricted versions (only closer neighbors considered) was in the range 92%-95% for all cost and
power-cost methods discussed here, both models, and all sizes m. The number of iterations before
each method dies, for the HCB-model, is given in Table 2 (data refer to restricted versions). The RM-
model gave similar results. The cost and power-cost methods are defined in section 5.
method/size      10    100    200    500    1000   2000   5000
SP-Power         342   865    1710   983    1114   796    482
SP-Cost          674   1703   3540   1686   1590   1066   646
SP-Power*Cost    674   1697
SP-Power+Cost    647   1668   3495   1725   1688   1124   682
Power            379   954    1843   1009   1162   789    469
Power*Cost       662   1609   3118   1513   1528   1056   617
Power+Cost       660   1611   3180   1664   1757   1179   712
Random           201   481    889    546    512    312    202
Table 2. Number of iterations before one node in each method dies
The intervals [minpow, maxpow] were set as follows: [80K, 90K], [200K, 300K], [500K,
1M], [750K, 1.5M], [3M, 4M], [8M, 10M], [30M, 40M], for given respective sizes of m, where
K=1000 and M=1000000. Our experiments confirmed the expectations on producing power savings
in the network and/or extending nodes lifetime. Both cost methods and all four power-cost methods
gave very close trial numbers, and thus it is not possible to choose the best method based on trial
number alone. However, all proposed localized cost and power-cost methods performed equally
well as the corresponding non-localized shortest path cost and power-cost algorithms (the number
of trials is sometimes even higher, due to occasional delivery failures which save power). It is also
clear that cost and power-cost routing algorithms last longer than the power algorithm.
method/size      10      100      200      500      1000      2000      5000
SP-Cost          44381   133245   395592   618640   1857188   4819903   19238265
SP-Power*Cost    44437   133591   396031   642748   2067025   5686092   23187052
SP-Power+Cost    46338   136490   406887   646583   1972185   5252813   21081420
Power*Cost       27434   126066   416889   712033   2286840   6030614   24419832
Power+Cost       27520   126201   409208   666907   2091211   5658144   22622947
Table 3. Average remaining power level at each node
Table 3 shows the average remaining power at each node after the network dies, for the most
competitive methods. Cost methods have more remaining power only for the smallest size m=10,
when the power formula reduces to the constant function. For larger sizes of m, two better power-
cost formulas leave about 15% more power at nodes than the cost method.
SP-cost, cost-iii and cost-ii methods have hop counts approximately 4.0, 4.5, and 4.9 for HCB-
model and all values of m. Four power-cost methods have similar hop counts, 5.8, 4.7, 5.0, 6.7, 8.4,
9.1 and 9.6, respectively, for sizes of m. Two SP-power-cost methods do not have similar hop
counts. SP-Power*Cost method has hop counts 4.0, 4.1, 4.3, 6.3, 7.8, 8.3, 8.7, while SP-
Power+Cost method has hop counts between 4.0 and 4.6.
Conclusion
This paper described several localized routing algorithms that try to minimize the total energy
per packet and/or maximize the lifetime of each node. The proposed routing algorithms are all demand-based and
can be augmented with some of the proactive or reactive methods reported in literature, to produce
the actual protocol. These methods use control messages to update positions of all nodes to
maintain efficiency of routing algorithms. However, these control messages also consume power,
and the best trade-off for moving nodes is to be established. Therefore further research is needed to
select the best protocols. Our primary interest in this paper was to examine power consumption in
case of static networks and provide basis for further study. Our method was tested only on
networks with high connectivity, and their performance on lower degree networks remains to be
investigated. Based on experience with basic methods like GEDIR [SL], improvements in the
power routing scheme to increase delivery rates, or even to guarantee delivery [BMSU, SD], are
necessary before experiments with moving nodes are justified. Power efficient methods tend to
select well positioned neighboring nodes in forwarding the message, while cost efficient methods
favor nodes with more remaining power. The node movement, in this respect, will certainly assist the
power aspect of the formula since the movement will cause the change in relative node positioning.
This will further emphasize the advantage of power-cost over power only or cost only methods.
The formulas for power, cost, and power-cost methods may also need some improvements. Our
experiments do not give an ultimate answer on even the selection of approach that would give the
most prolonged life to each node in the network. We will investigate this question further in our
future work [SD], which will consider a number of metrics including the generalized one f(A)^a * u(r)^b,
which is similar to one proposed in [CT2].
--R
A distance routing effect algorithm for mobility (DREAM)
A performance comparison of multi-hop wireless ad hoc network routing protocols
Routing with guaranteed delivery in ad hoc wireless networks
Distributed quality-of-service routing in ad hoc networks
Routing for maximum system lifetime in wireless ad-hoc networks
Energy conserving routing in wireless ad-hoc networks
System capacity
Scalable coordination in sensor networks
Routing and addressing problems in large metropolitan-scale internetworks
Adaptive protocols for information dissemination in wireless sensor networks
Transmission range control in multihop packet radio networks
http://www.
Mobile networking for 'smart dust'
Compass routing on geometric networks
QoS routing in ad hoc wireless networks
Mobile ad hoc networking and the IETF
The spatial capacity of a slotted ALOHA multihop packet radio network with capture
Minimum energy mobile wireless networks
A survey of routing techniques for mobile communication networks
A review of current routing protocols for ad hoc mobile wireless networks
Location updates for efficient routing in wireless networks
Power aware routing algorithms with guaranteed delivery in wireless networks.
GEDIR: Loop-free location based routing in wireless networks
Power aware distributed routing in ad hoc wireless networks
A routing strategy and quorum based location update scheme for ad hoc wireless networks
Optimal transmission ranges for randomly distributed packet radio terminals
--TR
--CTR
Hüseyin Özgür Tan , İbrahim Körpeoğlu, Power efficient data gathering and aggregation in wireless sensor networks, ACM SIGMOD Record, v.32 n.4, December
Rabi N. Mahapatra , Wei Zhao, An Energy-Efficient Slack Distribution Technique for Multimode Distributed Real-Time Embedded Systems, IEEE Transactions on Parallel and Distributed Systems, v.16 n.7, p.650-662, July 2005
S. Jayashree , B. S. Manoj , C. Siva Ram Murthy, On using battery state for medium access control in ad hoc wireless networks, Proceedings of the 10th annual international conference on Mobile computing and networking, September 26-October 01, 2004, Philadelphia, PA, USA
Seungjoon Lee , Bobby Bhattacharjee , Suman Banerjee, Efficient geographic routing in multihop wireless networks, Proceedings of the 6th ACM international symposium on Mobile ad hoc networking and computing, May 25-27, 2005, Urbana-Champaign, IL, USA
Israat Tanzeena Haque , Chadi Assi , J. William Atwood, Randomized energy aware routing algorithms in mobile ad hoc networks, Proceedings of the 8th ACM international symposium on Modeling, analysis and simulation of wireless and mobile systems, October 10-13, 2005, Montréal, Quebec, Canada
Long Gan , Jiming Liu , Xiaolong Jin, Agent-Based, Energy Efficient Routing in Sensor Networks, Proceedings of the Third International Joint Conference on Autonomous Agents and Multiagent Systems, p.472-479, July 19-23, 2004, New York, New York
Xiang-Yang Li , Kousha Moaveninejad , Ophir Frieder, Regional gossip routing for wireless ad hoc networks, Mobile Networks and Applications, v.10 n.1-2, p.61-77, February 2005
Susanta Datta , Ivan Stojmenovic , Jie Wu, Internal Node and Shortcut Based Routing with Guaranteed Delivery in Wireless Networks, Cluster Computing, v.5 n.2, p.169-178, April 2002
Yu-Chee Tseng , Ting-Yu Lin, Power-conservative designs in ad hoc wireless networks, The handbook of ad hoc wireless networks, CRC Press, Inc., Boca Raton, FL,
Qun Li , Daniela Rus, Communication in disconnected ad hoc networks using message relay, Journal of Parallel and Distributed Computing, v.63 n.1, p.75-86, January
Himanshu Raj , Karsten Schwan , Ripal Nathuji, M-ECho: a middleware for morphable data-streaming in pervasive systems, Proceedings of the 2005 workshop on End-to-end, sense-and-respond systems, applications and services, June 05-05, 2005, Seattle, Washington
Johnson Kuruvila , Amiya Nayak , Ivan Stojmenovic, Greedy localized routing for maximizing probability of delivery in wireless ad hoc networks with a realistic physical layer, Journal of Parallel and Distributed Computing, v.66 n.4, p.499-506, April 2006
David Kiyoshi Goldenberg , Jie Lin , A. Stephen Morse , Brad E. Rosen , Y. Richard Yang, Towards mobility as a network control primitive, Proceedings of the 5th ACM international symposium on Mobile ad hoc networking and computing, May 24-26, 2004, Roppongi Hills, Tokyo, Japan
Chao Gui , Prasant Mohapatra, SHORT: self-healing and optimizing routing techniques for mobile ad hoc networks, Proceedings of the 4th ACM international symposium on Mobile ad hoc networking & computing, June 01-03, 2003, Annapolis, Maryland, USA
Hannes Frey , Ivan Stojmenovic, On delivery guarantees of face and combined greedy-face routing in ad hoc and sensor networks, Proceedings of the 12th annual international conference on Mobile computing and networking, September 23-29, 2006, Los Angeles, CA, USA
Joongseok Park , Sartaj Sahni, Maximum Lifetime Broadcasting in Wireless Networks, IEEE Transactions on Computers, v.54 n.9, p.1081-1090, September 2005
Silvia Giordano , Ivan Stojmenovic, Position-based ad hoc routes in ad hoc networks, The handbook of ad hoc wireless networks, CRC Press, Inc., Boca Raton, FL,
Yuan Xue , Yi Cui , Klara Nahrstedt, Maximizing lifetime for data aggregation in wireless sensor networks, Mobile Networks and Applications, v.10 n.6, p.853-864, December 2005
Xu Lin , Ivan Stojmenovic, Location-based localized alternate, disjoint and multi-path routing algorithms for wireless networks, Journal of Parallel and Distributed Computing, v.63 n.1, p.22-32, January
Hee Yong Youn , Chansu Yu , Ben Lee, Routing algorithms for balanced energy consumption in ad hoc networks, The handbook of ad hoc wireless networks, CRC Press, Inc., Boca Raton, FL,
Ivan Stojmenovic , Mahtab Seddigh , Jovisa Zunic, Dominating Sets and Neighbor Elimination-Based Broadcasting Algorithms in Wireless Networks, IEEE Transactions on Parallel and Distributed Systems, v.13 n.1, p.14-25, January 2002
Y. Thomas Hou , Yi Shi , Hanif D. Sherali, On node lifetime problem for energy-constrained wireless sensor networks, Mobile Networks and Applications, v.10 n.6, p.865-878, December 2005
Chao Gui , Prasant Mohapatra, A framework for self-healing and optimizing routing techniques for mobile ad hoc networks, Wireless Networks, v.14 n.1, p.29-46, January 2008
Jae-Hwan Chang , Leandros Tassiulas, Maximum lifetime routing in wireless sensor networks, IEEE/ACM Transactions on Networking (TON), v.12 n.4, p.609-619, August 2004
Marcel Busse , Thomas Haenselmann , Wolfgang Effelsberg, A comparison of lifetime-efficient forwarding strategies for wireless sensor networks, Proceedings of the 3rd ACM international workshop on Performance evaluation of wireless ad hoc, sensor and ubiquitous networks, October 06-06, 2006, Terromolinos, Spain
I. Kadayif , M. Kandemir , N. Vijaykrishnan , M. J. Irwin, An integer linear programming-based tool for wireless sensor networks, Journal of Parallel and Distributed Computing, v.65 n.3, p.247-260, March 2005
Shibo Wu , K. Selçuk Candan, Power-aware single- and multipath geographic routing in sensor networks, Ad Hoc Networks, v.5 n.7, p.974-997, September, 2007
629472 | Scalable Stability Detection Using Logical Hypercube. | AbstractThis paper proposes to use a logical hypercube structure for detecting message stability in distributed systems. In particular, a stability detection protocol that uses such a superimposed logical structure is presented, and its scalability is being compared with other known stability detection protocols. The main benefits of the logical hypercube approach are scalability, fault-tolerance, and refraining from overloading a single node or link in the system. These benefits become evident both by an analytical comparison and by simulations. Another important feature of the logical hypercube approach is that the performance of the protocol is in general not sensitive to the topology of the underlying physical network. | Introduction
Reliable multicast has been recognized as a key feature in many distributed systems, as it
allows reliable dissemination of the same message to a large number of recipients. Conse-
quently, reliable multicast is supported by many middlewares such as group communication
(Isis [5], Horus [35], Transis [10], Ensemble [1], Relacs [3], Phoenix [23], and Totem [26], to
name a few), protocols like RMTP [28] and SRM [12], and in the near future standards like
CORBA [27].
Reliable multicast typically involves storing copies of each message either by several
dedicated servers, or as is typically done in group communication toolkits, by all nodes in
the system. In order to limit the size of buffers, systems and middlewares supporting reliable
multicast employ a stability detection protocol. That is, such systems and middlewares must
detect when a message has been received by all of its recipients, or in other words has become
stable, at which point it can be discarded.
Stability detection protocols must balance the tradeoff between how fast they can detect
that a message is stable once it has become stable, and the overhead imposed by the protocol.
On one hand, the faster the protocol can detect stability, the smaller the buffers need to
be. On the other hand, if the stability protocol generates too many messages, the overhead
imposed on the system will be prohibitively high. Hence, the performance and scalability of
the stability detection protocol affects the overall scalability of reliable multicast.
This paper proposes to structure stability detection protocols by superimposing a logical
hypercube [17, 21] on the system. That is, in our protocol, messages generated by the
stability detection protocol only travel along logical hypercube connections, regardless of the
underlying physical network topology. We claim that this logical hypercube approach has
several appealing properties (n below refers to the number of nodes in the system):
Scalability: Each node needs to communicate with only log(n) nodes.
Performance: The logical hypercube structure guarantees that the number of hops the
stability information need to travel is at most log(n).
Fault-Tolerance: Hypercubes offer log(n) node distinct paths between every two nodes,
therefore it can sustain up to log(n) failures.
Regularity: Hypercubes have a very regular structure, and in our protocol every node plays
exactly the same role. Thus, no node is more loaded than others. Also, code regularity
tends to decrease the potential for software bugs in protocol implementation.
In this paper, we explore the performance and scalability of our proposed protocol, and compare
it with other known stability detection protocols, namely, a fully distributed protocol, a
coordinator based protocol, and a tree-based protocol [13, 15]. The comparison is done both
analytically and by simulations. Our results show that the logical hypercube based protocol
compares favorably with other protocols we have investigated, and confirm our assumptions
about the use of logical hypercubes, as mentioned above.
During our simulations we have discovered another interesting property, which is shared
by both logical hypercube based protocols and tree based protocols: Our measurements are
carried over randomly generated network topologies, and no attempt is made to match the
underlying physical topology to the logical flow of the protocol. Yet, both the tree based
protocol and the hypercube based protocol appear to be insensitive to the network topology,
giving consistent results regardless of the topology. We believe that this is also an important
aspect of hypercube based protocols (and tree based protocols), since in practical distributed
systems, the underlying network topology is rarely known, and might change as either the
system or the network changes or both of them evolve.
1.1 Related Work
Many group communication toolkits, for example, ISIS [5], Horus [35], and Ensemble [16],
employ a fully distributed protocol, along the lines of the FullDist protocol we present in
Section 2. As we discuss later in Section 3, this protocol is not very scalable.
Guo et al. investigated the scalability of a variety of stability detection protocols in [13, 14,
15]. These include, for example, the fully distributed protocol FullDist, the coordinator based
protocol we refer to as Coord, and a tree based protocol to which we refer in this work as
S Coord, all discussed in more detail in Section 2. Guo's work was also done primarily using
simulations. However, in [15] it is assumed that the physical topology of the network matches
the logical structure of the protocol, while in both [13] and in our work, we do not make
this assumption. This difference is important, since many distributed applications do not
control the underlying network topology, nor have access to the routers. Thus, investigating
the behavior of the protocol when there is no correspondence to the actual network topology
is of great interest.
Previous works on both unreliable and reliable multicast have suggested the use of a
logical ring as a form of improving performance when running on a shared bus communication
medium. These works include the Totem project [26] and the work of Cristian and
Mishra [9]. 1 Rings are useful in avoiding collisions, and offer moderate scalability since each
node only communicates with two additional nodes. However, the scalability of rings is
limited too, since information must traverse the entire ring in order to disseminate from one
node to every other node [13].
Hypercubes were originally proposed as an efficient interconnect for massively parallel
processors (MPP) [17, 21]. A great body of research has been done in solving parallel
problems on hypercubes, as described in [17, 21]. In particular, much work has been done in
the area of routing, one-to-all and all-to-all communication and gossiping in hypercubes [4,
7, 8, 11, 19, 31].
The HyperCast protocol maintains a logical hypercube structure among group mem-
bers, such that heartbeats and application messages are sent along the arcs of a logical
hypercube [22]. The HyperCast work was carried independently and concurrently to the
conference version of this paper, and did not address stability issues. Also, with the Hy-
perCast protocol, some nodes in an incomplete hypercube have smaller degree than others,
which reduces its fault-tolerance. For example, when the hypercube has 2^n + 1 nodes, one of
the nodes has only a single neighbor according to HyperCast. In contrast, our construction
guarantees that even for incomplete hypercubes, the degree of each node and the number of
node independent paths between each pair of nodes is roughly log(n).
(In fact, a logical ring is used for communication over a shared access medium in the IEEE 802.4 standard [33], known also as token bus. However, IEEE 802.4 does not guarantee reliable multicast, and the use of a logical ring is done to improve the throughput by avoiding collisions.)
Recently there have been several bodies of work on generating an approximation of hypercube topologies in a fully distributed manner, and providing efficient routing and lookup
services in these overlay networks [29, 32, 36]. These systems can be used, for example, as
an infrastructure for large scale publish/subscribe systems.
Stability Detection Protocols
As discussed in Section 1, stability detection protocols aim to detect when messages become
stable, that is, they have been received by all of their intended recipients, so their copies can
be discarded. In this section we describe three existing stability detection protocols, to which
we later compare our logical hypercube based protocol in terms of performance, scalability,
and fault tolerance. In all cases we assume a fixed set of n nodes, known in advance, and
numbered from 0 to n − 1. We also assume that the communication channels preserve FIFO
ordering, and that there is an underlying mechanism guaranteeing reliable point-to-point
message delivery. 2
In order to present the stability detection protocols, we introduce the following notation:
We denote by ArrayMin the element-wise array minimum of n given arrays. That is, given n
arrays R_1, ..., R_n of length k, R = ArrayMin(R_1, ..., R_n) implies that R[j] = min over 1 ≤ i ≤ n of R_i[j], for each j, 1 ≤ j ≤ k.
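As an illustration only (the helper name array_min is ours, not the paper's), a minimal Python sketch of ArrayMin:

def array_min(arrays):
    # Element-wise minimum of n equal-length arrays.
    return [min(column) for column in zip(*arrays)]

# Example: array_min([[3, 5, 2], [4, 1, 7]]) returns [3, 1, 2].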
All protocols proceed in rounds, executed sequentially. In each round of the protocol, the
nodes attempt to establish the stability of messages that were received prior to that round
(although if some messages become stable during a round, they might be detected by that
round as well). Once a round ends, the next round does not start until Δ time units have
passed; Δ is some integer which can be adjusted by the system administrator. The optimal
value of Δ depends on the message sending rate and buffer size of each node, as the goal of a
stability detection protocol is to release stable messages from buffers before buffer overflow
happens [13]. As will become evident shortly, we are interested in the latency of each round.
We are now ready to present the protocols.
2 Note that reliable point-to-point message delivery does not guarantee reliable multicast, since it allows
a situation in which a node fails and some of its messages were delivered by some processes, but not by all
of them.
2.1 Coord Protocol
Coord is a coordinator based protocol. That is, each node i maintains an n-element array
R i whose j-th entry R i [j] is the sequence number of the last message received by node
i from node j. One of the nodes serves as the coordinator. The coordinator multicasts a
start message. Each node that receives the start message replies with a point-to-point ack
message to the coordinator. The ack message of node i contains array R i . After receiving
ack messages from all nodes, the coordinator constructs the minimum array S using the
function ArrayMin described before. Following this, array S contains the sequence number
for the last stable message sent from each node. The coordinator then multicasts an info
message containing the array S. The pseudo code is given in Figure 1. Note that in the
pseudo code, Line 1 and Line 8 start a protocol round at the coordinator and non-coordinator
nodes, respectively.
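For concreteness, a minimal Python sketch of one coordinator round; multicast and recv_ack_from_all are hypothetical communication primitives standing in for the reliable point-to-point layer assumed above, not part of the protocol's specification:

def coordinator_round(R_coord, multicast, recv_ack_from_all):
    multicast("start")                         # ask every node for its R_i array
    acks = recv_ack_from_all()                 # blocking: one R_i array per node
    # Element-wise minimum over all arrays, including the coordinator's own.
    S = [min(col) for col in zip(R_coord, *acks)]
    multicast(("info", S))                     # S[k] is the last stable message from node k
    return S                                   # messages from node k with sequence number <= S[k] are stable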
2.2 FullDist Protocol
FullDist is a fully distributed protocol. That is, each node i keeps a stability matrix of size
n × n in which M_i[k, j] is the sequence number of the last message i knows about that was
received by node k from node j. M_i[i, j] is the sequence number of the last message received
by node i from node j. The minimum of the j-th column across the nodes represents the
sequence number of the last message sent by node j and has been received by every node.
At the beginning of each round, the first node 3 multicasts an info message, which contains
the first row of its matrix M 0 . Each node i that receives the info message replies with a
multicast of an info message. The info message contains row i of its matrix M i . Every
node i replaces the k-th row of its matrix M i with the row it received in the info message
from node k. The pseudo code is presented in Figure 2. Note that in the pseudo code, Line
1 and Line 3 start a protocol round at node 0 and other nodes, respectively.
3 There is no significance in the choice of the first node. The first node that starts that protocol can be
chosen in any deterministic way, for example, according to node ids.
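As a small illustration of the stability test in FullDist (a sketch with hypothetical names, assuming the matrix is kept as a list of rows):

def fulldist_stable(M):
    # M[k][j] is the last message from node j known to have been received by node k.
    # The column-wise minimum gives, per sender j, the sequence number of its last
    # message that every node has received, i.e. its last stable message.
    n = len(M)
    return [min(M[k][j] for k in range(n)) for j in range(n)]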
each node i maintains the following arrays:
R_i - sequence number array
S_i - stability array
ArrayMin - element-wise minimum of the input arrays
Initialization of node i
At the coordinator node j:
1: multicast(start);
2: wait until receive(ack, R_i) from all nodes i;
3: S_j := ArrayMin(R_0, ..., R_{n-1});
4: multicast(info, S_j);
5: label all messages received from every node k with sequence number P ≤ S_j[k] as stable;
6: wait(Δ);
7: goto Step 1;
At every non-coordinator node i:
8: wait until receive(start) from coordinator;
9: send(ack, R_i) to coordinator;
10: wait until receive(info, S) from coordinator;
11: S_i := S;
12: label all messages received from every node k with sequence number P ≤ S_i[k] as stable;
13: goto Step 8;
Figure 1: Coord protocol.
min(R) - the minimum value in array R
row(M_i, j) - the j-th row of matrix M_i
col(M_i, j) - the j-th column of matrix M_i
each node i maintains the following:
M_i - sequence number matrix
receive_from_i - the number of info messages node i has received in the current round
Initialization of node i
receive_from_i := 0;
At every node i:
1: wait(Δ);
At node 0:
2: multicast(info, row(M_0, 0));
At every node k > 0:
3: upon receiving (info, R) from node 0 do
4: row(M_k, 0) := R;
5: receive_from_k := receive_from_k + 1;
6: multicast(info, row(M_k, k));
7: done;
At every node k:
8: upon receiving (info, R) from node i do
9: row(M_k, i) := R;
10: receive_from_k := receive_from_k + 1;
11: if (receive_from_k = n - 1) then
12: label all messages received from every node j with sequence number P ≤ min(col(M_k, j)) as stable;
13: receive_from_k := 0;
14: goto Step 1;
15: endif;
Figure 2: FullDist protocol.
2.3 S Coord Protocol
Several tree-structured protocols were introduced in [15]. In this work we take S Coord as
representative of tree structured protocols, since it appeared to perform the best in [15]. In
S Coord, a logical tree is superimposed on the network. Each node i maintains an n-element
array R i whose j-th entry R i [j] is the sequence number of the last message received by node
i from node j. The root starts the protocol by multicasting a start message. The leaves
send ack messages to their parents, containing array R i . Each node i that receives an ack
message from one of its children, calculates the minimum of the array it has received and
its own array R i , and stores the result in array M i . After receiving ack messages from all
its children in the tree, internal node i sends M i to its parent. The root stores M i in array
multicasts an info message containing array S i . Each node i that receives an info
message containing array S, sets S i to S. Array S i contains the sequence number of the last
stable message sent from each node. See Figure 3 for pseudo code of the protocol. Note that
in the pseudo code, Line 1 and Line 2 start a protocol round.
3 Logical Hypercube Based Stability Detection
3.1 Hypercubes
An m-cube is an undirected graph consisting of 2^m vertices, labeled from 0 to 2^m − 1;
there is an edge between any two vertices if and only if the binary representation of their
labels differs in exactly one bit position. More precisely, let H_m denote an m-dimensional hypercube,
which consists of 2^m nodes. Each node is labeled by an m-bit string X_{m-1} ... X_1 X_0, where bit X_k
corresponds to dimension k. Two nodes p with label X_{m-1} ... X_0 and q with
label Y_{m-1} ... Y_0 are connected if and only if for some index j, X_j ≠ Y_j and X_k = Y_k for every k ≠ j.
An example of a 4-dimensional hypercube (H_4) is given in Figure 4.
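A one-line Python sketch of hypercube adjacency (node labels as integers; a hypothetical helper for illustration):

def hypercube_neighbors(p, m):
    # The m neighbors of node p in an m-dimensional hypercube are obtained by
    # flipping each of the m bits of p's label in turn.
    return [p ^ (1 << k) for k in range(m)]

# Example: hypercube_neighbors(0b0101, 4) -> [0b0100, 0b0111, 0b0001, 0b1101]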
An m-cube can be constructed recursively in the following way:
1. A 1-cube is simply 2 nodes connected by an edge. As a convention, we label the nodes
0 and 1.
Protocol for node i
Notation:
children_i - the indexes of node i's children in the tree
num_children_i - the number of children node i has in the tree
parent_i - the id of node i's parent in the tree
ArrayMin - element-wise minimum of the input arrays
each node i maintains the following variables:
R_i - sequence number array
S_i - stability array
M_i - an array that holds the minimum so far
receive_from_i - the number of ack messages node i has received in the current round
Initialization
receive_from_i := 0; M_i := R_i;
At the root:
1: multicast(start);
At every leaf node i:
2: upon receiving (start) from root do
3: send(ack, R_i) to parent_i;
4: done;
At internal node i or root:
5: upon receiving (ack, R) from j do
6: M_i := ArrayMin(M_i, R);
7: receive_from_i := receive_from_i + 1;
8: if receive_from_i = num_children_i then
9: if i is not the root then
10: send(ack, M_i) to parent_i;
11: else
12: S_i := M_i;
13: multicast(info, S_i);
14: label all messages received from every node k with sequence number P ≤ S_i[k] as stable;
15: receive_from_i := 0;
16: M_i := R_i;
17: goto Step 1;
18: endif;
19: endif;
20: done;
At every non-root node i:
21: upon receiving (info, S) do
22: S_i := S;
23: label all messages received from every node k with sequence number P ≤ S_i[k] as stable;
24: receive_from_i := 0;
25: goto Step 2;
Figure 3: S Coord protocol.
Figure
4: An example of a 4-dimensional hypercube (H 4 ).
2. An m-cube is made up of two (m − 1)-cubes A and B. The labels of nodes in A are preceded
by 0, and the labels of nodes in B are preceded by 1. We then add an edge between each node
p in A and the node in B whose label differs from p's only in the leftmost bit.
Hypercube is a powerful interconnection topology due to its many attractive features, as
pointed out in [17, 21, 30]. These features include its regularity, a small diameter (log(n)),
small fan-out/fan-in degree (log(n)), and having multiple (log(n)) node-disjoint paths between
every two nodes.
3.2 The CubeFullDist Protocol
CubeFullDist is a fully distributed protocol, in the sense that every node periodically multicasts
its information about message stability to its logical neighbors (in the hypercube).
CubeFullDist employs a gossip style mechanism (similar to [14, 20]) to disseminate stability
information. That is, in each round of the protocol, each node communicates with its logical
neighbors only, until it learns what messages were received by each node at the beginning of
the round. In order to save messages, we divide each round into multiple iterations. In each
iteration r, each node sends its stability information to all its logical neighbors, and waits
for a stability message from each one of them. At the end of the iteration, each node checks
whether it has learned what messages were received by every other node at the beginning of
the round. If it has, then the node sends its current stability information to all its neighbors
and proceed to the next round. Otherwise, the node loops to the next iteration.
A more precise pseudo code for CubeFullDist is given in Figure 5. Each node i maintains
the following variables:
ffl A sequence number array R i whose j-th element R i [j] is the sequence number of the
last FIFO message received by node i from node j.
ffl A stability array S i corresponding to i's stability information at the end of each round.
ffl A bitmap array G i recording the nodes from which i has learned about their stability
information during this round.
ffl An array M i containing the minimum sequence numbers heard so far in this protocol
round.
ffl An integer r i , which holds iteration number, so redundant messages belonging to previous
iterations can be discarded.
A protocol round starts with Step 1. In Step 1, M i is initialized to R i . Node i multicasts
to its hypercube neighbors a stability message containing r i , G i , and M i . In Step 2, node
receives messages from all its neighbors. Upon receiving a message with bitmap G and
sequence number array M , node i sets its bitmap array G i to be the bit-wise or of arrays
G and G i , and sets M i to be ArrayMin of M i and M . If G i contains all 1's, this indicates
that node i has heard from everyone in this round, and thus from i's point of view the round
is over. In this case, node i multicasts to its neighbors the last stability message in the
current round, and starts a new round. Otherwise, if the round is not finished yet but i
received stability messages from all its neighbors in this iteration, then i multicasts a
stability message to its neighbors, and starts another iteration of Step 2.
Note that due to FIFO, and since before starting a new iteration, each node sends the
last stability information known to it to all its neighbors, node i can never receive messages
with r > r_i. More precisely, node i receives messages
one of i's neighbors j increases its iteration number r j , it will first multicast a message m 1
containing G j to all its neighbors where G j contains all 1's. When node i receives m 1 it
increases r i . Thus, node i increases r i each time j increases r j , and after i increases r i , both
nodes i and j will have the same iteration number. Further messages from node j to node i
will be delivered after m 1 and after i has increased its iteration number. If node i receives a
message with a lower iteration number, then this is a redundant message, probably from a
neighbor that has not advanced to the current iteration by the time the message was sent,
and hence the message can be ignored.
3.3 Incomplete Hypercubes
Hypercubes are defined for exactly 2 m nodes, for any given m. However, practical systems
may employ an arbitrary number of participants. A flexible version of the hypercube topol-
ogy, called incomplete hypercube [18], eliminates the restriction on node numbers. When
building logical connections in an incomplete hypercube, we strive to keep the properties
that make complete hypercubes so attractive. In other words, our goals in designing the
incomplete hypercubes are: (a) minimize system diameter for performance, (b) maximize
the number of parallel shortest paths between two nodes in order to be fault tolerant, and
(c) restrict the number of logical connections for each node to the limit of log(n), so the
protocol remains scalable.
We denote an incomplete hypercube by I^n_m, where m and n are the dimension and the total
number of nodes, respectively. To achieve our goals, it is useful to note that an incomplete
hypercube comprises multiple complete ones. There is a connection between node p in H_i
and node q in H_k when k > i if the addresses of p and q differ in bit k. For example, consider
I^14_4 given in Figure 6. In this example there are 3 complete cubes: H_3 consisting of nodes
0xxx, H_2 consisting of nodes 10xx, and H_1 consisting of nodes 110x.
Our aim is to compensate for missing links in an incomplete hypercube I^n_m with respect
to the complete hypercube H_m, while preserving the goals described above. This is done by
adding one edge between some pairs of nodes p and q in I^n_m whose Hamming distance is 2
and that are connected in H_m through a node that is missing in I^n_m. These nodes are chosen as
follows: For each missing node z, G_z is the set of nodes that z was supposed to be connected
with. More formally, we denote by H_m \ I^n_m the set of nodes in H_m that do not appear in I^n_m.
For each node z that belongs to H_m \ I^n_m, G_z is defined as G_z = {w | w ∈ I^n_m and H(z, w) = 1},
where H(z, w) denotes the Hamming distance between z and w.
Protocol for node i
Notation:
neighbors_i - indexes of node i's neighbors in the hypercube
num_neighbors_i - the number of neighbors node i has in the hypercube
ArrayMin - element-wise minimum of the input arrays
ArrayMax - element-wise maximum of the input arrays
Each node i maintains the following variables:
G_i - bitmap array
R_i - sequence number array
S_i - stability array at the end of each round
M_i - minimum sequence number array in this round
r_i - iteration number of the current round
receive_from_i - the number of stability messages node i has received in the current iteration
Initialization
receive_from_i := 0;
At every node i:
1: multicast(stability, G_i, M_i, r_i) to neighbors_i;
2: upon receiving (stability, G, M, r) from node j do
3: if (r = r_i) then
4: G_i := bitwise-or(G_i, G);
5: M_i := ArrayMin(M_i, M);
6: receive_from_i := receive_from_i + 1;
7: if (G_i contains all 1's) then /* start new round */
8: multicast(stability, G_i, M_i, r_i) to neighbors_i;
9: S_i := M_i;
10: M_i := R_i;
11: for all j ≠ i, G_i[j] := 0;
12: receive_from_i := 0;
13: label all messages received from node k with sequence number P ≤ S_i[k] as stable;
14: wait(Δ);
15: goto Step 1;
16: else
17: if (receive_from_i = num_neighbors_i) then
/* start another iteration */
18: r_i := r_i + 1; multicast(stability, G_i, M_i, r_i) to neighbors_i;
19: receive_from_i := 0;
20: endif
21: endif
22: done;
Figure 5: CubeFullDist protocol.
Figure 6: Incomplete hypercube with 14 nodes.
                                        Coord                    FullDist    S Coord     CubeFullDist
Number of protocol iterations           O(1)                     O(1)        O(log(n))   O(log(n))
Number of messages per node per round   coord O(n); other O(1)   O(n)        O(1)        O(log(n))
Total number of messages                O(n)                     O(n^2)      O(n)        O(n log^2(n))
Robustness to failures                  no                       yes         no          yes
Code regularity                         no                       yes         no          yes
Figure 7: Analytical comparison of the protocols.
If G_z is empty or has only one member, no connection is added. SG_z is now obtained by
lexicographically sorting G_z according to node ids. If |SG_z| is odd, the first node in SG_z is
discarded. SG_z is now partitioned into two groups G1_z and G2_z. G1_z contains the first half
of ids from SG_z while G2_z contains the second half of ids from SG_z. A connection is added
between the i'th node of G1_z and the i'th node of G2_z. For example, in I^7_3 node 7 is missing
with respect to H_3. Thus, G_7 = {3, 5, 6}; node 3 is discarded, and a connection is added
between nodes 5 and 6.
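The edge-compensation rule can be sketched in Python as follows (a hypothetical helper written from the description above; 'present' is the set of node labels of I^n_m):

def compensation_edges(present, m):
    edges = []
    for z in range(2 ** m):
        if z in present:
            continue
        # Present nodes that the missing node z would have been connected to in H_m.
        SG_z = sorted(z ^ (1 << k) for k in range(m) if (z ^ (1 << k)) in present)
        if len(SG_z) < 2:
            continue
        if len(SG_z) % 2 == 1:
            SG_z = SG_z[1:]                              # discard the first node when the size is odd
        half = len(SG_z) // 2
        edges.extend(zip(SG_z[:half], SG_z[half:]))      # pair the two halves
    return edges

# Example: compensation_edges(set(range(7)), 3) -> [(5, 6)]   (node 7 is missing)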
Note that due to the way we label nodes, each node in H_{m−k} is connected in H_m to at
most k nodes from H_m \ I^n_m. Also, we add at most one connection for each missing one.
Thus, each node has a final degree between m − k and m, and therefore the scalability and
fault tolerance properties are preserved. The system diameter of I^n_m is m, and by adding links
we do not increase the diameter of the system. Hence, the described incomplete hypercube
matches our goals.
3.4 Analytical Comparison
A comparison of the four stability detection protocols is represented in Figure 7. The
number of protocol iterations serves as an indication for the latency of detecting stability. In
the case of S Coord, information propagates in the tree according to the levels of the nodes,
and since there are n nodes, we count this as log(n).
As far as scalability is concerned, S Coord (the tree based protocol) appears to be the
most scalable, followed closely by CubeFullDist. The problem with tree protocols is that
they are not fault tolerant. If one node fails, all nodes in the logical branch under it will
become disconnected. In this case the tree needs to be rebuilt immediately, and the current
round of the protocol is lost.
CubeFullDist uses more messages than S Coord. This message redundancy makes Cube-
FullDist more fault tolerant than S Coord since in CubeFullDist the system is logically partitioned
only after log(n) neighbors of the same node fail.
Coord and FullDist have limited scalability. In Coord, the coordinator receives O(n)
messages from all nodes, which is infeasible when n is large. In FullDist, the total number of
messages is O(n 2 ). In contrast, in CubeFullDist each node receives only O(log(n)) messages,
and in S Coord each node receives at most d messages, where d is the tree degree.
Looking at the number of iterations, theoretically, CubeFullDist is slower than Coord and
FullDist. However, simulations show that actually CubeFullDist is much faster. The reason
for this is that in Coord, the coordinator is the bottleneck of the protocol. In FullDist, the
total message number is high, and each node is loaded. Loaded nodes create long message
queues, which slow the overall performance, making FullDist and Coord much slower than
CubeFullDist and S Coord.
FullDist and CubeFullDist are fault tolerant, while the other two protocols are not.
Finally, FullDist and CubeFullDist have regular structure, and the same code is executed
by all nodes. This property tends to yield simpler, less error prone code. On the other
hand, S Coord has the advantage that it might be easier to embed a tree topology in typical
physical network topologies than it is to embed a hypercube topology.
3.5 Weakening the Network Assumptions
We now discuss the possibility of weakening the network assumptions in each of the four
stability detection protocols, to eliminate the reliable delivery requirement. Note that in
general, stability is a non-decreasing property. Since messages have sequentially increasing
sequence numbers, it is generally possible to ensure the correctness of a stability detection
protocol by periodically sending the most recent local information. However, to reduce
network load, most of our protocols advance in rounds. In order to keep this efficiency, they
may require adding a protocol round number to their messages.
Specifically, in the code of the Coord protocol in Figure 1, a round number must be
attached to all messages. Then, Line 1 should be repeated periodically until the coordinator
receives an ack from all nodes, and Line 4 should be repeated periodically until the
coordinator receives a new type of acknowledgment message ack-info. For non-coordinator
members, between Lines 12 and 13 we should add sending an ack-info message to the co-
ordinator. Very similar modifications are also required in the code of the S Coord protocol
in Figure 3.
A round number should also be attached to all messages in the code of the FullDist protocol
in Figure 2. Additionally, each node needs to multicast its info message periodically,
until it receives the info messages of the same round from all other nodes. Also, if a node
that has already moved to round r + 1 receives an info message from some node in round r,
for some r, then the message is stale and the node in round r + 1 should ignore the message.
However, if a node in round r receives an info message of round r + 1, it should treat the
message as if it had round number r, and process the message.
Finally, the CubeFullDist protocol in Figure 5 already includes round numbers, and thus
it can work correctly simply by multicasting messages periodically, in a similar fashion to the
other protocols. However, for efficiency reasons, the protocol also uses an internal iteration
that does not advance until a node has heard from all of its logical neighbors in the current
iteration. To make sure that this is indeed the case, it is advisable to add an iteration
number to the stability messages of the protocol. Similarly, receive from should only be
increased when a message is received for the first time from a node in the current iteration.
In order to eliminate the requirement for FIFO delivery for stability detection protocols,
it is possible to remember the last message received from each node, and check that the most
recent message carries any information that is at least as recent as the previous one. If it
does, the message is handled according to the protocols. Otherwise, it is ignored.
4 Experimental Performance
4.1 Simulation Model
We use the ns [25] simulator to explore the behavior of the stability detection protocols
described in Sections 2 and 3. We have measured the following indices, with the goal of
checking the effect of the number of nodes on these indices in the four protocols:
Round trip time (RTT): The RTT of a node is the time from the beginning of a protocol
round until the node recognizes that the current round of the protocol has finished.
The time for detecting message stability is a function of RTT and the frequency of
rounds in the stability detection protocol (1/Δ). If the reliable multicast protocol can
deliver a message to all the receivers within D seconds, the stability detection protocol
is triggered every Δ seconds, and it can detect the message's stability within RTT
seconds, then the maximum time to detect the message's stability is D + Δ + RTT
seconds (a small numeric illustration is given after this list of indices). Therefore, the buffer size of unstable messages is proportional to the time
it takes to detect message stability and the frequency at which stability detection
messages are sent.
Total number of messages: This is the total number of messages sent in the system.
Each message is counted only once when it is sent. The total number of messages is a
good indication of processors' load.
Hop count: This is the total number of hops that all messages pass through. Each message
is counted once at each hop that it passes on its way from source to destination. This
index shows the overall protocol message overhead on all links in the system.
Network load: Here we measure the average network load, that is, the average number of
messages on all links in the network at any given time, and the maximum queue size,
that is, the maximum number of messages waiting to be sent on any of the links in
the system. Note that the average network load looks at the number of messages at
any given moment, while in hop count we are interested in the cumulative number of
messages on all links in a full run of a protocol.
Topology sensitivity: This measures the difference between best result and worst result
for each of the protocols on each of the indices described above. Since the logical flow
of the protocol does not match the physical underlying topology, this indicates how
sensitive the protocol performance is to the actual network topology.
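As a small numeric illustration of the detection-time bound given for the RTT index above (the numbers are chosen for exposition and are not simulation results): if the reliable multicast protocol delivers a message within D = 50 ms, the stability protocol is triggered every Δ = 100 ms, and a protocol round completes within RTT = 20 ms, then the stability of the message is detected within at most 50 + 100 + 20 = 170 ms, and buffers must be sized for roughly that window of unstable messages.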
Lastly, we have also measured the effect of node failures on the above indices on the Cube-
FullDist protocol. Note that both S Coord and Coord are not fault tolerant, so it is not
meaningful to look at the effects of failures on them. Also, the performance of FullDist, as
reported below, is so poor that we decided to show the results of failures only for CubeFullDist.
The stability detection protocols are tested with several randomly generated network
topologies. We use the random network generator GT-ITM [6] to build random network
topologies with edge probability varying from 0.01 to 0.04; each protocol is run 6 times and
in each run a different randomly generated topology is used. The results presented are the
average of the 6 runs. The links in the simulations are chosen to be duplex 100Mbps in each
direction with uniformly distributed delay between 0 and 1 ms. We assume that the total
number of nodes is varied between 10 and 1,900. In groups smaller than 50 nodes, all the
nodes are sending messages. In larger groups, however, the number of senders is fixed at 50.
The sequence number size is assumed to be 4 bytes, and thus the size of the array of stability
information carried by stability, ack, and info messages is at most 200 bytes. Also, the
message header size is set to 32 bytes, large enough for most transport protocols [2, 34].
The tree degree in S Coord is set to log(n), that is, each node has log(n) children. Node 0
is chosen as the coordinator of Coord, and as the root of the tree in S Coord. In CubeFullDist,
the logical hypercube is built according to node id's binary representation. Since the network
is generated randomly, the fitness of logical structure to the network is random as well. In
CubeFullDist, when the number of nodes is not a power of two, the construction of incomplete
hypercubes described in Section 3.3 is used.
[Two plots: first member RTT and last member RTT (ms) vs. number of nodes, for CubeFullDist, Coord, FullDist, and S_Coord.]
Figure 8: RTT as a function of system size.
4.2 Simulation Results
4.2.1 Round Trip Time
The RTT of the fastest and slowest nodes in detecting stability are reported in Figure 8. For
both S Coord and CubeFullDist, RTT remains almost flat as the number of nodes increases.
This is because in both protocols, all nodes are reasonably loaded, and the number of messages
sent and received by each node is small. Also, it can be seen that our construction
of logical incomplete hypercubes maintains its scalability goals with respect to complete hy-
percubes, since the graphs do not have any major jitters near and at system sizes that are
power of two. Note that data points in the same set have similar RTT values: (100, 128),
(200, 256, 300), (500, 512) and (1,000, 1,024).
The RTT in Coord increases linearly with n, since the coordinator load is proportional
to n. This causes a long message queue at the coordinator and slows down the overall
performance.
The RTT of FullDist increases dramatically as the number of nodes increases. In fact,
the timing of FullDist is so bad that we could not check beyond 400 nodes. The reason for
this is that the total load imposed by FullDist on the network is too high. Each node in
FullDist sends and receives O(n) messages, or a total of O(n^2) messages, which causes long
message queues at all nodes and heavy utilization of all links and nodes in the system. (The
load caused by FullDist can be seen in Figures 9, 10, and 11. We discuss these graphs later.)
It is interesting to notice that for both Coord and S Coord there is hardly any difference
between first node and last node RTT. This is because the coordinator/root informs everyone
after it finishes each protocol round. FullDist, on the other hand, shows a significant
difference between first node and last node RTT when the system load is high. For 150
nodes, the RTT for the first and last node is 16ms and 19ms respectively. For 400 nodes, the
RTT becomes 46ms and 82ms respectively. When the system load is low (150 nodes), the
difference in RTT values comes from the distributed nature of the protocol; when the system
load is high, the difference comes from the fact that the system is overloaded with O(n^2)
messages.
In CubeFullDist there is a slight difference between first node and last node RTT. This
difference is caused by the distributed nature of the protocol, that is, it takes time for the
information to propagate in the system.
4.2.2 Network Load
The average network load and the maximum queue size measured are reported in Figures 9
and 10, respectively. For a given system size, the network load is recorded every 1 ms, and
averaged over the time for each round. FullDist has the largest queue size. In FullDist each
node sends and receives O(n) messages, or a total of O(n^2) messages, which results in high
network load and large message queues.
In Coord, the maximum queue size grows linearly with the system size. This is because
the coordinator receives O(n) messages. The average network load is not so high though,
and in fact, is even somewhat better than CubeFullDist, since the total number of messages
sent out in the system is O(n).
CubeFullDist and S Coord have almost the same maximum queue length. The maximum
queue size of S Coord is somewhat better than CubeFullDist since S Coord uses fewer
messages. In CubeFullDist the average queue length and maximum queue length are very
[Two plots: average network load (number of messages) vs. number of nodes; left: all four protocols, right: network load without FullDist (Coord, S_Coord, CubeFullDist).]
Figure 9: Average network load as a function of system size. The right graph zooms in on the results of Coord, S Coord, and CubeFullDist. (Notice difference in scale.)
[Two plots: maximum queue length (number of messages) vs. number of nodes; left: all four protocols, right: S_Coord and CubeFullDist only.]
Figure 10: Maximum queue size as a function of system size. The right graph zooms in on the results of S Coord and CubeFullDist. (Notice difference in scale.)
[Two plots: hop count and message count vs. number of nodes for the four protocols; the message count plot also marks the O(n) and O(n log(n) log(n)) curves.]
Figure 11: Hop count and number of messages as a function of system size. (Notice difference in scale.)
similar, because all nodes send and receive the same number of messages. In S Coord, on
the other hand, the average queue length is very low because each node sends and receives
only log(n) messages.
4.2.3 Hop Count and Message Count
The total hop count and message count are reported in Figure 11. In all protocols, the
theoretical analysis of message count (see Section 3.4) is reflected in the simulation graphs.
FullDist has an enormous message count, O(n^2), which is reflected in both graphs. Coord
and S Coord have linear message count. In S Coord, the hop count is less than in Coord
because most protocol messages are only sent from a node to its parent in the tree. Whereas
in Coord, most messages are sent to the root node directly. It is not too difficult to embed a
logical tree on an arbitrary physical topology to match the logical topology with the physical
topology as much as possible, and therefore the actual number of hops could be even lower
than the simulation result.
[Two plots: difference between the best and worst result vs. number of nodes, for first member RTT and for maximum queue size, for the four protocols.]
Figure 12: The difference between the worst and best result as a function of system size.
CubeFullDist uses O(n log^2(n)) messages, which is reflected in the message count graph.
CubeFullDist hop count is relatively better than its message count because protocol messages
are sent only to hypercube neighbors. Note that while Coord hop count and message count
are better than CubeFullDist, CubeFullDist is faster (provides faster RTT). The reason is
that in CubeFullDist the load is distributed in the network and in Coord all messages must
traverse to the same root node.
4.2.4 Protocol Sensitivity
The difference between the best and worst simulation results, for first member RTT and maximum queue size,
is reported in Figure 12. The performance of Coord is very much affected by the underlying
network topology. If in the randomly generated network the coordinator has many
connections, Coord achieves quite good performance. On the other hand, if the coordinator
is assigned only a few connections, these connections become bottlenecks, and the performance
is very poor. The other protocols do not have such bottlenecks, so they are not very
sensitive to the network topology.
[Two plots: first member RTT (ms) and queue length of CubeFullDist vs. number of nodes, with 0-5 node failures.]
Figure 13: The effects of node failures on CubeFullDist performance.
4.2.5 The Effect of Faults on CubeFullDist Performance
The effects that node faults have on CubeFullDist performance can be seen in Figures 13
and 14. We test the protocol with 0-5 node failures; faulty nodes are chosen randomly
among node 0's neighbors. As mentioned above, CubeFullDist is very fault tolerant, and its
performance is hardly affected by a small number of failures. In particular, note that our
choice of faulty nodes is a worst case scenario, since all faulty nodes are neighbors of the
same node rather than arbitrarily chosen.
5 Discussion and Future work
Our study indicates that superimposing a logical hypercube structure as a way of obtaining
scalability and fault-tolerance, especially in the context of reliable multicast, is a promising
direction. In [24], we also study the applicability of logical hypercubes to scalable failure
detection and causal ordering. The applicability of logical hypercubes to other aspects of
reliable multicast and group communication is still an open question.
In this work we have assumed general networks, on which the logical hypercube structure
[Plot: hop count of CubeFullDist vs. number of nodes, with 0-5 node failures.]
Figure 14: The effects of node failures on CubeFullDist performance.
(as well as the other structures) were superimposed in a random manner. Nevertheless,
trying to map the logical hypercube structure to the physical network topology in a smart
way might improve the performance of our protocol. Some work has already been done on
matching hypercubes to other topologies, mainly trees and meshes [21], although looking
at this problem in a more general context, such as the Internet, is an interesting research
direction.
Finally, none of the stability detection protocols we described here assume anything about
the reliable multicast protocol that it might use in conjunction with. Some optimizations,
for example, piggybacking stability messages on protocol messages, might be possible by
tailoring the stability detection protocol to the multicast protocol.
Acknowledgements
We would like to thank the anonymous reviewers for their helpful
comments.
--R
The Ensemble Home Page.
Routing Permutations and 2-1 Routing Requests in the Hypercube
Exploiting Virtual Synchrony in Distributed Systems.
Optimal Broadcasting in Faulty Hypercubes.
Fast Gossiping with Short Unreliable Messages.
The Pinwheel Asynchronous Atomic Broadcast Protocols.
The Transis Approach to High Availability Cluster Communi- cation
Optimal Algorithms for Dissemination of Information in Generalized Communication Networks.
A Reliable Multicast Framework for Light-Weight Sessions and Application Level Framing
Scalable Message Stability Detection Protocols.
Message Stability Detection for Reliable Multicast.
Hierarchical Message Stability Tracing Protocols.
The Ensemble System.
An Introduction to Parallel Algorithms.
Incomplete Hypercubes.
Fast Gossiping for the Hypercube.
Providing Availability using Lazy Replication.
Introduction to Parallel Algorithms and Architectures.
A Protocol for Maintaining Multicast Group Members in a Logical Hypercube Topology.
A Toolkit for Building Fault-Tolerant Distributed Application in Large Scale
Scalable Multicast in a Logical Hypercube.
A Fault-Tolerant Multicast Group Communication System
Reliable Multicast Transport Protocol (RMTP).
Accessing Nearby Copies of Replicated Objects in a Distributed Environment.
Topological Properties of Hypercube.
Efficient All-to-All Communication Patterns in Hypercube and Mesh Topolo- gies
A Scalable Peer-to-Peer Lookup Service for Internet Applications
Computer Networks.
Masking the Overhead of Protocol Layering.
A Flexible Group Communication System.
An Infrastructure for Fault-Tolerant Wide-Area Location and Routing
--TR | scalability;distributed systems;reliable multicast;group communication |
629492 | On Time Optimal Supernode Shape. | AbstractWith the objective of minimizing the total execution time of a parallel program on a distributed memory parallel computer, this paper discusses the selection of an optimal supernode shape of a supernode transformation (also known as tiling). We identify three parameters of a supernode transformation: supernode size, relative side lengths, and cutting hyperplane directions. For supernode transformations on algorithms with perfectly nested loops and uniform dependencies, we prove the optimality of a constant linear schedule vector and give a necessary and sufficient condition for optimal relative side lengths. We also prove that the total running time is minimized by a cutting hyperplane direction matrix from a particular subset of all valid directions and we discuss the cases where this subset is unique. The results are derived in continuous space and should be considered approximate. Our model does not include cache effects and assumes an unbounded number of available processors, the communication cost approximated by a constant, uniform dependences, and loop bounds known at compile time. A comprehensive example is discussed with an application of the results to the Jacobi algorithm. | Introduction
Supernode partitioning is a transformation technique
that groups a number of iterations in a nested
loop in order to reduce the communication startup
cost. This paper addresses the problem of selecting
optimal cutting hyperplane directions and optimal
supernode relative side lengths with the objective of
minimizing the total running time, the sum of communication
time and computation time, assuming a large
number of available processors which execute multiple
supernodes.
A problem in distributed memory parallel systems
is the communication startup cost, the time it takes
a message to reach transmission media from the moment of its initiation. (This author was supported by AT&T Labs. This research was supported in part by the National Science Foundation under Grant CCR-9502889 and by the Clare Boothe Luce Professorship from the Henry Luce Foundation.) The communication startup
cost is usually orders of magnitude greater than the
time to transmit a message across transmission media
or to compute data in a message. Supernode transformation
was proposed in [14], and has been studied
in [1, 2, 3, 4, 15, 18, 20, 25, 26, 27] and others to
reduce the communication startup cost. Informally,
in a supernode transformation, several iterations of
a loop are grouped into one supernode and this supernode
is assigned to a processor as a unit for ex-
ecution. The data of the iterations in the same su-
pernode, that need to be sent to another processor,
are grouped as a single message such that the number
of communication startups is reduced from the number
of iterations in a supernode to one. A supernode
transformation is characterized by the supernode size,
the relative lengths of the sides of a supernode, and
the directions of hyperplanes which slice the iteration
index space of the given algorithm into supernodes.
All the three factors affect the total running time. A
larger supernode size reduces communication startup
cost, but may delay the computation of other processors
waiting for the message and therefore, result in
a longer total running time. Also, a square supernode
may not be as good as a rectangular supernode
with the same supernode size. In this paper, selection
of optimal cutting hyperplane directions and optimal
relative side lengths is addressed.
The rest of the paper is organized as follows. Section
2 presents necessary definitions, assumptions,
and terminology. Section 3 discusses our results in
detail. Section 4 briefly describes related work and
the contribution of this work compared to previous
work. Section 5 concludes this paper. A bibliography
of related work is included at the end.
Basic definitions, models and assumptions
The architecture under consideration is a parallel
computer with distributed memory. Each processor
has access only to its local memory and is capable of
communicating with other processors by passing mes-
sages. In our model, the cost of sending a message
is represented by t s , the message startup time. The
computation speed of a single processor is characterized
by the time it takes to compute a single iteration
of a nested loop. This parameter is denoted by t c .
Algorithms under consideration consist of a single
nested loop with uniform dependencies [22]. Such algorithms
can be described by a pair (J; D), where J
is an iteration index space and D is an n × m dependence
matrix. Each column in the dependence matrix
represents a dependence vector. The cone generated
by dependence vectors is called dependence cone. The
cone generated by the vectors orthogonal to the facets
of the dependence cone is called tiling cone. We assume
that m ≥ n, matrix D has full rank (which is
equal to the number of loop nests n), and all elements
on the main diagonal of the Smith normal form of D
are equal to one. As discussed in [23], if the above assumptions
are not satisfied, then the iteration index
space J contains independent components and can be
partitioned into several independent sub-algorithms
with the above assumptions satisfied.
In a supernode transformation, the iteration space
is sliced by n independent families of parallel equidistant
hyperplanes. The hyperplanes partition iteration
index space into n-dimensional parallelepiped supernodes
(or tiles). Hyperplanes of one family can be
specified by a normal vector orthogonal to the hy-
perplanes. The square matrix consisting of n normal
vectors as rows is denoted by H. H is of full rank
because the n hyperplanes are assumed to be inde-
pendent. These parallelepiped supernodes can also
be described by the n linearly independent vectors
which are supernode sides. As described in [20], the column
vectors of matrix E = H^{-1} are the n side vectors.
A supernode template T is defined as one of
the full supernodes translated to the origin 0, i.e.,
T = {x | 0 ≤ Hx < 1}.
Supernode index space, J_s, obtained by the supernode
transformation H, is:
J_s = { j_s | j_s = ⌊Hj⌋, j ∈ J }.
Supernode dependence matrix, D_s, resulting from
supernode transformation H, consists of elements of
the set:
D_s = { d_s | d_s = ⌊H(j + d)⌋, j ∈ T, d ∈ D }.
We use D to denote either a matrix or a set consisting of
the column vectors of matrix D. Whether it is a matrix or a
set should be clear from the context.
As discussed in [20, 26] 2 , partitioning hyperplanes defined
by matrix H have to satisfy HD ≥ 0, i.e. each
entry in the product matrix is greater than or equal
to zero, in order to have (J s ; D s ) computable. This
implies that the cone formed by the column vectors in
E has to contain all dependence vectors in D. There-
fore, components of vectors in D s defined above are
nonnegative numbers. In our analysis, throughout the
paper, we further assume that all dependence vectors
of the original algorithm are properly contained in the
supernode template. Consequently, components of D s
are only 0 or 1, and I ⊆ D_s. This is a reasonable assumption
for real world problems [5, 26].
To present our analysis, the following additional
notations are introduced. The column vector l =
called supernode side length vector. Let
L be an n \Theta n diagonal matrix with vector l on its
diagonal and E u be a matrix with unit determinant
and column vectors in the directions of corresponding
column vectors of matrix E. Then,
components of vector l are supernode side lengths in
units of the corresponding columns of E u . We define
the cutting hyperplane direction matrix as:
The supernode size, or supernode volume, denoted by
g, is defined as the number of iterations in one su-
pernode. The supernode volume g, matrix of extreme
vectors E and the supernode side length vector l are
related as
. The relative supernode
side length vector,
sg
\Theta l;
and clearly
lengths of supernodes relative to the supernode size.
For example, if H I, the identity matrix,
and then the supernode is a square. How-
ever, if
with the same H u and n, then
the supernode is a rectangle with the same size as
the square supernode but the ratio of the two sides
being 2:1. We also use R to denote diagonal n \Theta n
matrix with vector r on its diagonal, and
gE u R and
transformation is completely specified by H u , r, and g,
and therefore, denoted by (H g). The advantage
of factoring matrix H this way is that it allows us to
study the three supernode transformation parameters
separately.
Implication 2 of Corollary 1 of [26]
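As a small numeric illustration of these definitions (the numbers are ours, chosen for exposition): in two dimensions with E_u = I, side lengths l = (8, 2)^T give g = 8 · 2 = 16, g^{1/2} = 4, and r = (1/4) l = (2, 1/2)^T, so the product of the components of r is 1 and the tile is a rectangle four times longer in the first dimension than in the second.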
For an algorithm A = (J, D), a linear schedule
[22] is defined as σ_π : J → N, with
σ_π(j) = ⌊(πj − min{πx : x ∈ J}) / min{πd : d ∈ D}⌋,   (1)
where π is a linear schedule vector,
a row vector with n rational components such that πd > 0 for every d ∈ D. A linear
schedule assigns each node j ∈ J an execution step
with dependence relations respected. We approximate
the length of a linear schedule by
P = max{π(j_1 − j_2) : j_1, j_2 ∈ J} / min{πd : d ∈ D}.   (2)
Note that j_1 and j_2 for which σ_π(j_1) − σ_π(j_2) is maximal
are always extreme points in the iteration
index space.
The execution of an algorithm (J; D) is as follows.
We apply a supernode transformation (H_u, r, g) to
obtain (J_s, D_s). A time optimal linear schedule π_s can
be found for (J_s, D_s). The execution based on the
linear schedule alternates between computation and
communication phases. That is, in step i, we assign the supernodes
with the same σ_{π_s} value i to the available
processors. After each processor finishes all the computations
of a supernode, processors communicate by
passing messages in order to exchange partial results.
After the communication is done, we go to step i + 1.
Hence, the total running time of an algorithm depends
on all of the following: (J; D), H u , g, r, -, t c , t s .
The total running time is a sum of the total computation
time and the total communication time which
are multiples of the number of phases in the execu-
tion. The linear schedule length corresponds to the
number of communication phases in the execution.
We approximate the number of computation phases
and the number of communication phases by the linear
schedule length (2). The total running time is then
the sum of the computation time T comp and communication
time Tcomm in one phase, multiplied by the
number of phases P . Computation time is the number
of iterations in one supernode multiplied by the
time it takes to compute one iteration, T
The cost of communicating data computed in one supernode
to other dependent supernodes is denoted by
Tcomm . If c is the number of processors to which the
data needs to be sent, then ct s . This model
of communication greatly simplifies analysis, and is
acceptable when the message transmission time can
be overlapped with other operations such as computations
or communication startup of next message, or
when the communication startup time dominates the
communication operation. Thus, the total runningv1v2 denotes vector dot product of vectors v1 and v2
time is:
ct s
3 Optimal Supernode Shape
In this section, we present the results pertaining
to the time optimal supernode shape, i.e. supernode
relative side length matrix, R, and cutting hyperplane
direction matrix, H u , derived in the model and under
the assumptions set in the previous section.
In the model with constant communication cost,
only linear schedule vector and linear schedule length
in the expression (3) depend on the supernode shape.
Therefore, in order to minimize total running time, we
need to choose supernode shape that minimizes linear
schedule length of the transformed algorithm.
The problem is a non-linear programming problem:
minimize (over Q and H_u):  max { 1 Q H_u (j_1 − j_2) : j_1, j_2 ∈ J }
subject to:  Q a diagonal matrix with det(Q) = 1,  H_u D ≥ 0,  det(H_u) = 1,
where H = (1/g)^{1/n} Q H_u and the scalar
(1/g)^{1/n} is a constant that can be computed
independent of H_u and Q, and without loss of
generality, we can exclude it from the objective function.
We studied selection of supernode size in [9].
The floor operator of (1) has been dropped in the objective
function to simplify the model. It can be shown
that the resulting error in the linear schedule length is bounded
and is insignificant for components of π
close to 1 and large iteration index spaces.
Theorem 1 gives a closed form of the optimal linear
schedule vector for the transformed algorithm.
Theorem 1 An optimal supernode transformation,
with I ⊆ D_s, has an optimal linear schedule vector π_s = 1.
Proof: As defined in Section 2, min{π_s d : d ∈ D_s} = 1. Since I ⊆ D_s, and in order to have a feasible linear
schedule, i.e. π_s D_s > 0, we must have π_s ≥ 1.
If all extreme projection vectors have non-negative
components, then their linear schedule length is minimized
with the linear schedule vector with smallest
components, i.e. π_s = 1.
If there are extreme projection vectors with negative
components, then an initial optimal linear schedule
vector may be different from 1. We still must
have π_s ≥ 1 in order to satisfy the definition of a linear
schedule vector, i.e. min{π_s d : d ∈ D_s} = 1. Let the i-th
component of π_s be greater than 1. Then, we
can set the i-th component of π'_s to 1 and modify the supernode shape
by setting Q' = Q Δ_i, where Δ_i is a diagonal matrix
with the i-th diagonal entry equal to the i-th component of π_s and all other diagonal entries equal to 1.
The linear schedule of all vectors in the transformed algorithm
remains the same, i.e. the linear schedule of a vector under π'_s and Q' equals its linear schedule under π_s and Q,
and we can then divide π'_s by min{π'_s d : d ∈ D_s}, which
will shorten the linear schedule of all points. Therefore,
we got a shorter linear schedule for the algorithm and
got one more component in the linear schedule vector
equal to 1. Continuing the process, we eventually get to
the linear schedule with all ones. □
Theorem 2 gives a necessary and sufficient condition for an optimal relative side length matrix R, and consequently its inverse matrix Q, assuming the optimal linear schedule vector 1.

Theorem 2 Let g and H_u be fixed, let the linear schedule vector be 1, and let M be the set of maximal projection vectors in the transformed space. The relative side length vector r is optimal if and only if the vector with equal components, v, belongs to the cone generated by the maximal projection vectors of the transformed algorithm: v ∈ cone(M).
Proof: Let m be the linear schedule length, and without loss of generality, let v be a vector with equal components whose linear schedule length equals m.

Sufficient condition. Let vector v be included in cone(M) and let the corresponding relative supernode side length matrix be R. Consider another supernode transformation close to the original, with R' slightly different from R. Suppose the image of v under the transformation with R' is v', and the schedule length of v' is 1v'. Then, based on the relation between the geometric and arithmetic means, 1v' is at least 1v = m. The new maximal projection vectors' linear schedule length can only be greater than or equal to 1v', which is greater than or equal to m. Therefore, the supernode relative side length matrix R is optimal.

Necessary condition: We prove by contradiction. Let R be optimal and assume v is not in cone(M). Then there exists a separating hyperplane Z through the origin, with normal vector z, such that za < 0 for all a ∈ M and z1 > 0.
(For convenience, we abuse notation and write P(v) to mean π_s v, where the choice of π is clear from the context.)

Figure 1: Construction of vector s. Illustration for the proof of Theorem 2.
We can select the normal vector z arbitrarily close to being orthogonal to vector 1, and of arbitrary length, so that the vector s = 1 + z satisfies s ≥ 0 and ∏_i s_i = 1. The former is ensured by selecting a sufficiently small length of vector z. The latter is ensured by selecting an appropriate angle between z and 1 such that s lies on the surface ∏_i s_i = 1. Based on the relation between the arithmetic and geometric means, we must then have s1 ≥ 11; this holds by the construction of Z, since z1 > 0, and thus the construction of s is feasible. Figure 1 illustrates the construction of vector s in two dimensional space. Then, by further scaling the supernode index space by diag(s), i.e. by rescaling Q by diag(s), we improve the linear schedule length of the vectors in M: 1 diag(s) a < 1a for all a ∈ M. By choosing vector z arbitrarily close to being orthogonal to vector 1, we can ensure that no extreme projection vector becomes a new maximal projection vector. By improving R to R' we contradict the hypothesis that R is optimal. 2
Theorem 2 also implies the relation between Q and H_u, and enables the analysis of H_u independently of Q, as stated by the following corollary.

Corollary 1 Given H_u and the vector u in the convex hull of the original iteration index space which maps into a vector with equal components and with maximal linear schedule length in the convex hull of the supernode index space, the optimal Q has components
q_ii = k / (h_i u), (5)
where h_i is the i-th row of H_u and k is the constant determined by the volume constraint det(H_u Q) = const. With the optimal selection of Q, the objective function of (4),
1 Q H_u u, (6)
reduces to the expression (7), which is proportional to (∏_{i=1}^{n} h_i u / det(H_u))^{1/n}.

Proof: The relation (5) is readily derived from Q H_u u = const 1, i.e. the requirement that u maps into a vector with equal components. The expression (7) follows by substituting expression (5) for Q in (6). 2
In the special case of a single maximal projection vector, the optimal Q can be easily computed based on (5). For example, if the iteration index space is a hyperrectangle and H_u = I, the optimal Q is the one which turns the supernode index space into a hypercube, and makes the supernode similar to the original iteration index space. Based on (7), our objective function for the optimal H_u is proportional to
F(H_u) = ∏_{i=1}^{n} h_i u / det(H_u), (8)
where u is the vector in the convex hull of the original iteration index space as defined by Corollary 1.
The following shows that by positively combining rows of a matrix, i.e. by taking any matrix with rows inside the cone generated by the rows of the original matrix, we obtain no better cutting hyperplane direction matrix.

Lemma 1 Let H_u1 be a cutting hyperplane direction matrix. Let U be a square matrix such that H_u2 = U H_u1 and U ≥ 0. Then matrix H_u2 does not give a cutting hyperplane matrix with a shorter schedule length.

Proof: It is enough to show that F(H_u1) ≤ F(H_u2), with the vector u in (8) being the vector defined by Corollary 1 corresponding to H_u1. Let W be a square non-negative matrix of unit determinant whose rows are positive multiples of the rows of U. Expanding each row of W H_u1 as a non-negative combination of the rows of H_u1 and applying a chain of inequalities yields F(H_u2) = F(W H_u1) ≥ F(H_u1), where we used the inequality between a sum of non-negative numbers and the root of the sum of their squares, and the Hadamard inequality, in the deductive sequence above.

Based on Lemma 1, we can state the following regarding the optimality of the choice of H_u in general.
Theorem 3 The optimal hyperplane direction matrix H_u assumes row vectors from the surface of the tiling cone.

Proof: If any of the row vectors of H_u are from the interior of the tiling cone, then there is another hyperplane direction matrix H'_u with row vectors from the surface such that the cone generated by the rows of H_u is included in the cone generated by the rows of H'_u, which takes its row vectors from the surface of the tiling cone. Based on Lemma 1, H'_u is a better choice than H_u. 2
In the case of algorithm with exactly n extreme
dependence vectors, we can state a stronger result,
provided by the following theorem.
Theorem 4 Optimal hyperplane direction matrix H u
for an algorithm with n extreme dependence directions
is uniquely defined, up to uniform rescaling, by the n
extreme directions of the tiling cone.
Proof: The dependence cone having exactly n
extreme directions, implies that there are exactly n
extreme directions of the corresponding tiling cone.
Then each row of any hyperplane direction matrix is
a non-negative linear combination of the n extreme
directions of the tiling cone, and we are sure that the
hyperplane direction matrix with extreme directions
of the tiling cone as its rows is the best choice, based
on Lemma 1. 2
An equivalent statement for two dimensional algorithms
gives an even stronger result.
Theorem 5 Optimal extreme direction matrix E u for
two dimensional algorithms has column vectors in the
directions of the two extreme dependence vectors.
Proof: Two dimensional algorithms always have exactly two extreme dependence vectors. Since n = 2, Theorem 4 always applies. 2
The following example confirms that the optimal
hyperplane directions matrix H u can take row vectors
from all of the surface of the tiling cone, in the general
case, and not only from the set of extreme directions
of the tiling cone.
Example 1 In this example, we apply the supernode transformation to the Jacobi algorithm [19]. We select the optimal Q, and discuss the selection of H_u, assuming a fixed supernode size g (the selection of g is discussed in [9]). The core of the Jacobi algorithm, with a constant number of iterations, can be written as follows:
do t = 1, T
  do all (i, j)
    a(t, i, j) = (a(t-1, i-1, j) + a(t-1, i+1, j) + a(t-1, i, j-1) + a(t-1, i, j+1)) / 4
The iteration index space is a three dimensional rectangle. There are eight extreme points in the iteration index space (the corners of the rectangle), shown as column vectors of the matrix X, with the spatial loop bounds equal to 500.
There are four dependencies, caused by the accesses to elements of array a in the code above. They are represented in the matrix D as column vectors. The tiling cone generating vectors, i.e. the extreme vectors of the tiling cone for this example, include four row vectors, collected in a matrix B. We can construct four different hyperplane direction matrices H_ui, i = 1, ..., 4, from the four row vectors of matrix B. However, none of these four matrices, constructed from extreme directions of the tiling cone, is necessarily optimal, as we will see through the rest of this example.
From the set of extreme points, X, we can construct
a set of extreme projection vectors. Those are 28 vector
differences between all different pairs of column
vectors of X. Applying an iterative non-linear optimization procedure, we obtain the optimal Q_i for each H_ui above. Applying the supernode transformations obtained in this way to the set of extreme projection vectors and finding the maximum, we get the same linear schedule length of 606.2445 for each transformation. Thus, all the H_ui's are equally good. This is due to the regularity of the iteration index space and the dependence vectors.
Let us consider another hyperplane direction matrix H_5, with rows such as
(0.5, -0.5, -0.5),
constructed from two row vectors of B and a sum of the other two row vectors of B, and then properly normalized. The corresponding optimal supernode relative side lengths are given by a vector r_5, and the corresponding tiling matrix contains rows such as
(0.125, -0.125, 0.125) and (0.125, -0.125, -0.125).
The matrix H_5 gives the shape of the supernode. Applying H_5 to the set of extreme index points, we get the extreme index points in the supernode index space, with coordinates such as 62.5, -312.5 and -187.5. Similarly, we get the transformed extreme projection vectors, but we do not show all 28 of them; we show only the nine maximal projection vectors.
The nine maximal projection vectors lie on a single two dimensional plane, and the vector v with equal components and the same linear schedule length belongs to the cone formed by the maximal projection vectors and lies on the same plane, ensuring that we have selected the optimal relative side lengths based on Theorem 2. This can easily be verified by computing the normal vectors to the faces of the cone generated by the maximal projection vectors and showing that the projection of v onto these normals indicates that v is inside the cone.
4 Related work
Irigoin and Triolet [14] proposed the supernode partitioning
technique for multiprocessors in 1988, as
a new restructuring method. Ramanujam and Sadayappan
[20] studied tiling multidimensional iteration
spaces for multiprocessors. They showed equivalence
between the problem of finding a partitioning
hyperplane matrix H, and the problem of finding a
containing cone for a given set of dependence vectors.
In [4], the choice of supernode shape is discussed with the goal of minimizing a new objective function. The key feature of the new objective function is its scalability: it is defined in such a way that it is independent of the supernode size. In [24], the authors discussed the problem of finding the optimum wavefront (the optimal linear schedule vector, in our terminology) that minimizes the total execution time for two dimensional (data) arrays executed on one or two dimensional processor arrays. In [18], the optimal tile size is studied under a different model and assumptions.
In [26], an extended definition of supernode transformation
is given. It is an extension of the definition
originally given in [14]. In [27] the choice of matrix
H with the criterion of minimizing the communication
volume is studied. Similar to [4], the optimization criterion
does not include iteration index space, and thus
the model does not include the linear schedule effects
on the execution time. In [3], the choice of optimal tile
size that minimizes total running time is studied. The
authors' approach is in two steps. They first formulate
an abstract optimization problem, which does not
include architectural or program characteristics, and
partially solve the optimization problem. In the second
step they include the architectural and program
details into the model and then solve the problem for
the optimal tile size yielding a closed form solution.
Recently, an international seminar [6] was held on the topic of tiling, where twenty-five lectures were presented. The lectures covered many issues related to the tiling transformation.
The selection of the optimal supernode size was studied in our previous work, within a similar model, in [9]. We studied the choice of cutting hyperplane directions in two dimensional algorithms in [12] and the selection of the supernode shape for the case of a dependence cone with n extreme directions in [13]; the results presented in this paper are an extension of the results of [13].
Compared to the related work, our optimization criterion
is to minimize the total running time, rather
than communication volume or ratio between communication
and computation volume. In addition, we
use a different approach where we specify a supernode transformation by the supernode size, the relative side length vector r, and the cutting hyperplane direction matrix H_u, such that the three variables become independent and can be studied separately.
5 Conclusion
We build a model of the total running time based
on algorithm characteristics, architecture parameters
and the parameters of supernode transformation. The
supernode transformation is specified by three independent parameters: the supernode size, the supernode relative side lengths, and the cutting hyperplane directions. The independence of the parameters allows us to study their selection separately. In this paper, two of the three parameters are studied. We give a necessary and sufficient condition for optimal relative side lengths, show that the optimal cutting hyperplane directions come from the surface of the tiling cone, and show that the linear schedule π = 1 is an optimal linear schedule in the transformed algorithm. If the final supernode transformation violates the assumption that I ⊆ D_s, then the results do not hold. The results are derived in continuous space and should for that reason be considered approximate.
--R
"Scanning Polyhedra with Do Loops,"
"Optimal Orthogonal Tiling,"
"Optimal Orthogonal Tiling of 2-D Iterations,"
"(Pen)- Ultimate Tiling,"
"Practical Dependence Testing,"
"Tiling for Optimal Resource Utiliza- tion,"
"Linear Scheduling is Nearly Optimal,"
"Eval- uating Compiler Optimizations for Fortran D,"
"On Supernode Transformations with Minimized Total Running Time,"
"On Supernode Partitioning Hyperplanes for Two Dimensional Algorithms,"
"Time Optimal Supernode Shape for Algorithms with n Extreme Dependence Di- rections,"
"Supernode Partitioning,"
"The Cache Performance and Optimizations of Blocked Al- gorithms,"
New Jersey
"Modeling Optimal Granularity When Adapting Systolic Algorithms to Transputer Based Supercomput- ers,"
"Op- timal Tile Size Adjustment in Compiling General DOACROSS Loop Nests,"
Parallel computing: theory and practice
"Tiling Multidimensional Iteration Spaces for Multicomputers,"
"Automatic Blocking of Nested Loops"
"Time Optimal Linear Schedules for Algorithms With Uniform Dependen- cies,"
"Independent Partitioning of Algorithms with Uniform Dependencies,"
"Finding Optimum Wavefront of Parallel Computation,"
"More Iteration Space Tiling,"
"On Tiling as a Loop Transformation,"
"Communication-Minimal Tiling of Uniform Dependence Loops,"
--TR
--CTR
Georgios Goumas , Nikolaos Drosinos , Maria Athanasaki , Nectarios Koziris, Automatic parallel code generation for tiled nested loops, Proceedings of the 2004 ACM symposium on Applied computing, March 14-17, 2004, Nicosia, Cyprus
Sriram Krishnamoorthy , Muthu Baskaran , Uday Bondhugula , J. Ramanujam , Atanas Rountev , P Sadayappan, Effective automatic parallelization of stencil computations, ACM SIGPLAN Notices, v.42 n.6, June 2007
Georgios Goumas , Nikolaos Drosinos , Maria Athanasaki , Nectarios Koziris, Message-passing code generation for non-rectangular tiling transformations, Parallel Computing, v.32 n.10, p.711-732, November, 2006
Maria Athanasaki , Aristidis Sotiropoulos , Georgios Tsoukalas , Nectarios Koziris , Panayiotis Tsanakas, Hyperplane Grouping and Pipelined Schedules: How to Execute Tiled Loops Fast on Clusters of SMPs, The Journal of Supercomputing, v.33 n.3, p.197-226, September 2005
Saeed Parsa , Shahriar Lotfi, A New Genetic Algorithm for Loop Tiling, The Journal of Supercomputing, v.37 n.3, p.249-269, September 2006 | distributed memory multicomputer;minimizing running time;parallelizing compilers;algorithm partitioning;tiling;supernode transformation |
630534 | Scalable Feature Mining for Sequential Data. | Classification algorithms are difficult to apply to sequential examples, such as text or DNA sequences, because a vast number of features are potentially useful for describing each example. Past work on feature selection has focused on searching the space of all subsets of the available features, which is intractable for large feature sets. The authors adapt data mining techniques to act as a preprocessor to select features for standard classification algorithms such as Naive Bayes and Winnow. They apply their algorithm to a number of data sets and experimentally show that the features produced by the algorithm improve classification accuracy up to 20%. | Introduction
Many real world datasets contain irrelevant or redundant attributes. This may be because the data was collected
without data mining in mind, or because the attribute dependences were not known a priori during data collection. It
is well known that many data mining methods like classification, clustering, etc., degrade prediction accuracy when
trained on datasets containing redundant or irrelevant attributes or features. Selecting the right feature set can not only
improve accuracy, but can also reduce the running time of the predictive algorithms, and can lead to simpler, more
understandable models. Good feature selection is thus one of the fundamental data preprocessing steps in data mining.
Most research on feature selection to-date has focused on non-sequential domains. Here the problem may be
defined as that of selecting an optimal feature subset of size m from the full d-dimensional feature space, where
ideally m d. The selected subset should maximize some optimization criterion such as classification accuracy or
it should faithfully capture the original data distribution. The subset search space is exponentially large in the number
of features.
In contrast to traditional non-sequential data, we focus on sequence data in which each example is represented as a
sequence of "events", where each event might be described by a set of predicates, i.e., we are dealing with categorical
sequential domains. Examples of sequence data include text, DNA sequences, web usage data, multi-player games,
and plan execution traces. In sequential domains, features are ordered sets of partial event descriptions. For example,
a sequential feature that describes a chess game is "black moves a knight, and then white moves a bishop to square
D-6". This feature holds in some chess games, but not in others, and thus might be used to classify chess games into,
for example, ones played by experts vs. ones played by novices. Selecting the right features in sequential or temporal
domains is even more challenging than in non-sequence data. The original feature set is itself undefined; there are
potentially an infinite number of sequences of arbitrary length over d categorical attributes or dimensions. Even if we
restrict ourselves to some maximum sequence length k, we have potentially O(d k ) subsequences over d dimensions.
The complexity is d d if we consider maximum subsequence length to be d, as opposed to 2 d in the non-sequential
case.
The goal of feature selection in sequential domains is to select the best subset of sequential features out of the d k
possible sequential features (i.e., subsequences) that can be composed out of the d attributes for describing individual
events. We were motivated to use data mining techniques because the set of possible features is exponentially large.
An alternative conception of the problem is that we are constructing new features out of the primitives for describing
events. These new features augment the dimensionality of the original space by effectively pulling apart examples of
the same class, making them more easily distinguishable by classification algorithms. Of course, the process of constructing
features out of primitives is equivalent to the process of selecting features from the space of all combinations
of those primitives.
The input to our system is a set of labeled training sequences, and the output is a function which maps from a new
sequence to a label. In other words we are interested in selecting (or constructing) features for sequence classification.
In order to generate this function, our algorithm first uses sequence mining on a portion of the training data for
discovering frequent and distinctive sequences and then uses these sequences as features to feed into a classification
algorithm (Winnow or Naive Bayes) to generate a classifier from the remainder of the data.
In past work, the rules produced by data mining algorithms have been used to construct classifiers primarily by
ordering the rules into decision lists (e.g. [1, 2]) or by merging them into more general rules that occur in the training
data (e.g., [3]). In this paper, we convert the patterns discovered by the mining algorithm into a set of boolean features
to feed into standard classification algorithms. The classification algorithms, in turn, assign weights to the features
which allows evidence for different features to be combined in order to classify a new example.
There are two main contributions of this paper. First, we combine two powerful data mining paradigms: sequence
mining which can efficiently search for patterns that are correlated with the target classes, and classification algorithms
which learn to weigh evidence for different features to classify new examples. Second, we present a scalable feature
mining algorithm that can handle very large datasets with thousands of items and millions of records. Additionally,
we present criteria for selecting features, and present pruning rules that allow for more efficient mining of the features.
We present FEATUREMINE, a scalable algorithm capable of handling large disk-resident datasets, which mines for
good sequential features, and integrates pruning constraints in the algorithm itself, instead of post-processing. This
enables it to efficiently search through large pattern spaces.
1.1 Example: poker
Let's first preview the main ideas in this paper with a simple, illustrative example. Suppose we observe three people
playing poker with betting sequences and outcomes such as:
example: P1 Bets 3, P2 Calls, P3 Raises 2, P1 Raises 1, P2 Folds, P3 Calls ) P1 wins
example: P2 Passes, P3 Bets 1, P1 Folds, P2 Raises 2, P3 Raises 2, P2 Calls ) P3 wins
Our objective is to learn a function that predicts who is most likely to win given a betting sequence. This task
resembles standard classification: we are given labeled training examples and must produce a function that classifies
new, unlabeled examples. Many classifiers require, however, that examples be represented as vectors of feature-value
pairs. This paper addresses the problem of selecting features to represent the betting sequences.
First, consider an obvious, but poor, feature set. Let N be the length of the longest betting sequence. We can
represent betting sequences with 3N features by generating a distinct feature, for every 0 ≤ i ≤ N, for the person who made the ith bet, for the type of the ith bet, and for the amount of the ith bet. In section 3, we show experimentally that this feature set leads to poor classification. One problem with these features is that an individual feature can express only that a particular, complete bidding sequence took place, but not that an interesting subsequence occurred, such as:
Feature: P1 Raises, and then P1 Raises
Feature: P1 Raises 2 dollars, and then P1 Raises 3 dollars
The first feature would be important if P1 tends to win whenever she raises twice. A classifier could construct a boolean expression out of the features described above to capture the notion "P1 raises twice", but the expression would have O(N^2) disjuncts, since we need a disjunct for "P1 raises in the ith bet and in the jth" for all i < j. We believe it is important to consider partial specifications because it is difficult to know in advance whether "P1 raises twice" or "P1 raises by 2 and then raises by 3" will be a more useful feature.
An alternative is to use a much larger feature set. If there are 3 players, 4 bids, and 5 different amounts, then there are 4 × 5 × 6 = 120 partial specifications of a bet (each of the three attributes may be specified or left unspecified), such as "someone bets 3". We can chain partial specifications together with an "and then" relation, as in "P1 raises and then someone bets 3". The number of such features of length K is 120^K. The problem with this feature set is that it is too large. Sets of 10,000 features are considered large for
classification algorithms. Furthermore, irrelevant or redundant features can reduce classification accuracy [4].
We adopt a middle ground between these two extremes. We use data mining techniques to search through the
second, huge feature set and select a subset. We show that criteria similar to that used in the general knowledge
discovery task works well for deciding which features will be useful.
2 Data mining for features
We now formulate and present an algorithm for feature mining. The formulation involves specifying a language for
features that can express sequences of partial descriptions of events (with gaps), such as "P 1 raises and then in some
later bid folds". We then present criteria for selecting a subset of the features, to be used for classification, from the
entire set that can be expressed in our language. Finally, we describe the FEATUREMINE algorithm to efficiently mine
the features selected by our criteria.
We begin by adopting the following terminology, which closely resembles that used for sequence mining (e.g., [5]). Let F be a set of distinct features, each with some finite set of possible values. Let I contain a unique element for every possible feature-value pair. A sequence is an ordered list of subsets of I. For example, if I = {A, B, C, ...}, then an example sequence would be AB -> A -> BC. A sequence is denoted as (α_1 -> α_2 -> ... -> α_n), where each sequence element α_i is a subset of I. The length of the sequence is n; its width is the maximum size of any α_i for 1 ≤ i ≤ n. We say that α is a subsequence of β, denoted α ⪯ β, if there exist integers i_1 < i_2 < ... < i_n such that α_j ⊆ β_{i_j} for all j. For example, AB -> C is a subsequence of AB -> A -> BC. Let H be a set of class labels. An example is a pair <α, c> where α is a sequence and c ∈ H is a label. Each example has a unique identifier eid, and each α_i has a time-stamp at which it occurred. An example <α, c> is said to contain sequence β if β ⪯ α.
Our input database D consists of a set of examples. This means that the data we look at has multiple sequences, each of which is composed of sets of items. The frequency of sequence β in D, denoted fr(β, D), is the fraction of examples in D that contain β. Let β be a sequence and c be a class label. The confidence of the rule β ⇒ c, denoted conf(β, c, D), is the conditional probability that c is the label of an example in D given that it contains sequence β. That is, conf(β, c, D) = fr(β, D_c) |D_c| / (fr(β, D) |D|), where D_c is the subset of examples in D with class label c. A sequence is said to be frequent if its frequency is more than a user-specified min freq threshold. A rule is said to be strong if its confidence is more than a user-specified min conf threshold. Our goal is to mine for frequent and strong patterns. Figure 1 shows a database of examples. There are 7 examples, 4 belonging to class c_1 and 3 belonging to class c_2. In general there can be more than two classes. We are looking for different min freq sequences on each class. For example, while C is frequent for class c_2, it is not frequent for class c_1. The rule C ⇒ c_2 therefore has a much higher confidence than the rule C ⇒ c_1.
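A minimal sketch of these definitions, using a made-up toy database rather than the one in Figure 1, is given below; the `contains` helper implements the subsequence relation ⪯ described above.

    def contains(alpha, beta):
        """True iff sequence alpha is a subsequence of beta: each element of
        alpha is a subset of some later element of beta, in order."""
        j = 0
        for event in beta:
            if j < len(alpha) and set(alpha[j]) <= set(event):
                j += 1
        return j == len(alpha)

    def frequency(alpha, examples):
        """fr(alpha, D): fraction of examples whose sequence contains alpha."""
        return sum(1 for seq, _label in examples if contains(alpha, seq)) / len(examples)

    def confidence(alpha, c, examples):
        """conf(alpha => c, D): P(label = c | example contains alpha)."""
        covered = [(seq, lab) for seq, lab in examples if contains(alpha, seq)]
        return 0.0 if not covered else sum(1 for _s, lab in covered if lab == c) / len(covered)

    # Toy database: two labeled example sequences.
    D = [([{'A', 'B'}, {'A'}, {'B', 'C'}], 'c1'),
         ([{'C'}, {'A', 'B'}], 'c2')]
    print(frequency([{'A'}, {'B'}], D), confidence([{'A'}, {'B'}], 'c1', D))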
Figure 1: A) Database of Examples, B) New Database with Boolean Features.
A sequence classifier is a function from sequences to class labels in H. A classifier can be evaluated using standard
metrics such as accuracy and coverage. Finally, we describe how frequent sequences α_1, ..., α_n can be used as features for classification. Recall that the input to most standard classifiers is an example represented as a vector of feature-value pairs. We represent an example sequence β as a vector of feature-value pairs by treating each sequence α_i as a boolean feature that is true iff α_i ⪯ β. For example, suppose one of the features is f_1 = A -> BC. The sequence AB -> BD -> BC contains f_1 and so would be represented with <f_1, true>, while the single-event sequence ABCD would be represented with <f_1, false>. Note that features can "skip" steps: the feature A -> BC holds in AB -> BD -> BC even though the middle event does not match. Figure 1B shows the new dataset created from the frequent sequences of our example database (in Figure 1A). While we use all frequent sequences as features in this example, in general we use only a "good" subset of all frequent sequences as features, as described below.
We now specify our selection criteria for selecting features to use for classification. Our objective is to find sequences
such that representing examples with these sequences will yield a highly accurate sequence classifier. However, we do
not want to search over the space of all subsets of features [4]), but instead want to evaluate each new sequential feature
in isolation or by pair-wise comparison to other candidate features. Certainly, the criteria for selecting features might
depend on the domain and the classifier being used. We believe, however, that the following domain-and-classifier-
independent heuristics are useful for selecting sequences to serve as features:
Features should be frequent.
Features should be distinctive of at least one class.
Feature sets should not contain redundant features.
The intuition behind the first heuristic is simply that rare features can, by definition, only rarely be useful for
classifying examples. In our problem formulation, this heuristic translates into a requirement that all features have
some minimum frequency in the training set. Note that since we use a different min freq for each class, patterns
that are rare in the entire database can still be frequent for a specific class. We only ignore those patterns which are
rare for any class. The intuition for the second heuristic is that features that are equally likely in all classes do not
help determine which class an example belongs to. Of course, a conjunction of multiple non-distinctive features can
be distinctive. In this case, our algorithm prefers to use the distinctive conjunction as a feature rather than the non-
distinctive conjuncts. We encode this heuristic by requiring that each selected feature be significantly correlated with
at least one class that it is frequent in.
The motivation for our third heuristic is that if two features are closely correlated with each other, then either of
them is as useful for classification as both are. We show below that we can reduce the number of features and the
time needed to mine for features by pruning redundant rules. In addition to wanting to prune features which provide
the same information, we also want to prune a feature if there is another feature available that provides strictly more
information. Let M(f, D) be the set of examples in D that contain feature f. We say that feature f_1 subsumes feature f_2 with respect to predicting class c in data set D iff M(f_2, D) ∩ D_c ⊆ M(f_1, D) ∩ D_c and M(f_1, D) − D_c ⊆ M(f_2, D) − D_c.
Intuitively, if f 1 subsumes f 2 for class c then f 1 is superior to f 2 for predicting c because f 1 covers every example of c
in the training data that f 2 covers and f 1 covers only a subset of the non-c examples that f 2 covers. Note that a feature
f 2 can be a better predictor of class c than f 1 even if f 1 covers more examples of c than f 2 if, for example, every
example that f 2 covers is in c but only half the examples that f 1 covers are in c. In this case, neither feature subsumes
the other. The third heuristic leads to two pruning rules. The first pruning rule is that we do not extend (i.e, specialize)
any feature with 100% accuracy. Let f 1 be a feature contained by examples of only one class. Specializations of f 1
may pass the frequency and confidence tests in the definition of feature mining, but will be subsumed by f 1 . The
following Lemma, which follows from the definition of subsume, justifies this pruning rule:
Lemma 1: If feature f_1 is contained only by examples of class c (i.e. it has 100% accuracy for c) and f_1 ⪯ f_2, then f_1 subsumes f_2 with respect to class c.
Our next pruning rule concerns correlations between individual items. Recall that the examples in D are represented as sequences of sets. We say that A ⇝ B in examples D if B occurs in every set, in every sequence in D, in which A occurs. The following lemma states that if A ⇝ B then any feature containing a set with both A and B will be subsumed by one of its generalizations:
Lemma 2: Let A ⇝ B in D. Then any feature f that contains a set with both A and B will be subsumed by the feature obtained from f by removing B from that set.
We precompute the set of all A ⇝ B relations and immediately prune any feature, during the search, that contains a set with both A and B. In section 3, we discuss why A ⇝ B relations arise and show that they are crucial for the success
of our approach for some problems. We can now define the feature mining task. The inputs to the FEATUREMINE algorithm are a set of examples D and parameters min freq, max_w, and max_l. The output is a non-redundant set of the frequent and distinctive features of width at most max_w and length at most max_l. Formally:
Feature mining: Given examples D and parameters min freq, max_w, and max_l, return a feature set F such that for every feature f_i of length at most max_l and width at most max_w, and every class c_j ∈ H: if f_i is frequent in D_{c_j} (i.e. fr(f_i, D_{c_j}) exceeds min freq) and conf(f_i, c_j, D) is significantly greater than |D_{c_j}|/|D|, then F contains f_i or contains a feature that subsumes f_i with respect to class c_j in data set D (we use a chi-squared test to determine significance).
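A sketch of the subsumption test as reconstructed above; the `covers` helper computing M(f, D) is a hypothetical name, and the subsequence test `contains` from the earlier sketch is assumed to be available.

    def covers(feature, examples):
        """M(f, D): ids of examples whose sequence contains the feature."""
        return {eid for eid, (seq, _lab) in enumerate(examples) if contains(feature, seq)}

    def subsumes(f1, f2, c, examples):
        """f1 subsumes f2 w.r.t. class c: f1 covers every c-example that f2 covers,
        and f1 covers only a subset of the non-c examples that f2 covers."""
        m1, m2 = covers(f1, examples), covers(f2, examples)
        in_c = {eid for eid, (_seq, lab) in enumerate(examples) if lab == c}
        return (m2 & in_c) <= (m1 & in_c) and (m1 - in_c) <= (m2 - in_c)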
2.2 Efficient mining of features
We now present the FEATUREMINE algorithm which leverages existing data mining techniques to efficiently mine
features from a set of training examples. Sequence mining algorithms are designed to discover highly frequent and
confident patterns in sequential data sets and so are well suited to our task. FEATUREMINE is based on the recently
proposed SPADE algorithm [5] for fast discovery of sequential patterns. SPADE is a scalable and disk-based algorithm
that can handle millions of example sequences and thousands of items. Consequently FEATUREMINE shares
these properties as well. To construct FEATUREMINE, we adapted the SPADE algorithm to search databases of labeled
examples. FEATUREMINE mines the patterns predictive of all the classes in the database, simultaneously. As
opposed to previous approaches that first mine millions of patterns and then apply pruning as a post-processing step,
FEATUREMINE integrates pruning techniques in the mining algorithm itself. This enables it to search a large space,
where previous methods would fail.
FEATUREMINE uses the observation that the subsequence relation ⪯ defines a partial order on sequences. If α ⪯ β, we say that α is more general than β, or that β is more specific than α. The relation ⪯ is a monotone specialization relation with respect to the frequency fr(α, D), i.e., if β is a frequent sequence, then all of its subsequences are also frequent. The algorithm systematically searches the sequence lattice spanned by the subsequence relation, from general to specific sequences, in a depth-first manner. Figure 2 shows the frequent sequences for our example database.
Figure 2: Frequent Sequence Lattice and Frequency Computation. (In the figure, the idlist for A -> B is obtained by intersecting the idlists of A and B, and the idlist for AB -> B by intersecting the idlists of A -> B and B -> B.)
Frequency Computation: FEATUREMINE uses a vertical database layout, where we associate with each item X in the sequence lattice its idlist, denoted L(X), which is a list of all example ID (eid) and event time (time) pairs containing the item. Figure 2 shows the idlists for the items A and B. Given the sequence idlists, we can determine the support of any k-sequence by simply intersecting the idlists of any two of its (k-1)-length subsequences. A check on the cardinality of the resulting idlist tells us whether the new sequence is frequent or not. There are two kinds of intersections: temporal and equality. For example, Figure 2 shows the idlist for A -> B obtained by performing a temporal intersection of the idlists of A and B, i.e., L(A -> B) = L(A) intersected temporally with L(B). This is done by checking whether, within the same eid, A occurs before B, and listing all such occurrences. On the other hand, the idlist for AB -> B is obtained by an equality intersection, i.e., L(AB -> B) is computed from L(A -> B) and L(B -> B). Here we check to see if the two subsequences occur within the same eid at the same time. Additional details can be found in [5]. We also maintain a class index table indicating the class of each example. Using this table we are able to determine the frequency of a sequence in all the classes at the same time. For example, A occurs in eids {1, 2, 3, 4, 5, 6}. However, eids {1, 2, 3, 4} have label c_1 and {5, 6} have label c_2. Thus the frequency of A is 4 for c_1, and 2 for c_2. The class frequencies for each pattern are shown in the frequency table.
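A simplified sketch of the two idlist joins, with idlists kept as plain lists of (eid, time) pairs; this is a naive illustration of the idea, not SPADE's optimized implementation.

    def temporal_join(l_a, l_b):
        """Idlist for 'A -> B': keep (eid, t_b) whenever A occurs in the same
        eid at some strictly earlier time t_a < t_b."""
        return [(eid_b, t_b) for (eid_b, t_b) in l_b
                if any(eid_a == eid_b and t_a < t_b for (eid_a, t_a) in l_a)]

    def equality_join(l_a, l_b):
        """Idlist for the itemset 'AB': keep (eid, time) pairs present in both
        idlists, i.e. A and B occur in the same event."""
        return [p for p in l_a if p in l_b]

    def support(idlist):
        """Support of a sequence: number of distinct eids in its idlist."""
        return len({eid for eid, _t in idlist})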
To use only a limited amount of main memory, FEATUREMINE breaks up the sequence search space into small, independent, manageable chunks which can be processed in memory. This is accomplished via suffix-based partitioning. We say that two sequences of length k are in the same equivalence class or partition if they share a common suffix of length k-1. The partitions, such as {[A], [B], [C]}, based on length-1 suffixes are called parent partitions. Each parent partition is independent in the sense that it has complete information for generating all frequent sequences that share the same suffix. For example, consider a class [X] and its elements: the only possible frequent sequences at the next step are obtained by joining pairs of elements of [X], and they again share the suffix X. It should be obvious that no other item Q can lead to a frequent sequence with the suffix X, unless (QX) or Q -> X is also in [X].
FEATUREMINE(D):
    for each parent partition P_i do Enumerate-Features(P_i)
Enumerate-Features(P):
    for all elements alpha in P do
        for all elements beta in P do
            R = join of alpha and beta; L(R) = intersection of L(alpha) and L(beta)
            if Rule-Prune(R, max_w, max_l) then skip R
            if R is frequent for some class then add R to the partition of R's suffix
    recursively call Enumerate-Features on each newly formed partition
Rule-Prune(R, max_w, max_l):
    if accuracy(R) == 100% return TRUE;
    if R exceeds the width bound max_w or the length bound max_l, or violates the A ⇝ B rule, return TRUE;
    return FALSE;

Figure 3: The FEATUREMINE Algorithm
Feature Enumeration: FEATUREMINE processes each parent partition in a depth-first manner, as shown in the
pseudo-code of Figure 3. The input to the procedure is a partition, along with the idlist for each of its elements.
Frequent sequences are generated by intersecting the idlists of all distinct pairs of sequences in each partition and
checking the cardinality of the resulting idlist against min sup(c i ). The sequences found to be frequent for some class
c i at the current level form partitions for the next level. This process is repeated until all frequent sequences have been
enumerated.
Integrated Constraints: FEATUREMINE integrates all pruning constraints into the mining algorithm itself, instead of
applying pruning as a post-processing step. As we shall show, this allows FEATUREMINE to search very large spaces
efficiently, which would have been infeasible otherwise. The Rule-Prune procedure eliminates features based on our
two pruning rules, and also based on length and width constraints. While the first pruning rule has to be tested each
time we extend a sequence with a new item, there exists a very efficient one-time method for applying the A ⇝ B rule. The idea is to first compute the frequency of all length-2 sequences. Then, if the number of occurrences of the itemset AB equals the number of occurrences of A, we have A ⇝ B, and we can remove AB from the suffix partition [B]. This guarantees that at no point in the future will A and B appear together in any set of any sequence.
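A sketch of this one-time precomputation, assuming occurrence counts for single items and for length-2 itemsets (pairs occurring in the same event) have already been gathered from the idlists; the dictionary names below are hypothetical.

    def leads_to_pairs(item_count, pair_count, items):
        """Return the set of (A, B) with A leading to B: every occurrence of A is in
        an event that also contains B, detected by count(AB) == count(A)."""
        relations = set()
        for a in items:
            for b in items:
                if a == b:
                    continue
                count_a = item_count.get(a, 0)
                if count_a > 0 and pair_count.get(frozenset((a, b)), 0) == count_a:
                    relations.add((a, b))
        return relations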
3 Empirical evaluation
We now describe experiments to test whether the features produced by our system improve the performance of the
Winnow [6] and Naive Bayes [7] classification algorithms.
Winnow: Winnow is a multiplicative weight-updating algorithm. We used a variant of Winnow that maintains a weight w_{i,j} for each feature f_i and class c_j. Given an example, the activation level for class c_j is Σ_i w_{i,j} x_i, where x_i is 1 if feature f_i is true in the example, or 0 otherwise. Given an example, Winnow outputs the class with the highest activation level. During training, Winnow iterates through the training examples. If Winnow's classification of a training example does not agree with its label, then Winnow updates the weights of each feature f_i that was true in the example: it multiplies the weights for the correct class by some constant α > 1 and multiplies the weights for the incorrect classes by some constant β < 1. Learning is often sensitive to the values used for α and β; we chose our values based on what is common in the literature and a small amount of
experimentation. Winnow can actually be used to prune irrelevant features. For example, we can run Winnow with
large feature sets (say 10,000) and then throw away any features that are assigned weight 0, or near 0. However, this
is not practical for sequence classification, since the space of potential features is exponential.
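A sketch of the Winnow variant described above, operating on boolean feature vectors; the concrete values of alpha and beta below are placeholders, since the values used in the experiments are not stated here.

    class Winnow:
        def __init__(self, n_features, classes, alpha=1.5, beta=0.5):
            self.alpha, self.beta = alpha, beta
            self.w = {c: [1.0] * n_features for c in classes}  # w[c][i]: weight of feature i for class c

        def predict(self, x):
            # activation of class c is sum_i w[c][i] * x[i]; return the argmax class
            return max(self.w, key=lambda c: sum(wi * xi for wi, xi in zip(self.w[c], x)))

        def train(self, examples, epochs=5):
            for _ in range(epochs):
                for x, label in examples:
                    if self.predict(x) != label:
                        for i, xi in enumerate(x):
                            if xi:  # only features true in the example are updated
                                self.w[label][i] *= self.alpha
                                for c in self.w:
                                    if c != label:
                                        self.w[c][i] *= self.beta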
Naive-Bayes Classifier: For each feature f_i and class c_j, Naive Bayes computes P(f_i | c_j) as the fraction of training examples of class c_j that contain f_i. Given a new example in which features f_1, ..., f_n are true, Naive Bayes returns the class c_j that maximizes P(c_j) ∏_{i=1}^{n} P(f_i | c_j). Even though the Naive Bayes algorithm appears to make the unjustified assumption that all features are independent, it has been shown to perform surprisingly well, often doing as
well as or better than C4.5 [8]. We now describe the domains we tested our approach in and then discuss the results of
our experiments.
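A corresponding sketch of the Naive Bayes classifier for boolean features; the add-one smoothing used to avoid zero probabilities is our own assumption rather than a detail given in the paper.

    import math
    from collections import Counter, defaultdict

    def train_naive_bayes(examples, n_features):
        """examples: list of (boolean feature vector, label). Returns priors P(c)
        and conditionals P(f_i = True | c) with add-one smoothing."""
        class_counts = Counter(label for _x, label in examples)
        true_counts = defaultdict(lambda: [0] * n_features)
        for x, label in examples:
            for i, xi in enumerate(x):
                true_counts[label][i] += int(xi)
        priors = {c: n / len(examples) for c, n in class_counts.items()}
        cond = {c: [(true_counts[c][i] + 1) / (class_counts[c] + 2) for i in range(n_features)]
                for c in class_counts}
        return priors, cond

    def classify(x, priors, cond):
        """Return the class maximizing P(c) * product over true features of P(f_i | c)."""
        def score(c):
            return math.log(priors[c]) + sum(math.log(cond[c][i]) for i, xi in enumerate(x) if xi)
        return max(priors, key=score)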
3.0.1 Random parity problems:
We first describe a non-sequential problem on which standard classification algorithms perform very poorly. In this
problem, every feature is true in exactly half of the examples in each class and the only way to solve the problem
is to discover which combinations of features are correlated with the different classes. Intuitively, we construct a
problem by generating N randomly-weighted "meta features", each of which is composed of a set of M actual, or
observable, features. The parity of the M observable features determines whether the corresponding meta feature is
true or false, and the class label of an instance is a function of the sum of the weights of the meta features that are true.
Thus, in order to solve these problems, FEATUREMINE must determine which of the observable features correspond
to the same meta feature. It is more important to discover the meta features with higher weights than ones with lower
weights. Additionally, to increase the difficulty of the problem, we add irrelevant features which have no bearing on
the class of an instance.
More formally, the problem consists of N parity problems of size M with L distracting, or irrelevant, features. For every 1 ≤ i ≤ N and 1 ≤ j ≤ M, there is a boolean feature F_{i,j}. Additionally, for 0 ≤ k < L, there is an irrelevant boolean feature I_k. To generate an instance, we randomly assign each relevant and irrelevant boolean true or false with 50/50 probability; an instance is thus a complete truth assignment to these booleans. There are NM + L features, and 2^{NM+L} distinct instances. All possible instances are equally likely.
We also choose N weights w 1 ; :::; wN , from the uniform distribution between 0 and 1, which are used to assign
each instance one of two class labels (ON or OFF) as follows. An instance is credited with weight w i iff the ith set of
M features has an even parity. That is, the "score" of an instance is the sum of the weights w i for which the number
of true features in f i;1 ; :::f i;M is even. If an instance's score is greater than half the sum of all the weights,
then the instance is assigned class label ON, otherwise it is assigned OFF. Note that if M > 1, then no feature by
itself is at all indicative of the class label ON or OFF, which is why parity problems are so hard for most classifiers.
The job of FEATUREMINE is essentially to figure out which features should be grouped together. Example features that were produced by FEATUREMINE, for the results shown in Table 1, include (f_{1,1}=true, f_{1,2}=true) and (f_{4,1}=true, f_{4,2}=false). We used a min freq of .02 to .05.
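A sketch of how such a parity data set could be generated, following the description above; the ON/OFF encoding of the label and the use of Python's random module are our own choices.

    import random

    def make_parity_example(N, M, L, weights):
        """One instance of the N x M parity problem with L irrelevant features.
        Returns (feature dict, label)."""
        F = {(i, j): random.random() < 0.5 for i in range(N) for j in range(M)}
        I = {k: random.random() < 0.5 for k in range(L)}
        # credit weight w_i iff the i-th group of M features has even parity
        score = sum(w for i, w in enumerate(weights)
                    if sum(F[(i, j)] for j in range(M)) % 2 == 0)
        label = 'ON' if score > sum(weights) / 2 else 'OFF'
        return ({**{f'F{i},{j}': v for (i, j), v in F.items()},
                 **{f'I{k}': v for k, v in I.items()}}, label)

    weights = [random.random() for _ in range(5)]                 # N = 5 meta-features
    data = [make_parity_example(5, 2, 10, weights) for _ in range(1000)]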
3.0.2 Forest fire plans:
The FEATUREMINE algorithm was originally motivated by the task of plan monitoring in stochastic domains. Probabilistic
planners construct plans with high probability of achieving their goal. The task of monitoring is to "watch" as
the plan is executed and predict, in advance, whether the plan will most likely succeed or fail to facilitate re-planning.
In order to build a monitor given a plan and goal, we first simulate the plan repeatedly to generate execution traces
and label each execution trace with SUCCESS or FAILURE depending on whether or not the goal holds in the final
state of the simulation. We then use these execution traces as input to FEATUREMINE.
Plan monitoring (or monitoring any probabilistic process that we can simulate) is an attractive area for machine
learning because there is an essentially unlimited supply of training data. Although we cannot consider all possible
execution paths, because the number of paths is exponential in the length of the plan, or process, we can generate
arbitrary numbers of new examples with Monte Carlo simulation. The problem of over-fitting is reduced because we
can test our hypotheses on "fresh" data sets.
As an example domain, we constructed a simple forest-fire domain based loosely on the Phoenix fire simulator [9]
(execution traces are available by email; contact [email protected]). We use a grid representation of the terrain. Each
grid cell can contain vegetation, water, or a base. An example terrain is shown in figure 4. At the beginning of each
.B.bb.B. ++d.d.+ ++d.d++ ++d++++d++
.bb.d.bb.d. ++d.bb.d++ ++d.bb.d++ ++d.+b.d++
.Bbb.B.wwwwww. ++d.bb.d++ ++d.bb.d++ ++d.d++
.wwwwww.wwwwww. ++wwwwww++ ++wwwwww++ ++wwwwww++
.wwwwww.wwwwww. ++wwwwww++ ++wwwwww++ ++wwwwww++
.+www.++www. ++++www+++ ++++www+++ ++++www+++
. ++++++++++ ++++++++++ ++++++++++
. ++++++++++ ++++++++++ ++++++++++
time
Figure 4: ASCII representation of several time slices of an example simulation of the fire world domain. A '+' indicates
fire. A 'b' indicates a base. A 'B' indicates a bulldozer. A 'd' indicates a place where the bulldozer has dug a fire line.
A 'w' indicates water, an unburnable terrain.
simulation, the fire is started at a random location. In each iteration of the simulation, the fire spreads stochastically.
The probability of a cell igniting at time t is calculated based on the cell's vegetation, the wind direction, and how many of the cell's neighbors are burning at time t-1. Additionally, bulldozers are used to contain the fire. For each
example terrain, we hand-designed a plan for bulldozers to dig a fire line to stop the fire. The bulldozer's speed varies
from simulation to simulation. An example simulation looks like:
(time0 Ignite X3 Y7), (time0 MoveTo BD1 X3 Y4), (time0 MoveTo BD2 X7 Y4), (time0 DigAt BD2 X7 Y4), .,
(time6 Ignite X4 Y8), (time6 Ignite X3 Y8), (time8 DigAt BD2 X7 Y3), (time8 Ignite X3 Y9), (time8 DigAt BD1
X2 Y3), (time8 Ignite X5 Y8), ., (time32 Ignite X6 Y1), (time32 Ignite X6 Y0), .
We tag each plan with SUCCESS if none of the locations with bases have been burned in the final state, or FAILURE
otherwise. To train a plan monitor that can predict at time k whether or not the bases will ultimately be burned,
we only include events which occur by time k in the training examples. Example features produced by FEATUREMINE in this domain include the following two sequences. The first holds if bulldozer BD1 moves to the second column before time 6. The second holds if a fire ignites anywhere in the second column and then any bulldozer moves to the third row at time 8.
Many correlations used by our second pruning rule described in section 2.2 arise in these data sets. For example, an A ⇝ B relation between the column-8 item and the Ignite event type arises in one of our test plans in which a bulldozer never moves in the eighth column.
For the fire data, there are 38 boolean features to describe each event, and thus the number of composite features we search over grows exponentially in max_l. In the experiments reported here, we used a min freq threshold together with the bounds max_w and max_l.
Experiment Winnow WinnowTF WinnowFM Bayes BayesTF BayesFM
parity,
parity,
parity,
spelling, their vs. there .70 N/A .94 .75 N/A .78
spelling, I vs. me .86 N/A .94 .66 N/A .90
spelling, than vs. then .83 N/A .92 .79 N/A .81
spelling, you're vs. your .77 N/A .86 .77 N/A .86
Table 1: Classification results: the average classification accuracy using different feature sets to represent the examples. Legend: TF uses features obtained by the Times Features approach, and FM uses features produced by FEATUREMINE. The highest accuracy was obtained with the features produced by the FEATUREMINE algorithm. The standard deviations are shown in parentheses following each average, except for the spelling problems, for which only one test and training set were used.
3.0.3 Context-sensitive spelling correction
We also tested our algorithm on the task of correcting spelling errors that result in valid words, such as substituting
there for their ([10]). For each test, we chose two commonly confused words and searched for sentences in the 1-
million-word Brown corpus [11] containing either word. We removed the target word and then represented each word
by the word itself, the part-of-speech tag in the Brown corpus, and the position relative to the target word. For example,
the sentence "And then there is politics" is translated into (word=and tag=cc pos=-2) -> (word=then tag=rb pos=-1) -> (word=is tag=bez pos=+1) -> (word=politics tag=nn pos=+2).
Example features produced by FEATUREMINE include (pos=+3) -> (word=the), indicating that the word the occurs at least 3 words after the target word, and a feature indicating that a noun occurs within three
words before the target word. These features (for reasons not obvious to us) were significantly correlated with either
there or their in the training set.
For the "I" vs. "me" dataset there were 3802 training examples, 944 test examples, and 4692 feature/value pairs;
for "there" vs. "their" dataset there were 2917 training examples, 755 test examples, and 5663 feature/value pairs;
for "than" vs. "then" dataset there were 2019 training examples, 494 test examples, and 4331 feature/value pairs; and
finally, for the "you're" vs. "your" dataset there were 647 training examples, 173 test examples, and 1646 feature/value pairs. If N is the number of feature/value pairs, then the number of candidate features we search over is on the order of (N^{max_w})^{max_l}. In the experiments reported here, we used a min freq threshold together with small bounds max_w and max_l.
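A sketch of the representation step for the spelling data, assuming a list of (word, part-of-speech tag) pairs per sentence and the index of the target word; the window parameter is our own assumption, added only to keep the event list bounded.

    def sentence_to_events(tagged_words, target_index, window=10):
        """Drop the target word and describe each remaining word by the
        (word=..., tag=..., pos=offset) triple used as one event."""
        events = []
        for i, (word, tag) in enumerate(tagged_words):
            if i == target_index:
                continue
            offset = i - target_index
            if abs(offset) <= window:
                events.append({f'word={word.lower()}', f'tag={tag}', f'pos={offset:+d}'})
        return events  # ordered list of item sets, ready for FEATUREMINE

    # e.g. sentence_to_events([('And','cc'), ('then','rb'), ('there','ex'),
    #                          ('is','bez'), ('politics','nn')], target_index=2)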
3.1 Results
For each test in the parity and fire domains, we generated 7,000 random training examples. We mined features from
1,000 examples, pruned features that did not pass a chi-squared significance test (for correlation to a class the feature
was frequent in) in 2,000 examples, and trained the classifier on the remaining 5,000 examples. We then tested on 1,000
additional examples. The results in Tables 1 and 2 are averages from 25-50 such tests. For the spelling correction, we
Experiment                    Evaluated features   Selected features
fire world, time = 10         64,766               553
spelling, there vs. their     782,264              318

Table 2: Mining results: number of features considered and returned by FEATUREMINE
Experiment    CPU seconds    CPU seconds            CPU seconds    Features examined   Features examined        Features examined
              (no pruning)   (only A ⇝ B pruning)   (all pruning)  (no pruning)        (only A ⇝ B pruning)     (all pruning)
random        320            337                    337            1,547,122           1,547,122                1,547,122
fire world    5.8 hours      560                    559            25,336,097          511,215                  511,215
spelling      490            407                    410            1,126,114           999,327                  971,085

Table 3: Impact of pruning rules: running time and nodes visited for FEATUREMINE with and without the A ⇝ B pruning. Results taken from one data set for each example.
used all the examples in the Brown corpus, roughly 1000-4000 examples per word set, split 80-20 (by sentence) into
training and test sets. We mined features from 500 sentences and trained the classifier on the entire training set.
Table 1 shows that the features produced by FEATUREMINE improved classification performance. We compared
using the feature set produced by FEATUREMINE with using only the primitive features themselves, i.e. features of
length 1. In the fire domain, we also evaluated the feature set containing a feature for each primitive feature at each
time step (this is the feature set of size 3N described in section 1.1). Both Winnow and Naive Bayes performed
much better with the features produced by FEATUREMINE. In the parity experiments, the mined features dramatically
improved the performance of the classifiers and in the other experiments the mined features improved the accuracy of
the classifiers by a significant amount, often more than 20%.
Table 2 shows the number of features evaluated and the number returned, for several of the problems. For the
largest random parity problem, FEATUREMINE evaluated more than 7 million features and selected only about 200.
There were in fact 100 million possible features (there are 50 boolean features, giving rise to 100 feature-value pairs, and we searched to depth 4, so there are 100^4 = 10^8 candidates), but most of these were rejected implicitly by the pruning rules.
Table 3 shows the impact of the A ⇝ B pruning rule described in Section 2.2 on mining time. The results are from one data set from each domain, with slightly higher values for max_l and max_w than in the above experiments. The pruning rule did not improve mining time in all cases, but made a tremendous difference in the fire world problems, where the same event descriptors often appear together. Without A ⇝ B pruning, the fire world problems are
essentially unsolvable because FEATUREMINE finds over 20 million frequent sequences.
4 Related work
A great deal of work has been done on feature-subset selection, motivated by the observation that classifiers can
perform worse with feature set F than with some F 0 F (e.g., [4]). The algorithms explore the exponentially
large space of all subsets of a given feature set. In contrast, we explore exponentially large sets of potential features,
but evaluate each feature independently. The feature-subset approach seems infeasible for the problems we consider,
which contain hundreds of thousands to millions of potential features.
[10] applied a Winnow-based algorithm to context-sensitive spelling correction. They use sets of 10,000 to 40,000
features and either use all of these features or prune some based on the classification accuracy of the individual features.
They obtain higher accuracy than we did. Their approach, however, involves an ensemble of Winnows, combined by
majority weighting, and they took more care in choosing good parameters for this specific task. Our goal, here, is to
demonstrate that the features produced by FEATUREMINE improve classification performance.
Data mining algorithms have often been applied to the task of classification. [2] build decision lists out of patterns
found by association mining, which is the non-sequential version of sequence mining. Additionally, while previous
work has explored new methods for combining association rules to build classifiers, the thrust of our work has been
to leverage and augment standard classification algorithms. Our pruning rules resemble ones used by [1], which also
employs data mining techniques to construct decision lists. Previous work on using data mining for classification has
focused on combining highly accurate rules together. By contrast, our classification algorithms can weigh evidence
from many features which each have low accuracy in order to classify new examples.
Our work is close in spirit to [12], which also constructs a set of sequential, boolean features for use by classification
algorithms. They employ a heuristic search algorithm, called FGEN, which incrementally generalizes features
to cover more and more of the training examples, based on its classification performance on a hold-out set of training
data, whereas we perform an exhaustive search (to some depth) and accept all features which meet our selection crite-
ria. Additionally, we use a different feature language and have tested our approaches on different classifiers than they
have.
Conclusions
We have shown that data mining techniques can be used to efficiently select, or construct, features for sequential
domains such as DNA, text, web usage data, and plan execution traces. These domains are challenging because of the
exponential number of potential subsequence features that can be formed from the primitives for describing each item
in the sequence data. The number of features is too large to be practically handled by today's classification algorithms.
Furthermore, this feature set contains many irrelevant and redundant features which can reduce classification accuracy.
Our approach is to search over the set of possible features, mining for ones that are frequent, predictive, and not-
redundant. By adapting scalable and disk-based data mining algorithms, we are able to perform this search efficiently.
However, in one of the three domains studied, this search was only practical due to the pruning rules we have incorporated
into our search algorithms. Our experiments in several domains show that the features produced by applying our
selection criteria can significantly improve classification accuracy. In particular, we have shown that we can construct
problems in which classifiers perform no better than random guessing using the original features but perform with near
perfect accuracy when using the features produced by FEATUREMINE. Furthermore, we have shown that the features
produced by FEATUREMINE can improve performance by as much as 20% in a simulated fire-planning domain and
on spelling correction data. More generally, this work shows that we can apply classification algorithms to domains in
which there is no obvious, small set of features for describing examples, but there is a large space of combinations of
primitive features that probably contains some useful features. Future work could involve applying these ideas to the
classification of, for example, images or audio signals.
--R
Learning decision lists using homogeneous rules.
Integrating classification and association rule mining.
Mining audit data to build intrusion detection models.
Greedy attribute selection.
Efficient enumeration of frequent sequences.
Learning quickly when irrelevant attributes abound: A new linear-threshold algorithm
Pattern Classification and Scene Analysis.
Beyond independence: conditions for the optimality of the simple bayesian clas- sifier
Predicting and explaining success and task duration in the phoenix planner.
Applying winnow to context-sensitive spelling correction
Computational Analysis of Present-Day American English
Feature generation for sequence categorization.
--TR
--CTR
Xiaonan Ji , James Bailey , Guozhu Dong, Mining minimal distinguishing subsequence patterns with gap constraints, Knowledge and Information Systems, v.11 n.3, p.259-286, April 2007
Florence Duchêne , Catherine Garbay , Vincent Rialle, Learning recurrent behaviors from heterogeneous multivariate time-series, Artificial Intelligence in Medicine, v.39 n.1, p.25-47, January, 2007
San-Yih Hwang , Chih-Ping Wei , Wan-Shiou Yang, Discovery of temporal patterns from process instances, Computers in Industry, v.53 n.3, p.345-364, April 2004
Sébastien Ferré , Ross D. King, A Dichotomic Search Algorithm for Mining and Learning in Domain-Specific Logics, Fundamenta Informaticae, v.66 n.1-2, p.1-32, January 2005
Mohammed J. Zaki , Charu C. Aggarwal, XRules: An effective algorithm for structural classification of XML data, Machine Learning, v.62 n.1-2, p.137-170, February 2006
Chih-Ming Chen, Incremental personalized web page mining utilizing self-organizing HCMAC neural network, Web Intelligence and Agent System, v.2 n.1, p.21-38, January 2004
Chih-Ming Chen, Incremental personalized web page mining utilizing self-organizing HCMAC neural network, Web Intelligence and Agent System, v.2 n.1, p.21-38, August 2004 | classification;sequence mining;feature extraction;feature selection |
630575 | Analyzing the Subjective Interestingness of Association Rules. | Association rules are a class of important regularities in databases. They are found to be very useful in practical applications. However, association rule mining algorithms tend to produce a huge number of rules, most of which are of no interest to the user. Due to the large number of rules, it is very difficult for the user to analyze them manually to identify those truly interesting ones. This article presents a new approach to assist the user in finding interesting rules (in particular, unexpected rules) from a set of discovered association rules. This technique is characterized by analyzing the discovered association rules using the user's existing knowledge about the domain and then ranking the discovered rules according to various interestingness criteria, e.g., conformity and various types of unexpectedness. This technique has been implemented and successfully used in a number of applications. | Introduction
The interestingness issue has long been identified as an important problem in data mining. It refers to
finding rules that are interesting/useful to the user, not just any possible rule [e.g., 1, 11, 12, 21, 23,
24, 27, 30]. The reason for its importance is that, in practice, it is all too easy for a data mining
algorithm to discover a glut of rules, and most of these rules are of no interest to the user [11, 12, 21,
27, 30]. This is particularly true for association rule mining [e.g., 2, 3, 7, 14, 16, 28], which often
produces a huge number of rules. The huge number of rules makes manual inspection of the rules
very difficult. Automated assistance is needed. This paper presents an interestingness analysis system
(IAS) to help the user identify interesting association rules.
1.1 Rule interestingness measures
Past research in data mining has shown that the interestingness of a rule can be measured using
objective measures and subjective measures [e.g., 27, 11]. Objective measures involve analyzing the
rule's structure, predictive performance, and statistical significance [e.g., 27, 21, 17, 14, 2, 3]. In
association rule mining, such measures include support and confidence [2, 28, 3]. However, it is noted
in [21] that objective measures are insufficient for determining the interestingness of a discovered
rule. Subjective measures are needed. Subjective interestingness is the topic of this paper. Two main
subjective interestingness measures are: unexpectedness [11, 27] and actionability [21, 27].
. Unexpectedness: Rules are interesting if they are unknown to the user or contradict the user's
existing knowledge (or expectations).
. Actionability: Rules are interesting if the user can do something with them to his/her advantage.
Although both unexpectedness and actionability are important, actionability is the key concept in
most applications because actionable rules allow the user to do his/her job better by taking some
specific actions in response to the discovered knowledge [21, 27]. Actionability is, however, an
elusive concept because it is not feasible to know the space of all rules and the actions to be attached
to them [27]. Fortunately, the two measures are not mutually exclusive. Interesting rules can be
classified into three categories: (1) rules that are both unexpected and actionable; (2) rules that are
unexpected but not actionable; and (3) rules that are actionable but expected.
In this research, we only focus on unexpectedness. Actionability is partially handled through
unexpectedness because actionable rules are either expected or unexpected. Thus, the proposed
technique aims to find expected and unexpected association rules. Expected rules are also called
conforming rules as they conform to the user's existing knowledge or expectations.
1.2 Generalized association rules
Before discussing our proposed technique, let us first introduce the concept of association rules, in
particular, generalized association rules [28]. The generalized association rule model is more general
than the original association rule model given in [2].
The (generalized) association rule mining is defined as follows: Let I be a set of
items. Let G be a directed acyclic graph on the items. An edge in G represents an is-a relationship.
Then, G is a set of taxonomies. A taxonomy example is shown in Figure 1. Let T be a set of
transactions, where each transaction t is a set of items such that t ⊆ I. A (generalized) association rule
is an implication of the form X → Y, where X ⊂ I, Y ⊂ I, and X ∩ Y = ∅. The rule X → Y holds in the
transaction set T with confidence c if c% of transactions in T that support X also support Y. The rule
has support s in T if s% of the transactions in T contain X ∪ Y.
For example, an association rule could be:
grape → apple [support = 10%, confidence = 60%]
which says that 10% of people buy grape and apple together, and 60% of the people who buy grape
also buy apple. This rule only involves items at the bottom level of the taxonomy. We can also have
rules that involve items of more than one level. For example,
Figure 1: An example taxonomy (Fooditem has subclasses Fruit, Dairy_product and Meat; Fruit = {grape, pear, apple}, Dairy_product = {milk, cheese, butter}, Meat = {beef, pork, chicken})
Fruit, milk → Meat
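To make the support and confidence definitions concrete, the following short Python sketch counts them for a candidate rule over a set of transactions, each represented as a set of items. The function name and the sample data are ours, chosen only for illustration.

# Sketch: support and confidence of a rule X -> Y over a transaction set T.
# Each transaction is a set of items; X and Y are disjoint sets of items.
def support_confidence(X, Y, transactions):
    n = len(transactions)
    n_x = sum(1 for t in transactions if X <= t)          # transactions supporting X
    n_xy = sum(1 for t in transactions if (X | Y) <= t)   # transactions containing X union Y
    support = n_xy / n
    confidence = n_xy / n_x if n_x else 0.0
    return support, confidence

T = [{"grape", "apple"}, {"grape"}, {"milk", "cheese"}, {"grape", "apple", "beef"}]
print(support_confidence({"grape"}, {"apple"}, T))        # (0.5, 0.666...)

A generalized rule is handled the same way once each transaction is extended with the ancestors (e.g., Fruit, Fooditem) of the items it contains.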
1.3 Summary of the proposed technique
The basic idea of our technique is as follows: The system first asks the user to specify his/her existing
knowledge, e.g., beliefs or concepts, about the domain. It then analyzes the discovered rules to
identify those potentially interesting ones (e.g., unexpected rules). The proposed technique is an
interactive and iterative post-processing technique (see Section 3). It consists of three components:
1. A specification language: it allows the user to specify his/her various types of existing knowledge.
2. An interestingness analysis system: it analyzes the discovered association rules using the user's
specifications, and through such analysis, to identify: conforming rules, unexpected consequent
rules, unexpected condition rules and both-side unexpected rules.
3. A visualization system: it enables the user to visually detect interesting rules easily.
The proposed technique has been implemented and successfully applied to a number of applications.
The system is called IAS. It can be downloaded from: http://www.comp.nus.edu.sg/~dm2.
The paper is organized as follows: In the next section, we discuss the related work. Section 3
presents the proposed technique. Section 4 describes the visualization system using an example.
Section 5 evaluates the proposed technique. Section 6 concludes the paper.
2. Related Work
Existing research in rule interestingness focuses on either objective interestingness or subjective
interestingness. Objective interestingness analyzes rules' structure, predictive performance, statistical
significance, etc [e.g., 2, 3, 14, 16, 18, 22, 25, 31, 32]. Objective interestingness will not be discussed
further, as it is not the focus of this paper. This paper studies subjective interestingness. We assume
that objective interestingness analysis [14, 3, 32] has been performed to remove those redundant
and/or insignificant rules.
Most existing approaches to finding subjectively interesting association rules ask the user to
explicitly specify what types of rules are interesting and uninteresting. The system then generates or
retrieves those matching rules. [10] proposes a template-based approach. In this approach, the user
specifies interesting and uninteresting association rules using templates. A template describes a set of
rules in terms of items occurred in the conditional and the consequent parts. The system then retrieves
the matching rules from the set of discovered rules.
[29] proposes an association rule mining algorithm that can take item constraints specified by the
user in the rule mining process so that only those rules that satisfy the constraints are generated. [20]
extends this approach further to allow much more sophisticated constraints to be specified by the user.
It also uses the constraints to optimize the association rule mining process. The idea of using
constraints in the rule mining process is important as it avoids generating irrelevant rules.
Along the similar line, there are also a number of works based on data mining queries. For
example, M-SQL in [8], DMQL in [7], and Metaqueries in [26]. A data mining query basically
defines a set of rules of a certain type (or constraints on the rule to be found). To "execute" a query
means to find all rules that satisfy the query.
All the above methods view the process of finding subjectively interesting rules as a query-based
process, although the queries may be considered during rule generation or after all rules have been
discovered. Query-based methods have the following problems.
1. It is hard to find the truly unexpected rules. They can only find those anticipated rules because
queries can only be derived from the user's existing knowledge space. Yet, many rules that do not
satisfy the user's queries may also be of interest. It is just that the user has never thought of them
(they are unexpected or novel) or has forgotten about them.
2. The user often does not know or is unable to specify completely what interest him/her. He/she
needs to be stimulated or reminded. Query-based approaches do not actively perform this task
because they only return those rules that satisfy the queries.
Our proposed technique not only identifies those conforming rules as query-based methods, but also
provides three types of unexpected rules. Thus, the user is exposed to more possible interesting
aspects of the discovered rules rather than only focusing on his/her current interests (which he/she
may not be sure). If the unexpected rules are not truly unexpected, they serve to remind the user what
he/she has forgotten. IAS's visualization system also helps the user explore interesting rules easily.
In [11, 12], we reported two techniques for analyzing the subjective interestingness of
classification rules. However, those techniques cannot be applied to analyzing association rules.
Association rules require a different specification language and different ways of analyzing and
ranking the rules.
[23, 24] proposes a method of discovering unexpected patterns that takes into consideration a set
of expectations or beliefs about the problem domain. The method discovers unexpected patterns using
these expectations to seed the search for patterns in data that contradict the beliefs. However, this
method is in general not as efficient and flexible as our post-analysis method unless the user is able to
specify his/her beliefs or expectations about the domain completely beforehand, which is very
difficult, if not impossible [4, 5]. Typically, user interaction with the system is needed in order for
him/her to provide a more complete set of expectations and to find more interesting rules. Our post-analysis
method facilitates user-interaction because of its efficiency. The approach given in [23, 24]
also does not handle user's rough or vague feelings, but only precise knowledge (see Section 3.1).
User's vague feelings are important for identifying interesting rules because in our applications we
found that the user is more likely to have such forms of knowledge than precise knowledge. The
definitions of vague feelings and precise knowledge will be given in Section 3.1.
The system WizWhy [31] also has a method to produce unexpected rules. Its method, however, is
based on objective interestingness as its analysis does not depend on individual users. It first
computes the expected probability of a rule assuming independence of each of its conditions. It then
compares this expected probability with the rule's actual probability to compute its unexpectedness.
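The independence-based comparison just described can be illustrated with a small Python sketch. The code below is our own illustration of that idea, not WizWhy's actual implementation; the function and variable names are assumptions.

# Sketch (our illustration): unexpectedness of a rule measured by comparing its actual
# probability with the probability expected if all of its items occurred independently.
def independence_unexpectedness(item_probs, actual_prob, items):
    expected = 1.0
    for it in items:                      # product of the individual item probabilities
        expected *= item_probs[it]
    return actual_prob - expected         # a large positive difference means unexpected

probs = {"grape": 0.3, "apple": 0.2}
print(independence_unexpectedness(probs, 0.10, ["grape", "apple"]))   # 0.10 - 0.06 = 0.04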
[27] proposes to use belief systems to describe unexpectedness. A number of formal approaches
to the belief systems are presented, e.g., Bayesian probability and Dempster-Shafer theory. These
approaches require the user to provide complex belief information, such as conditional probabilities,
which are difficult to obtain in practice.
There are also existing techniques that work in the contexts of specific domains. For example,
[21] studies the issue of finding interesting deviations in a health care application. Its data mining
system, KEFIR, analyzes health care information to uncover "key findings". A domain expert system
is constructed to evaluate the interestingness (in this case, actionability) of the "key findings". The
approach is, however, application specific. It also does not deal with association rules. Our method is
general. It does not make any domain-specific assumptions.
3. IAS: Interestingness Analysis System
We now present IAS. Basically, IAS is an interactive and iterative technique. In each iteration, it first
asks the user to specify his/her existing knowledge about the domain. It then uses this knowledge to
analyze the discovered rules according to some interestingness criteria, conformity and various types
of unexpectedness, and through such analysis to identify those potentially interesting rules. The IAS
system works as follows:
Repeat until the user decides to stop
1 the user specifies some existing knowledge or modifies the knowledge specified previously;
2 the system analyzes the discovered rules according to their conformity and unexpectedness;
3 the user inspects the analysis results through the visualization system, saves the interesting
rules, and removes those unwanted rules.
3.1. The specification language
IAS has a simple specification language to enable the user to express his/her existing knowledge. This
language focuses on representing the user's existing knowledge about associative relations on items in
the database. The basic syntax of the language takes the same format as association rules. It is
intuitive and simple, which is important for practical applications.
The language allows three types of specifications. Each represents knowledge of a different
degree of preciseness. They are:
. general impressions,
. reasonably precise concepts, and
. precise knowledge.
The first two types of knowledge represent the user's vague feelings. The last type represents
his/her precise knowledge. This division is important because human knowledge has granularities. It
is common that some aspects of our knowledge about a domain are quite vague, while other aspects
are very precise. For example, we may have a vague feeling or impression that some Meat items and
Fruit items should be associated, but have no idea what items are involved and how they are
associated. However, we may know precisely from past experiences or a previous data mining session
that buying bread implies buying milk with a support of around 10% and confidence of around 70%.
It is crucial to allow different types of knowledge to be specified. This not only determines how
we can make use of the knowledge, but also whether we can make use of all possible knowledge from
the user. For example, if a system can only handle precise knowledge, then the user who does not
have precise knowledge but has only vague impressions cannot use it.
The proposed specification language also makes use of the idea of a class hierarchy (or taxonomy),
which is the same as the one used in generalized association rules [28]. We represent the hierarchy in
Figure 1 as follows:
{grape, pear, apple} ⊂ Fruit ⊂ Fooditem
{milk, cheese, butter} ⊂ Dairy_product ⊂ Fooditem
{beef, pork, chicken} ⊂ Meat ⊂ Fooditem
Fruit, Dairy_product, Meat and Fooditems are classes (or class names). grape, pear, apple, milk,
cheese, beef, pork, chicken, #Fruit, #Dairy_product, #Meat and #Fooditems are items. Note that in
generalized association rules, class names can also be treated as items, in which case, we append a "#"
in front of a class name. Note also that in the proposed language, a class hierarchy does not need to be
constructed beforehand, but can be created on the fly when needed.
We now discuss the three types of knowledge that the user may input. The next sub-section shows
how these types of knowledge are used in finding conforming and unexpected rules.
General Impression (GI): It represents the user's vague feeling that there should be some
associations among some classes of items, but he/she is not sure how they are associated. This can
be expressed as:
gi(<S 1 , S 2 , ..., S m >) [support, confidence]
where (1) Each S i is one of the following: an item, a class, or an expression C+ or C*, where C
is a class. C+ and C* correspond to one or more, and zero or more instances of the
class C, respectively.
(2) A discovered rule a 1 , ..., a n → b 1 , ..., b k conforms to the GI if <a 1 , ..., a n , b 1 , ..., b k >
can be considered to be an instance of <S 1 , ..., S m >, otherwise it is unexpected with
respect to the GI.
(3) This impression actually represents a disjunctive propositional formula. Each
disjunct is an implication. For example, "gi(<a, {b, c}+>)" can be expanded into a disjunction of
implications, one for each way of splitting the involved items between the condition and the consequent,
e.g., (a → b), (b → a), (a → b, c), (b, c → a), and so on (note that {b, c} is treated as a constructed class without a name).
A discovered association rule conforms to the impression if the rule is one of the
disjuncts. We can see that the formula is much more complex than the GI.
(4) Support and confidence are optional. The user can specify the minimum support and
the minimum confidence of the rules that he/she wants to see.
Example 1: The user believes that there exist some associations among {milk, cheese}, Fruit items, and
beef (assume we use the class hierarchy in Figure 1). He/she specifies this as:
gi(<{milk, cheese}*, Fruit+, beef>)
{milk, cheese} here represents a class constructed on the fly unlike Fruit. The following are
examples of association rules that conform to the specification:
apple → beef
grape, pear, beef → milk
The following two rules are unexpected with respect to this specification:
(2) milk, cheese, pear → clothes
(1) is unexpected because Fruit+ is not satisfied. (2) is unexpected because beef is not present in
the rule, and clothes is not from any of the elements of the GI specification.
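A simplified Python sketch of this syntactic check is given below. The encoding of GI elements, the taxonomy dictionary and the function names are ours; the sketch greedily assigns each rule item to the first element it can match and then checks the cardinality constraints, which is enough for the examples above but is not a full matching procedure.

# Sketch: does a discovered rule conform to gi(<S1, ..., Sm>)?
# Elements are encoded as ("item", name), ("class", C), ("plus", C) or ("star", C).
def matches(item, elem, taxonomy):
    kind, name = elem
    if kind == "item":
        return item == name
    return item in taxonomy.get(name, set())          # item is an instance of class C

def gi_conforms(rule_items, elements, taxonomy):
    counts = [0] * len(elements)
    for item in rule_items:
        for i, elem in enumerate(elements):
            if matches(item, elem, taxonomy):
                counts[i] += 1
                break
        else:
            return False                              # item matches no element of the GI
    for (kind, _), c in zip(elements, counts):
        if kind in ("item", "class") and c != 1:      # exactly one instance required
            return False
        if kind == "plus" and c < 1:                  # one or more instances required
            return False
    return True

taxonomy = {"Fruit": {"grape", "pear", "apple"}, "MC": {"milk", "cheese"}}
gi = [("star", "MC"), ("plus", "Fruit"), ("item", "beef")]
print(gi_conforms({"apple", "beef"}, gi, taxonomy))                   # True (conforming)
print(gi_conforms({"grape", "pear", "beef", "milk"}, gi, taxonomy))   # True (conforming)
print(gi_conforms({"milk", "cheese", "beef"}, gi, taxonomy))          # False: Fruit+ not satisfied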
Reasonably Precise Concept (RPC): It represents the user's concept that there should be some
associations among some classes of items, and he/she also knows the direction of the associations.
This can be expressed as:
rpc(<S 1 , ..., S m → V 1 , ..., V g >) [support, confidence]
where (1) S i or V j is the same as S i in the GI specification.
(2) A discovered rule, a 1 , ..., a n → b 1 , ..., b k , conforms to the RPC if the rule can be
considered to be an instance of the RPC, otherwise it is unexpected with respect to
the RPC.
(3) Similar to a GI, an RPC also represents a complex disjunctive propositional formula.
(4) Support and confidence are again optional.
Example 2: Suppose the user believes the following:
rpc(<Meat, Meat, #Dairy_product → {grape, apple}+>)
Note that #Dairy_product here refers to an item, not a class. The following are examples of
association rules that conform to the specification:
beef, pork, Dairy_product → grape
beef, chicken, Dairy_product → grape, apple
The following association rules are unexpected with respect to the specification:
(1) pork, Dairy_product → grape
(2) beef, pork → grape
(3) beef, pork → milk
(1) is unexpected because it has only one Meat item, but two Meat items are needed as we have
two Meat's in the specification. (2) is unexpected because Dairy_product is not in the conditional
part of the rule. (3) is unexpected because Dairy_product is not in the conditional part of the rule,
and milk is not in the consequent of the RPC specification.
Precise knowledge (PK): The user believes in a precise association. This is expressed as:
pk(<S 1 , ..., S m → V 1 , ..., V g >) [support, confidence]
where (1) Each S i or V j is an item.
(2) A discovered rule, a 1 , ..., a n → b 1 , ..., b k [sup, confid], is equal to the PK if the rule
part is the same as S 1 , ..., S m → V 1 , ..., V g . Whether it conforms to the PK or is
unexpected depends on the support and confidence specifications.
(3) Support and confidence need to be specified (not optional).
Example 3: Suppose the user believes the following:
pk(<#Meat, milk → apple>) [10%, 30%]
The discovered rule below conforms to the PK quite well because the supports and confidences of
the rule and the PK are quite close.
Meat, milk → apple [8%, 33%]
However, if the discovered rule is the following:
Meat, milk → apple [1%, 10%]
then it is less conforming, but more unexpected, because its support and confidence are quite
different from those of the PK.
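The exact support/confidence comparison used by IAS is deferred to [15]; the small sketch below is one simple closeness measure of our own, included only to make the idea of "quite close" versus "quite different" concrete. All names are assumptions.

# Sketch (our own measure, not the paper's formula): score how well a rule that is
# equal to a PK matches the specified support and confidence.
def pk_match(sup_rule, conf_rule, sup_spec, conf_spec):
    d_sup = abs(sup_rule - sup_spec) / max(sup_spec, 1e-9)
    d_conf = abs(conf_rule - conf_spec) / max(conf_spec, 1e-9)
    return 1.0 - min(1.0, (d_sup + d_conf) / 2.0)   # 1.0 = conforming, 0.0 = unexpected

print(pk_match(0.08, 0.33, 0.10, 0.30))   # about 0.85: conforming
print(pk_match(0.01, 0.10, 0.10, 0.30))   # about 0.22: unexpected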
3.2. Analyzing discovered rules using the user's existing knowledge
After the existing knowledge of the user is specified, the system uses it to analyze the discovered
rules. For GIs and RPCs, we perform syntax-based analysis, i.e., comparing the syntactic structure of
the discovered rules with GIs and RPCs. It does not make sense to do semantics-based analysis
because the user does not have any precise associations in mind. Using PKs, we can perform
semantics-based analysis, i.e., to perform support and confidence comparisons of the user's
specifications against the discovered rules that are equal to the specifications. This process is quite
straightforward and will not be discussed here. See [15] for details.
Let U be the set of user's specifications representing his/her knowledge space, A be the set of
discovered association rules. The proposed technique "matches" and ranks the rules in A in a number
of ways for finding different types of interesting rules, conforming rules, unexpected consequent
rules, unexpected condition rules and both-side unexpected rules. Below, we define them intuitively
and explain the purposes they serve. The computation details will follow.
Conforming rules: A discovered rule A i ∈ A conforms to a piece of user's knowledge U j ∈ U if both
the conditional and consequent parts of A i match those of U j well. We use confm ij to denote
the degree of conforming match.
Purpose: ranking of conforming rules shows us those rules that conform to or are consistent with
our existing knowledge fully or partially.
Unexpected consequent rules: A discovered rule A i ∈ A has unexpected consequents with respect to
a U j ∈ U if the conditional part of A i matches that of U j well, but not the consequent part. We
use unexpConseq ij to denote the degree of unexpected consequent match.
Purpose: ranking of unexpected consequent rules shows us those discovered rules that are contrary
to our existing knowledge (fully or partially). These rules are often very interesting.
Unexpected condition rules: A discovered rule A i ∈ A has unexpected conditions with respect to a
U j ∈ U if the consequent part of A i matches that of U j well, but not the conditional part. We use
unexpCond ij to denote the degree of unexpected condition match.
Purpose: ranking of unexpected condition rules shows us that there are other conditions which can
lead to the consequent of our specified knowledge. We are thus guided to explore the unfamiliar
territories, i.e., other associations that are related to our existing knowledge.
Both-side unexpected rules: A discovered rule A i ∈ A is both-side unexpected with respect to a
U j ∈ U if both the conditional and consequent parts of the rule A i do not match those of U j well. We
use bsUnexp ij to denote the degree of both-side unexpected match.
Purpose: ranking of both-side unexpected rules reminds us that there are other rules whose
conditions and consequents have never been mentioned in our specification(s). It helps us to go
beyond our existing concept space.
The values for confm ij , unexpConseq ij , unexpCond ij and bsUnexp ij are between 0 and 1. 1 represents a
complete match, either a complete conforming or a complete unexpectedness match, and 0 represents
no match. Let L ij and R ij be the degrees of condition and consequent match of rule A i against U j ;
the four match values are computed from L ij and R ij . The unexpected consequent match degree
combines L ij and R ij in such a way that rules with high L ij but low R ij are ranked higher. A similar
idea applies to unexpCond ij . The formula for bsUnexp ij basically makes sure that those rules with
high values in any of the other three categories
should have lower values here, and vice versa.
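The sketch below shows one simple way to realize these four degrees from L ij and R ij. The particular product forms are our own illustration of the intent just described, not necessarily the exact formulas used by IAS.

# Sketch: four match degrees from the condition-match degree L and the
# consequent-match degree R, both in [0, 1].  Product forms are our illustration.
def match_degrees(L, R):
    return {
        "confm":       L * R,                  # both sides match well
        "unexpConseq": L * (1.0 - R),          # condition matches, consequent does not
        "unexpCond":   (1.0 - L) * R,          # consequent matches, condition does not
        "bsUnexp":     (1.0 - L) * (1.0 - R),  # neither side matches
    }

print(match_degrees(1.0, 1.0))   # fully conforming
print(match_degrees(1.0, 0.0))   # fully unexpected consequent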
We now show how to compute L ij and R ij for GI and RPC specifications. Let I be the
set of items in the database. Let LN i and RN i be the total numbers of items in the conditional and
consequent parts of A i respectively. Let the discovered rule A i be a 1 , ..., a n → b 1 , ..., b k .
1. U j is a general impression (GI):
Let SN j be the total number of elements in the GI, where a class with a "*", i.e., C*, is not
counted. Let LM ij and RM ij be the numbers of items in the conditional and consequent parts of A i
that match {S 1 , ..., S m } respectively. Let SM ij be the number of elements in {S 1 , ..., S m } that have
been matched by A i (again, matching with a C* is not counted).
We define that an item a p ∈ {a 1 , ..., a n } matches S q ∈ {S 1 , ..., S m } (the same applies to b p ):
(i) if S q ∈ I and a p = S q , or
(ii) if S q is a class C, and exactly one a p ∈ C (or exactly one a p is an instance of C), or
(iii) if S q is C+ or C*, and a p ∈ C.
L ij and R ij are then computed from the match ratios LM ij /LN i , RM ij /RN i and SM ij /SN j , so that a
rule whose condition and consequent items account for a larger share of the GI elements, and vice
versa, receives higher match degrees. Note that if SN j = 0 (every element of the GI is of the form
C*), the ratio SM ij /SN j is taken to be 1.
2. U j is a reasonably precise concept (RPC):
Let LSN j and RVN j be the total numbers of elements in the conditional and consequent parts of the
RPC respectively, where a class with a *, e.g., C*, is not counted. Let LM ij and RM ij be the
numbers of items in the conditional and consequent parts of A i that match {S 1 , ..., S m } and
{V 1 , ..., V g } respectively. Let LSM ij and RVM ij be the numbers of elements in {S 1 , ..., S m } and
{V 1 , ..., V g } that have been matched by the conditional and consequent parts of A i respectively
(matching with C* is not counted).
The meaning of matching is the same as above for the GI, except that here the conditional and
the consequent parts of A i are considered separately with respect to {S 1 , ..., S m } and {V 1 , ..., V g }.
L ij and R ij are computed analogously, from the ratios LM ij /LN i and LSM ij /LSN j for the condition
and RM ij /RN i and RVM ij /RVN j for the consequent. Note that if LSN j = 0 (or RVN j = 0), the
corresponding ratio is taken to be 1.
After the confm ij , unexpConseq ij , unexpCond ij and bsUnexp ij values have
been computed, we rank the
discovered rules using these values.
Ranking the rules with respect to each individual U j ∈ U: For each U j ∈ U, we simply use the
confm ij , unexpConseq ij , unexpCond ij and bsUnexp ij values to sort the discovered rules in A in a
descending order to obtain the four rankings. In each ranking, those rules that do not satisfy the
support and confidence requirements of U j are removed.
Ranking the rules with respect to the whole set of specifications U: Formulas for these rankings
are also designed and implemented. However, in our applications, we find that it is less effective
to use these rankings because all the conforming rules or unexpected rules with respect to all the
specifications in U are lumped together, thus making them hard to understand. Ranking with
respect to individual specification is more effective and easy to understand. However, ranking of
the discovered rules in A with respect to the whole set U is useful for finding rules whose
conditional and consequent parts are both unexpected, namely, both-side unexpected rules.
Both-side unexpected: Both the conditional and consequent parts of the rule A i ∈ A are
unexpected with respect to the set U. The match value BsUnexp i of A i is computed over the whole
set U in such a way that those rules that have been ranked high in
other rankings will not be ranked high here.
Time complexity: Assume the maximal number of items in a discovered rule is N; the number of
existing concept specifications is |U|, and the number of discovered rules is |A|. Computing LM ij
and the other match counts can be done in O(N). Without considering the final ranking, which is a sorting process,
the runtime complexity of the algorithm is O(N|U||A|). Since N is small (at most 6 in our
applications) and |U| is also small (most of the time we only use each individual specification for
analysis), the computation is very efficient.
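To make this cost structure concrete, here is a small Python sketch of the overall ranking loop. The names are ours, and score_rule stands in for the L ij / R ij computation of this section: any function that returns the four match degrees for a (rule, specification) pair in O(N) time would do.

# Sketch of the post-analysis loop: score every discovered rule against every
# specification and sort each ranking in descending order of its match value.
# With score_rule running in O(N), the loop costs O(N|U||A|) before sorting.
def rank_rules(rules, specs, score_rule):
    rankings = {}
    for j, spec in enumerate(specs):
        scored = [(score_rule(rule, spec), rule) for rule in rules]
        for key in ("confm", "unexpConseq", "unexpCond"):
            rankings[(j, key)] = sorted(scored, key=lambda s: s[0][key], reverse=True)
    return rankings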
4. The Visualization System of IAS
After the discovered rules have been analyzed, IAS displays different types of potentially interesting
rules to the user. The key here is to show the essential aspects of the rules such that it can take
advantage of the human visual capabilities to enable the user to identify the truly interesting rules
easily and quickly. Let us discuss what are the essential aspects:
1. Types of potentially interesting rules: Different types of interesting rules should be separated
because they give the user different kinds of interesting knowledge.
2. Degrees of interestingness ("match" values): Rules should be grouped according to their degrees
of interestingness. This enables the user to focus his/her attention on the most unexpected (or
conforming) rules first and to decide whether to view those rules with lower degrees of
interestingness.
3. Interesting items: Showing the interesting items in a rule is more important than the whole rule.
This is perhaps the most crucial decision that we have made. In our applications, we find that it is
those unexpected items that are most important to the user because due to 1 above, the user
already knows what kind of interesting rules he/she is looking. For example, when the user is
looking at unexpected consequent rules, it is natural that the first thing he/she wants to know is
what are the unexpected items in the consequent parts. Even if we show the whole set of rules, the
user still needs to look for the unexpected items in the rules.
The main screen of the visualization system contains all the above information. Below, we use an
example to illustrate the visualization system.
4.1. An example
Our example uses a RPC specification. The rules in the example are a small subset of rules (857 rules)
discovered in an exam results database. This application tries to discover the associations between the
exam results of a set of 7 specialized courses (called GA courses) and the exam results of a set of 7
basic courses (called GB courses). A course together with an exam result form an item, e.g., GA6-1,
where GA6 is the course code and "1" represents a poor exam grade ("2" represents an average grade
and "3" a good grade). The discovered rules and our existing concept specification are listed below.
. Discovered association rules: The rules below have only GA course grades on left-hand-side and
GB course grades on right-hand-side (we omit their support and confidence).
. Our existing concept specification: Assume we have the common belief that students good in
some GA courses are likely to be good in some GB courses. This can be expressed as a RPC:
rpc(<GA-good+ → GB-good+>)
where the classes, GA-good and GB-good, are defined as follows:
GA-good - {GA1-3, GA2-3, GA3-3, GA4-3, GA5-3, GA6-3, GA7-3}
GB-good - {GB1-3, GB2-3, GB3-3, GB4-3, GB5-3, GB6-3, GB7-3}
4.2. Viewing the results
After running the system with the above RPC specification, we obtain the screen in Figure 2 (the main
screen). We see "RPC" in the middle. To the bottom of "RPC", we have the conforming rules
visualization unit. To the left of "RPC", we have the unexpected condition rules visualization unit. To
the right, we have the unexpected consequent rules visualization unit. To the top, we have both-side
unexpected rules visualization unit. Below, we will discuss these units in turn with the example.
Figure 2. RPC main visualization screen
Conforming rules visualization unit: Clicking on Conform, we will see the complete conforming rules
ranking in a pop-up window.
Rank 1: 1.00
Rank 2: 0.50 R11 GA6-1, GA3-3 → GB6-3
Rank 2: 0.50 R12 GA7-2, GA3-3 → GB4-3
The number (e.g., 1.00 and 0.50) after each rank number is the conforming match value, confm i1 .
The first three rules conform to our belief completely. The last two only conform to our belief
partially since GA6-1 and GA7-2 are unexpected. This list of rules can be long in an application.
The following mechanisms help the user focus his/her attention, i.e., enabling him/her to view
rules with different degrees of interestingness ("match" values) and to view the interesting items.
. On both sides of Conform we can see 4 pairs of boxes, which represent sets of rules with
different conforming match values. If a pair of boxes is colored, it means that there are rules
there, otherwise there is no rule. The line connecting "RPC" and a pair of colored boxes also
indicates that there are rules under them. The number of rules is shown on the line. Clicking on
the box with a value will give all the rules with the corresponding match value and above. For
example, clicking on 0.50 shows the rules with 0.50 ≤ confm i1 < 0.75. Below each colored box
with a value, we have two small windows. The one on the top has all the rules' condition items
from our RPC specification, and the one at the bottom has all the consequent items. Clicking on
each item gives us the rules that use this item as a condition item (or a consequent item).
. Clicking on the colored box without a value (below the valued box) brings us to a new screen
(not shown here). From this screen, the user sees all the items in different classes involved, and
also conforming and unexpected items.
Unexpected condition rules visualization unit: The boxes here have similar meanings as the ones for
conforming rules. From Figure 2, we see that there are 4 unexpected condition rules. Two have the
unexpected match value of 1.00 and two have 0.50. The window (on the far left) connected to the
box with a match value gives all the unexpected condition items. Clicking on each item reveals the
relevant rules. Similarly, clicking on the colored box next to the one with a value shows both the
unexpected condition items and the items used in the consequent part of the rules. To obtain all the
rules in the category, we can click Unexpected Condition.
Rank 2: 0.50 R11 GA6-1, GA3-3 → GB6-3
Rank 2: 0.50 R12 GA7-2, GA3-3 → GB4-3
1.00 and 0.50 are the unexpCond i1 values. Here, we see something quite unexpected. For example,
many students with bad grades in GA6 actually have good grades in GB1.
Unexpected consequent rules visualization unit: This is also similar to the conforming rules
visualization unit. From Figure 2, we see that there is only one unexpected consequent rule and the
unexpected consequent match value is 1.00. Clicking on the colored box with 1.00, we will obtain
the unexpected consequent rule:
GA2-3 → GB5-1
This rule is very interesting because it contradicts our belief. Many students with good grades in
GA2 actually have bad grades in GB5.
Both-side unexpected rules visualization unit: We only have two unexpected match value boxes here,
i.e., 1.00 and 0.50. Due to the formulas in Section 3.2, rules with bsUnexp ij < 1.00 can actually all
be seen from other visualization units. The unexpected items can be obtained by clicking on the
box above the one with a value. All the ranked rules can be obtained by clicking Both Sides
Unexpected.
Rank 2: 0.50 R11 GA6-1, GA3-3 → GB6-3
Rank 2: 0.50 R12 GA7-2, GA3-3 → GB4-3
From this ranking, we also see something quite interesting, i.e., average grades lead to average
grades and bad grades lead to average grades. Some of these rules are common sense, e.g., average
to average rules (R8 and R10), but we did not specify them as our existing knowledge (if "average
to average" had been specified as our knowledge earlier, these rules would not have appeared here
because they would have been removed). This shows the advantage of our technique, i.e., it can
remind us what we have forgotten if the rules are not truly unexpected.
The visualization system also allows the user to incrementally save interesting rules and to remove
unwanted rules. Whenever a rule is removed or saved (also removed from the original set of rules),
the related pictures and windows are updated.
5. Evaluation
The IAS system is implemented in Visual C++. Our association rule mining system is based on the
generalized association rule mining algorithm in [28]. Those redundant and/or insignificant rules are
removed using the pruning technique in [14] (objective interestingness analysis).
Since there is no existing technique that is able to perform our task, we could not carry out a
comparison. Most existing methods [10, 7, 8, 18, 20, 29] only produce conforming rules but not
unexpected rules. Although the system described in [23, 24] produces unexpected association rules, it
is not an interactive post-analysis system, and it does not handle RPC and GI specifications.
As the proposed technique deals with subjective interestingness, it is difficult to have an objective
measure of its performance. We have carried out a number of experiments involving our users (2) and
students (6) to check whether the rankings do reflect people's intuitions of subjective interestingness,
in particular, unexpectedness.
In the experiments, we used 3 application datasets, and each subject is asked to specify 10 pieces
of existing knowledge for each dataset and to view the ranking results. In the process, we found that
some subjects do occasionally disagree with the relative ranking. For example, a subject may believe
that a particular rule should be ranked above its neighbor. There were 5 such cases. However, this
(i.e., slightly different relative ranking) is not a problem. We do expect such minor disagreements
because we are dealing with a subjective issue here. The important thing is that everyone agrees that
the technique is able to bring those interesting rules to the top of the list.
Our system has been successfully used in three real-life applications in Singapore, one
educational application, one insurance application and one medical application. Due to confidentiality
agreements, we could not give details of these applications. In the applications, the smallest rule set
has 770 rules. Most of them have one to two thousand rules. When our users first saw a large number
of rules, they were overwhelmed. Our tool makes it much easier for them to analyze these discovered
rules. Initially, they were only interested in finding a few types of rules to confirm (or verify) their
hypotheses. However, they ended up finding many interesting rules that they had never thought of
before as a result of the various unexpectedness rankings. The rules used in the example of Section 4
are from one of our applications (the items appeared in the rules were encrypted).
6. Conclusion and Future Work
This paper proposes a new approach for helping the user identify interesting association rules, in
particular, expected and unexpected rules. It consists of an intuitive specification language and an
interestingness analysis system. The specification language allows the user to specify his/her various
types of existing knowledge about the domain. The interestingness analysis system analyzes the
discovered association rules using the user's specifications to identify those potentially interesting
ones for the user. The new method is more general and powerful than the existing methods because
most existing methods only produce the conforming rules, but not the unexpected rules of various
types. Unexpected rules are by definition interesting.
In our future work, we will investigate more sophisticated representation schemes and analysis
methods such that we not only can perform analysis at individual rule level but also at higher levels,
e.g., to determine whether a set of rules is interesting as a group to the user, and to infer interesting
knowledge from the discovered rules.
Acknowledgement
We would like to thank many people, especially Minqing Hu, Ken Wong and
Yiyuan Xia, for their contributions to the project. The project is funded by National Science and
Technology Board (NSTB) and National University of Singapore (NUS).
--R
"Discovery of actionable patterns in databases: the action hierarchy approach."
"Mining the most interesting rules."
"A survey of knowledge acquisition techniques and tools."
"From data mining to knowledge discovery: an overview,"
"DMQL: a data mining query language for relational databases."
"DataMine: application programming interface and query language for database mining."
"Evaluating the interestingness of characteristic rules."
"Finding interesting rules from large sets of discovered association rules."
"Post-analysis of learned rules."
"Using general impressions to analyze discovered classification rules."
"Integrating classification and association rule mining."
"Pruning and summarizing the discovered associations."
"Helping the user identify unexpected association rules."
"Multi-level organization and summarization of the discovered rules,"
"Selecting among rules induced from a hurricane database."
"A new SQL-like operator for mining association rules."
"Exploratory mining and pruning optimizations of constrained associatino rules."
"The interestingness of deviations."
"Discovery, analysis and presentation of strong rules."
"A belief-driven method for discovering unexpected patterns."
"Small is Beautiful: Discovering the Minimal Set of Unexpected Patterns."
"Metaqueries for data mining."
"What makes patterns interesting in knowledge discovery systems."
"Mining generalized association rules."
"Mining association rules with item constraints."
"A pattern discovery algebra."
"Generating non-redundant association rules,"
--TR
--CTR
Peter Fule , John F. Roddick, Experiences in building a tool for navigating association rule result sets, Proceedings of the second workshop on Australasian information security, Data Mining and Web Intelligence, and Software Internationalisation, p.103-108, January 01, 2004, Dunedin, New Zealand
B. Shekar , Rajesh Natarajan, A Framework for Evaluating Knowledge-Based Interestingness of Association Rules, Fuzzy Optimization and Decision Making, v.3 n.2, p.157-185, June 2004
Yang, Pruning and Visualizing Generalized Association Rules in Parallel Coordinates, IEEE Transactions on Knowledge and Data Engineering, v.17 n.1, p.60-70, January 2005
Mu-Chen Chen, Ranking discovered rules from data mining with multiple criteria by data envelopment analysis, Expert Systems with Applications: An International Journal, v.33 n.4, p.1110-1116, November, 2007
Hassan H. Malik , John R. Kender, Clustering web images using association rules, interestingness measures, and hypergraph partitions, Proceedings of the 6th international conference on Web engineering, July 11-14, 2006, Palo Alto, California, USA
Cristóbal Romero , Sebastián Ventura , Paul De Bra, Knowledge Discovery with Genetic Programming for Providing Feedback to Courseware Authors, User Modeling and User-Adapted Interaction, v.14 n.5, p.425-464, January 2005
Julien Blanchard , Fabrice Guillet , Henri Briand, Interactive visual exploration of association rules with rule-focusing methodology, Knowledge and Information Systems, v.13 n.1, p.43-75, September 2007 | interestingness analysis in data mining;subjective interestingness;association rules |
631019 | The Infeasibility of Quantifying the Reliability of Life-Critical Real-Time Software. | This work affirms that the quantification of life-critical software reliability is infeasible using statistical methods, whether these methods are applied to standard software or fault-tolerant software. The classical methods of estimating reliability are shown to lead to exorbitant amounts of testing when applied to life-critical software. Reliability growth models are examined and also shown to be incapable of overcoming the need for excessive amounts of testing. The key assumption of software fault tolerance-separately programmed versions fail independently-is shown to be problematic. This assumption cannot be justified by experimentation in the ultrareliability region, and subjective arguments in its favor are not sufficiently strong to justify it as an axiom. Also, the implications of the recent multiversion software experiments support this affirmation. | Introduction
The potential of enhanced flexibility and functionality has led to an ever increasing use of digital
computer systems in control applications. At first, the digital systems were designed to perform
the same functions as their analog counterparts. However, the availability of enormous computing
power at a low cost has led to expanded use of digital computers in current applications and
their introduction into many new applications. Thus, larger and more complex systems are being
designed. The result has been, as promised, increased performance at a minimal hardware cost;
however, it has also resulted in software systems which contain more errors. Sometimes, the impact
of a software bug is nothing more than an inconvenience. At other times a software bug leads to
costly downtime. But what will be the impact of design flaws in software systems used in life-critical
applications such as industrial-plant control, aircraft control, nuclear-reactor control, or nuclear-
warhead arming? What will be the price of software failure as digital computers are applied more
and more frequently to these and other life-critical functions? Already, the symptoms of using
insufficiently reliable software for life-critical applications are appearing [1, 2, 3].
For many years, much research has focused on the quantification of software reliability. Research
efforts started with reliability growth models in the early 1970's. In recent years, an emphasis on
developing methods which enable reliability quantification of software used for life-critical functions
has emerged. The common approach which is offered is the combination of software fault-tolerance
and statistical models.
In this paper, we will investigate the software reliability problem from two perspectives. We
will first explore the problems which arise when you test software as a black box, i.e. subject it to
inputs and check the outputs without examination of internal structure. Then, we will examine the
problems which arise when software is not treated as a black box, i.e. some internal structure is
modeled. In either case, we argue that the associated problems are intractable-i.e., they inevitably
lead to a need for testing beyond what is practical.
For life-critical applications, the validation process must establish that system reliability is extremely
high. Historically, this ultrahigh reliability requirement has been translated into a probability
of failure on the order of 10^-7 to 10^-9 for 1 to 10 hour missions. Unfortunately, such
probabilities create enormous problems for validation. For convenience, we will use the following
definitions:
name                     failure rate (per hour)
ultrareliability         less than 10^-7
moderate reliability     10^-3 to 10^-7
low reliability          greater than 10^-3
Software does not physically fail as hardware does. Physical failures (as opposed to hardware design
flaws) occur when hardware wears out, breaks, or is adversely affected by environmental phenomena
such as electromagnetic fields or alpha particles. Software is not subject to these problems. Software
faults are present at the beginning of and throughout a system's lifetime. In this sense, software
reliability is meaningless: software is either correct or incorrect with respect to its specification.
Nevertheless, software systems are embedded in stochastic environments. These environments
subject the software program to a sequence of inputs over time. For each input, the program
produces either a correct or an incorrect answer. Thus, in a systems context, the software system
produces errors in a stochastic manner; the sequence of errors behaves like a stochastic point
process.
In this paper, the inherent difficulty of accurately modeling software reliability will be explored.
To facilitate the discussion, we will construct a simple model of the software failure process. The
driver of the failure process is the external system that supplies inputs to the program. As a
function of its inputs and internal state, the program produces an output. If the software were
perfect, the internal state would be correct and the outputs produced would be correct. However,
if there is a design flaw in the program, it can manifest itself either by production of an erroneous
output or by corruption of the internal state (which may affect subsequent outputs).
In a real-time system, the software is periodically scheduled, i.e. the same program is repeatedly
executed in response to inputs. It is not unusual to find "iteration rates" of 10 to 100 cycles per
second. If the probability of software failure per input is constant, say p, we have a binomial
process. The number of failures S n after n inputs is given by the binomial distribution:
P(S n = k) = C(n, k) p^k (1 - p)^(n-k)                                   (1)
We wish to compute the probability of system failure for n inputs. System failure occurs for all
S n > 0, so P sys (n) = 1 - (1 - p)^n. This can be converted to a function of time with the transformation
n = Kt, where K is the number of inputs per unit time. The system failure probability at time t, P sys (t), is thus:
P sys (t) = 1 - (1 - p)^(Kt)                                              (2)
Of course, this calculation assumes that the probability of failure per input is constant over time. 1
This binomial process can be accurately approximated by an exponential distribution since p is
small and n is large:
P sys (t) ≈ 1 - e^(-pKt)                                                  (3)
This is easily derived using the Poisson approximation to the binomial. The discrete binomial
process can thus be accurately modeled by a continuous exponential process. In the following
discussion, we will frequently use the exponential process rather than the binomial process to
simplify the discussion.
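As a quick numerical check of the two expressions for P sys (t) above, the short Python sketch below evaluates both; log1p and expm1 are used because p is many orders of magnitude smaller than 1. The particular values of p, K and t are ours, chosen only for illustration.

import math

def p_sys_binomial(p, K, t):
    # 1 - (1 - p)^(Kt), evaluated stably for very small p
    return -math.expm1(K * t * math.log1p(-p))

def p_sys_exponential(p, K, t):
    # 1 - exp(-p K t)
    return -math.expm1(-p * K * t)

p, K, t = 1e-12, 36000.0, 10.0          # 10 inputs/sec = 36000 inputs/hour, 10-hour mission
print(p_sys_binomial(p, K, t), p_sys_exponential(p, K, t))   # both about 3.6e-7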
Analyzing Software as a Black Box
The traditional method of validating reliability is life testing. In life testing, a set of test specimens
are operated under actual operating conditions for a predetermined amount of time. Over this
period, failure times are recorded and subsequently used in reliability computation. The internal
structure of the test specimens is not examined. The only observable is whether a specimen has
failed or not.
For systems that are designed to attain a probability of failure on the order of 10^-7 to 10^-9 for 1
hour missions or longer, life testing is prohibitively impractical. This can be shown by an illustrative
example. For simplicity, we will assume that the time to failure distribution is exponential. 2 Using
standard statistical methods [4], the time on test can be estimated for a specified system reliability.
There are two basic approaches: (1) testing with replacement and (2) testing without replacement.
In either case, one places n items on test. The test is finished when r failures have been observed.
In the first case, when a device fails a new device is put on test in its place. In the second case, a
failed device is not replaced. The tester chooses values of n and r to obtain the desired levels of the
α and β errors (i.e., the probability of rejecting a good system and the probability of accepting a
bad system respectively.) In general, the larger r and n are, the smaller the statistical estimation
errors are. The expected time on test can be calculated as a function of r and n. The expected
time on test, D t , for the replacement case is:
D t = r θ / n
where θ is the mean failure time of the test specimen [4]. The expected time on test for the
non-replacement case is:
D t = θ [ 1/n + 1/(n-1) + ... + 1/(n-r+1) ]
1 If the probability of failure per input were not constant, then the reliability analysis problem is even harder.
One would have to estimate p(t) rather than just p. A time-variant system would require even more testing than a
time-invariant one, since the rate must be determined as a function of mission time. The system would have to be
placed in a random state corresponding to a specific mission time and subjected to random inputs. This would have
to be done for each time point of interest within the mission time. Thus, if the reliability analysis is intractable for
systems with constant p, it is unrealistic to expect it to be tractable for systems with non-constant p(t).
2 In the previous section the exponential process was shown to be an accurate approximation to the discrete
binomial software failure process.
Table 1: Expected Test Duration For r=1 (number of test replicates n versus expected test duration D t )
Even without specifying an α or β error, a good indication of the testing time can be determined.
Clearly, the number of observed failures r must be greater than 0 and the total number of test
specimens n must be greater than or equal to r. For example, suppose the system has a probability
of failure of 10^-9 for a 10 hour mission. Then the mean time to failure of the system (assuming
exponentially distributed) is θ ≈ 10^10 hours.
Table 1 shows the expected test duration for this system as a function of the number of test
replicates n for r = 1. 3 It should be noted that a value of r equal to 1 produces the shortest test
time possible but at the price of extremely large α and β errors. To get satisfactory statistical
significance, larger values of r are needed and consequently even more testing. Therefore, given
that the economics of testing fault-tolerant systems (which are very expensive) rarely allow n to be
greater than 10, life-testing is clearly out of the question for ultrareliable systems. The technique
of statistical life-testing is discussed in more detail in the appendix.
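The magnitude of these test durations is easy to reproduce. The Python sketch below (ours) evaluates the two expected-duration expressions given above for θ = 10^10 hours; note that with r = 1 the replacement and non-replacement cases give the same value, consistent with the footnote's remark that the two are almost the same here.

# Sketch: expected time on test (in hours) for exponential specimens with mean life
# theta, testing n replicates until r failures are observed.
def duration_with_replacement(theta, n, r):
    return r * theta / n

def duration_without_replacement(theta, n, r):
    return theta * sum(1.0 / (n - i + 1) for i in range(1, r + 1))

theta = 1e10                                  # mean time to failure in hours
for n in (1, 10):
    years = duration_with_replacement(theta, n, 1) / 8760.0
    print(n, years)                           # about 1.1 million years (n=1), 114,000 years (n=10)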
4 Reliability Growth Models
The software design process involves a repetitive cycle of testing and repairing a program. A
program is subjected to inputs until it fails. The cause of failure is determined; the program is
repaired and is then subjected to a new sequence of inputs. The result is a sequence of programs
p 1 , p 2 , ..., p n and a sequence of inter-failure times t 1 , t 2 , ..., t n (usually measured in number of
inputs). The goal is to construct a mathematical technique (i.e. model) to predict the reliability of
the final program p n based on the observed interfailure data. Such a model enables one to estimate
the probability of failure of the final "corrected" program without subjecting it to a sequence of
inputs. This process is a form of prediction or extrapolation and has been studied in detail [5, 6, 7].
These models are called "Reliability Growth Models". If one resists the temptation to correct the
program based on the last failure, the method is equivalent to black-box testing the final version.
If one corrects the final version and estimates the reliability of the corrected version based on a
reliability growth model, one hopefully has increased the efficiency of the testing process in doing
so. The question we would like to examine is how much efficiency is gained by use of a reliability
growth model, and whether it is enough to get us into the ultrareliable region. Unfortunately, the answer is
that the gain in efficiency is not anywhere near enough to get us into the ultrareliable region. This
has been pointed out by several authors.
3 The expected time with or without replacement is almost the same in this case.
Keiller and Miller write [8]:
The reliability growth scenario would start with faulty software. Through execution
of the software, bugs are discovered. The software is then modified to correct for the
design flaws represented by the bugs. Gradually the software evolves into a state of
higher reliability. There are at least two general reasons why this is an unreasonable
approach to highly-reliable safety-critical software. The time required for reliability
to grow to acceptable levels will tend to be extremely long. Extremely high levels of
reliability cannot be guaranteed a priori.
Littlewood writes [9]:
Clearly, the reliability growth techniques of §2 [a survey of the leading reliability growth
models] are useless in the face of such ultra-high reliability requirements. It is easy to
see that, even in the unlikely event that the system had achieved such a reliability, we
could not assure ourselves of that achievement in an acceptable time.
The problem alluded to by these authors can be seen clearly by applying a reliability growth model
to experimental data. The data of table 2 was taken from an experiment performed by Nagel and
Skrivan [10]. The data in this table was obtained for program A1, one of six programs investigated.
Table 2: Nagel Data From Program A1 (number of bugs removed vs. failure probability per input)
The versions represent the successive stages of the program as bugs were removed. A log-linear
growth model was postulated and found to fit all 6 programs analyzed in the report. A simple
regression on the data of table 2 yields a slope and y-intercept of -1.415 and 0.2358, respectively.
The line is fitted to the log of the raw data as shown in figure 1. The correlation coefficient is
-0.913. It is important to note that in the context of reliability growth models the failure rates are
usually reported as failure rates per input, whereas the system requirements are given as failure
rates per hour or as a probability of failure for a specified mission duration (e.g. 10 hours). However,
equation 2 can be rearranged into a form which can be used to convert the system requirements
into a required failure rate per input.
p ≈ P_sys(t) / (K t)      (5)
If the system requirement is a probability of failure of 10^-9 for a 10-hour mission and the sample
rate of the system (i.e., K) is 10/sec, then the required failure rate per input p can be calculated
as follows:
p ≈ P_sys(t) / (K t) = 10^-9 / [(10/sec)(3600 secs/hour)(10 hours)] ≈ 2.78 × 10^-15
Figure 1: Loglinear Fit to Program A1 Failure Data (log failure probability per input vs. number of bugs removed)
The purpose of a reliability growth model is to estimate the failure rate of a program after the
removal of the last discovered bug. The loglinear growth model plotted in figure 1 can be used to
predict that the arrival rate of the next bug will be 6.34 × 10^-5. The key question, of course, is how
long will it take before enough bugs are removed so that the reliability growth model will predict
a failure rate per input of 2.78 × 10^-15 or less. Using the loglinear model we can find the place
where the probability drops below 2.78 × 10^-15, as illustrated in figure 2. Based upon the model,
the 24th bug will arrive at a rate of 2.28 × 10^-15, which is less than the goal. Thus, according to
the loglinear growth model, 23 bugs will have to be removed before the model will yield an acceptable
failure rate. But how long will it take to remove these 23 bugs? The growth model predicts that
bug 23 will have a failure rate of about 9.38 × 10^-15. The expected number of test cases until
observing a binomial event of probability 9.38 × 10^-15 is 1.07 × 10^14. If the test time is the same
as the real time execution time, then each test case would require 0.10 secs. Thus, the expected
time to discover bug 23 alone would be 1.07 × 10^13 secs or 3.4 × 10^5 years. In table 3, the above
calculations are given for all of the programs in reference [10]. 4 These examples illustrate why the
use of a reliability growth model does not alleviate the testing problem even if one assumes that
the model applies universally to the ultrareliable region.
4 Table 5 assumes a perfect fit with the log-linear model in the ultrareliable region.
Figure 2: Extrapolation to Predict When Ultrareliability Will Be Reached (log failure probability per input vs. number of bugs removed)
program   slope      y-intercept   last bug   test time
A3        -0.54526   -1.3735       58         6.8 × 10^5 years
Table 3: Test Time To Remove the Last Bug to Obtain Ultrareliability
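The extrapolation behind these numbers is easy to reproduce. The Python sketch below assumes the A1 regression coefficients quoted above (slope -1.415, intercept 0.2358, fitted to the natural log of the failure rate per input) and the 0.10 sec per test case figure from the text, and asks when the predicted per-input rate first drops below 2.78 × 10^-15.

```python
import math

# Loglinear reliability growth extrapolation for program A1 (sketch).
# ln(p_i) = INTERCEPT + SLOPE * i, where i = number of bugs removed.
SLOPE, INTERCEPT = -1.415, 0.2358       # regression on the Nagel data (program A1)
TARGET = 2.78e-15                       # required failure rate per input
TEST_SECS_PER_INPUT = 0.10              # assumed real-time execution per input
SECS_PER_YEAR = 3600 * 24 * 365.25

def predicted_rate(i):
    """Predicted failure rate per input after removing i bugs."""
    return math.exp(INTERCEPT + SLOPE * i)

i = 1
while predicted_rate(i) >= TARGET:
    i += 1
print(f"bug {i} arrives at rate {predicted_rate(i):.2e} (first below the target)")

last_bug = i - 1                        # last bug that must actually be observed
rate_last = predicted_rate(last_bug)
expected_cases = 1.0 / rate_last        # geometric waiting time in test cases
expected_secs = expected_cases * TEST_SECS_PER_INPUT
print(f"removing bug {last_bug} (rate {rate_last:.2e}) takes about "
      f"{expected_secs / SECS_PER_YEAR:.1e} years of testing")
```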
4.1 Low Sample Rate Systems and Accelerated Testing
In this section, the feasibility of quantifying low-sample rate systems (i.e. systems where the time
between inputs is long) in the ultrareliable region will be briefly explored. Also, the potential of
accelerated testing will be discussed.
Suppose that the testing rate is faster than real time and let R = test time per input. Since
each test is an independent trial, the time to the appearance of the next bug is given by the
geometric distribution. Thus, the expected number of inputs until the next bug appears is 1/p and
the expected test time, D_t, is given by:
D_t = R / p
Using equation 5, D_t becomes:
D_t = R K t / P_sys(t)      (6)
From equation 6 it can be seen that a low sample rate system (i.e. a system with small K) requires
less test time than a high sample rate system (assuming that R remains constant). Suppose that the
system requirement is a probability of failure of 10^-9 for a 10 hour mission (i.e. P_sys(10) = 10^-9).
If the system has a fast sample rate (e.g. K = 10 inputs per second) and the time required to test an
input is the same as the real-time execution time (i.e. R = 0.10 secs), then the expected test time
is 10^10 hours, or roughly 1.14 × 10^6 years. Now suppose that R remains constant but K is reduced. (Note
that this usually implies an accelerated testing process. The execution time per input is usually
greater for slow systems than fast systems. Since R is not increased as K is decreased the net result
is equivalent to an accelerated test process.) The impact of decreasing K while holding R constant
can be seen in table 4 which reports the expected test time as a function of K. Thus, theoretically, a
K          Expected Test Time
1/minute   1.9 × 10^3 years
1/hour     31.7 years
1/day      1.32 years
1/month    0.044 years
Table 4: Expected Test Time as a function of K (for R = 0.10 secs)
very slow system that can be tested very quickly (i.e. much faster than real-time) can be quantified
in the ultrareliable region. However, this is not as promising as it may look at first. The value
of K is fixed for a given system and the experimenter has no control over it. For example, the
sample rate for a digital flight control system is on the order of 10 inputs per second or faster and
little can be done to slow it down. Thus, the above theoretical result does nothing to alleviate the
testing problem here. Furthermore, real time systems typically are developed to exploit the full
capability of a computing system. Consequently, although a slower system's sample rate is less,
its execution time per input is usually higher and so R is much greater than the 0.10 secs used in
table 4. In fact, one would expect to see R grow in proportion to 1/K. Thus, the results in table
4 are optimistic. Also, it should be noted that during the testing process, one must also determine
whether the program's answer is correct or incorrect. Consequently, the test time per input is often
much greater than the real-time execution time rather than being shorter. In conclusion, if one is
fortunate enough to have a very slow system that can exploit an accelerated testing process, one
can obtain ultra-reliable estimates of reliability with feasible amounts of test times. However, such
systems are usually not classified as real-time systems and thus, are out of the scope of this paper.
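The dependence on the sample rate K can be checked directly from D_t = RKt/P_sys. The sketch below reproduces the entries of table 4 under the same assumed values (R = 0.10 secs per test input, a 10 hour mission, and P_sys = 10^-9).

```python
# Expected black-box test time as a function of the sample rate K (sketch).
# D_t = R * K * t / P_sys, with R in seconds per test input, K in inputs per
# second, and t the mission time in hours; D_t then comes out in hours.
R = 0.10                # assumed test time per input, in seconds
T_MISSION = 10.0        # mission time, in hours
P_SYS = 1e-9            # required probability of failure for the mission
HOURS_PER_YEAR = 8760.0

sample_rates = {        # K, in inputs per second
    "10/sec":   10.0,
    "1/minute": 1.0 / 60,
    "1/hour":   1.0 / 3600,
    "1/day":    1.0 / 86400,
    "1/month":  1.0 / (30 * 86400),
}

for label, k in sample_rates.items():
    d_t_hours = R * k * T_MISSION / P_SYS
    print(f"K = {label:>8}: expected test time = {d_t_hours / HOURS_PER_YEAR:.3g} years")
```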
4.2 Reliability Growth Models and Accelerated Testing
Now let's revisit the reliability growth model in the context of a slow system which can be quickly
tested. Suppose the system under test is a slow real-time system with a sample rate of 1 input per
minute. Then, the failure rate per input must be less than 10^-9/60 ≈ 1.67 × 10^-11 in order for
the program to have a failure rate of 10 \Gamma9 =hour. Using the regression results, it can be seen that
approximately 17 bugs must be removed:
bug vs. failure rate per input
Thus one could test until 17 bugs have been removed, remove the last bug and use the reliability
growth model to predict a failure rate per input of 1.106 × 10^-11. But, how long would it take to
remove the 17 bugs? Well, the removal of the last bug alone would on average require approximately
2.2 × 10^10 test cases. Even if the testing process were 1000 times faster than the operational time
per input (i.e. R = 0.06 secs), this would require 42 years of testing. Thus, we see why
Littlewood, Keiller and Miller see little hope of using reliability growth models for ultrareliable
software. This problem is not restricted to the program above but is universal. Table 5 repeats
the above calculations for the rest of the programs in reference [10]. At even the most optimistic
program   slope      y-intercept   last bug   test time
A3        -0.54526   -1.3735       42         66 years
Table 5: Test Time To Remove the Last Bug to Obtain Ultrareliability
improvement rates, it is obvious that reliability growth models are impractical for ultrareliable
software.
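The arithmetic of this subsection can be verified in the same way. The sketch below assumes the A1 regression coefficients, a sample rate of one input per minute (so the required per-input rate is 10^-9/60), and a test process 1000 times faster than real time (0.06 secs per test case).

```python
import math

# Reliability growth for a slow (1 input/minute) system with accelerated testing (sketch).
SLOPE, INTERCEPT = -1.415, 0.2358       # program A1 regression (natural-log fit)
REQUIRED_PER_HOUR = 1e-9
INPUTS_PER_HOUR = 60.0                  # one input per minute
required_per_input = REQUIRED_PER_HOUR / INPUTS_PER_HOUR   # about 1.67e-11

def rate(i):
    """Predicted failure rate per input after removing i bugs."""
    return math.exp(INTERCEPT + SLOPE * i)

# Number of bugs to remove so that the *next* predicted rate meets the requirement.
bugs_to_remove = next(i for i in range(1, 100) if rate(i + 1) < required_per_input)
last_rate = rate(bugs_to_remove)        # rate of the last bug that must be observed

TEST_SECS_PER_INPUT = 0.06              # 1000x faster than the 60 sec operational spacing
SECS_PER_YEAR = 3600 * 24 * 365.25
years = (1.0 / last_rate) * TEST_SECS_PER_INPUT / SECS_PER_YEAR
print(f"remove ~{bugs_to_remove} bugs; predicted next rate {rate(bugs_to_remove + 1):.3e}")
print(f"observing the last bug alone takes about {years:.0f} years of accelerated testing")
```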
5 Software Fault Tolerance
Since fault tolerance has been successfully used to protect against hardware physical failures, it
seems natural to apply the same strategy against software bugs. It is easy to construct a reliability
model of a system designed to mask physical failures using redundant hardware and voting. The
key assumption which enables both the design of ultrareliable systems from less reliable components
and the estimation of 10 \Gamma9 probabilities of failure is that the separate redundant components fail
independently or nearly so. The independence assumption has been used in hardware fault tolerance
modelling for many years. If the redundant components are located in separate chassis, powered
by separate power supplies, electrically isolated from each other and sufficiently shielded from the
environment it is not unreasonable to assume failure independence of physical hardware faults.
The basic strategy of the software fault-tolerance approach is to design several versions of a
program from the same specification and to employ a voter of some kind to protect the system from
bugs. The voter can be an acceptance test (i.e., recovery blocks) or a comparator (i.e., N-version
programming). Each version is programmed by a separate programming team. 5 Since the versions
are developed by separate programming teams, it is hoped that the redundant programs will fail
independently or nearly so [11, 12]. From the version reliability estimates and the independence
assumption, system reliability estimates could be calculated. However, unlike hardware physical
failures which are governed by the laws of physics, programming errors are the products of human
reasoning (i.e., actually improper reasoning). The question thus becomes one of the reasonableness
of assuming independence based on little or no practical or theoretical foundations. Subjective
arguments have been offered on both sides of this question. Unfortunately, the subjective arguments
for multiple versions being independent are not compelling enough to qualify it as an axiom.
The reasons why experimental justification of independence is infeasible and why ultrareliable
quantification is infeasible despite software fault tolerance are discussed in the next section.
5.1 Models of Software Fault Tolerance
Many reliability models of fault-tolerant software have been developed based on the independence
assumption. To accept such a model, this assumption must be accepted. In this section, it will
be shown how the independence assumption enables quantification in the ultrareliable region, why
quantification of fault-tolerant software reliability is unlikely without the independence assumption,
and why this assumption cannot be experimentally justified for the ultrareliable region.
5.1.1 Independence Enables Quantification Of Ultrareliability
The following example will show how independence enables ultrareliability quantification. Suppose
three different versions of a program control a life-critical system using some software fault tolerance
scheme. Let E_{i,k} be the event that the ith version fails on its kth execution. Suppose the probability
that version i fails during the kth execution is p_{i,k}. As discussed in section 2, we will assume that
the failure rate is constant. Since the versions are voted, the system does not fail unless there is
a coincident error, i.e., two or more versions produce erroneous outputs in response to the same
input. The probability that two or more versions fail on the kth execution causing system failure
is:
P_{sys,k} = P(E_{1,k} E_{2,k} ∪ E_{1,k} E_{3,k} ∪ E_{2,k} E_{3,k})
Using the additive law of probability, this can be written as:
P_{sys,k} = P(E_{1,k} E_{2,k}) + P(E_{1,k} E_{3,k}) + P(E_{2,k} E_{3,k}) - 2 P(E_{1,k} E_{2,k} E_{3,k})      (7)
If independence of the versions is assumed, this can be rewritten as:
P_{sys,k} = p_{1,k} p_{2,k} + p_{1,k} p_{3,k} + p_{2,k} p_{3,k} - 2 p_{1,k} p_{2,k} p_{3,k}      (8)
5 Often these separate programming teams are called "independent programming" teams. The phrase "independent
programming" does not mean the same thing as "independent manifestation of errors."
The reason why independence is usually assumed is obvious from the above formula: if each P(E_{i,k})
can be estimated to be approximately 10^-6, then the probability of system failure due to two or
more coincident failures is approximately 3 × 10^-12.
Equation (8) can be used to calculate the probability of failure for a T hour mission. Suppose
p_{i,k} = p for all versions and executions; then the probability that the system fails during a mission of T hours can be calculated using
equation (1):
P_sys(T) = 1 - (1 - P_{sys,k})^{KT}
where K is the number of executions of the program in an hour. For small p_i the following
approximation is accurate:
P_sys(T) ≈ 3 p^2 K T
For typical values of p, K, and T (e.g. p = 10^-6 and one execution per second), the result is a
failure probability deep in the ultrareliable region.
Thus, an ultrareliability quantification has been made. But, this depended critically on the independence
assumption. If the different versions do not fail independently, then equation (7) must
be used to compute failure probabilities and the above calculation is meaningless. In fact, the
probability of failure could be anywhere from 0 to about 10^-2 (i.e., 0 to 3pKT 6 ).
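As a numerical illustration of the calculation above, the following sketch assumes three versions with a common per-execution failure probability p = 10^-6 and, purely for illustration, one execution per second over a one hour mission; it prints the independence-based estimate and the much larger value the correlated terms could reach.

```python
# System failure probability for a voted 3-version system (illustrative sketch).
p = 1e-6                 # per-execution failure probability of each version (assumed)
K = 3600.0               # executions per hour (one per second, assumed)
T = 1.0                  # mission time in hours (assumed)

# Per-execution probability of two or more coincident failures, if independent:
p_coincident = 3 * p**2 - 2 * p**3
print(f"per execution (independent): {p_coincident:.3e}")          # ~3e-12

# Over the mission, exactly and via the small-p approximation 3*p^2*K*T:
p_mission = 1.0 - (1.0 - p_coincident) ** (K * T)
print(f"per mission (independent):  {p_mission:.3e}  (~ {3 * p**2 * K * T:.3e})")

# Without independence, correlated failures could drive the system failure
# probability anywhere up to roughly 3*p*K*T:
print(f"upper range if fully correlated: ~{3 * p * K * T:.3e}")
```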
5.1.2 Ultrareliable Quantification Is Infeasible Without Independence
Now consider the impact of not being able to assume independence. The following argument was
adapted from Miller [13]. To simplify the notation, the last subscript will be dropped when referring
to the kth execution only. Thus,
P_sys = P(E_1 E_2) + P(E_1 E_3) + P(E_2 E_3) - 2 P(E_1 E_2 E_3)      (10)
Using the identity P(E_i E_j) = P(E_i)P(E_j) + [P(E_i E_j) - P(E_i)P(E_j)], this can be rewritten as:
P_sys = P(E_1)P(E_2) + P(E_1)P(E_3) + P(E_2)P(E_3) - 2 P(E_1)P(E_2)P(E_3)
      + [P(E_1 E_2) - P(E_1)P(E_2)]
      + [P(E_1 E_3) - P(E_1)P(E_3)]
      + [P(E_2 E_3) - P(E_2)P(E_3)]
      - 2 [P(E_1 E_2 E_3) - P(E_1)P(E_2)P(E_3)]      (11)
This rewrite of the formula reveals two components of the system failure probability: (1) the first
line of equation 11 and (2) the last 4 lines of equation 11. If the multiple versions manifest errors
independently, then the last four lines (i.e. the second component) will be equal to zero. Conse-
quently, to establish independence experimentally, these terms must be shown to be 0. Realistically,
to establish "adequate" independence, these terms must be shown to have negligible effect on the
probability of system failure. Thus, the first component represents the "non-correlated" contribution
to P sys and the second component represents the "correlated" contribution to P sys . Note that
the terms in the first component of P sys are all products of the individual version probabilities.
6 3pKT is a first-order approximation to the probability that the system fails whenever any one of the 3 versions
fails.
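A quick symbolic check of this decomposition, treating the joint probabilities as free symbols (a sketch using sympy; the q symbols stand for the joint probabilities P(E_iE_j) and P(E_1E_2E_3)): expanding the rewritten form of equation (11) and subtracting equation (10) leaves zero, confirming that the four bracketed terms carry exactly the correlated contribution.

```python
import sympy as sp

# Symbolic check that the rewritten failure probability (equation 11, as
# reconstructed above) is identical to the original form (equation 10).
p1, p2, p3 = sp.symbols("p1 p2 p3")                    # P(E_1), P(E_2), P(E_3)
q12, q13, q23, q123 = sp.symbols("q12 q13 q23 q123")   # joint probabilities

eq10 = q12 + q13 + q23 - 2 * q123

eq11 = (p1 * p2 + p1 * p3 + p2 * p3 - 2 * p1 * p2 * p3   # "non-correlated" component
        + (q12 - p1 * p2)                                 # correlated corrections
        + (q13 - p1 * p3)
        + (q23 - p2 * p3)
        - 2 * (q123 - p1 * p2 * p3))

print(sp.simplify(eq11 - eq10))   # prints 0: the two forms are the same
```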
If we cannot assume independence, we are back to the original equation (10). Since P(E_i E_j) ≥ P(E_1 E_2 E_3)
for every pair i, j, we have
P_sys ≥ P(E_i E_j)
Clearly, if P_sys ≤ 10^-9 then P(E_i E_j) ≤ 10^-9. In other words, in order for P_sys to be in the
ultrareliable region, the interaction terms (i.e. the P(E_i E_j)) must also be in the ultrareliable region.
To establish that the system is ultrareliable, the validation must either demonstrate that these
terms are very small or establish that P sys is small by some other means (from which we could
indirectly deduce that these terms are small.) Thus, we are back to the original life-testing problem
again.
From the above discussion, it is tempting to conclude that it is necessary to demonstrate that
each of the interaction terms is very small in order to establish that P sys is very small. However,
this is not a legitimate argument. Although the interaction terms will always be small when P sys is
small, one cannot argue that the only way of establishing that P sys is small is by showing that the
interaction terms are small. However, the likelihood of establishing that P sys is very small without
directly establishing that all of the interaction terms are small appears to be extremely remote.
This follows from the observation that without further assumptions, there is little more that can be
done with equation (10). It seems inescapable that no matter how (10) is manipulated, the terms
P(E_i E_j) will appear linearly. Unless a form can be found where these terms are eliminated
altogether or appear in a non-linear form where they become negligible (e.g. all multiplied by
other parameters), the need to estimate them directly will remain. Furthermore, the information
contained in these terms must appear somewhere. The dependency of P sys on some formulation of
interaction cannot be eliminated.
Although the possibility that a method may be discovered for the validation of software fault-tolerance
remains, it is prudent to recognize where this opportunity lies. It does not lie in the
realm of controlled experimentation. The only hope is that a reformulation of equation (10) can be
discovered that enables the estimation of P sys from a set of parameters which can be estimated using
moderate amounts of testing. The efficacy of such a reformulation could be assessed analytically
before any experimentation.
5.1.3 Danger Of Extrapolation to the Ultrareliability Region
To see the danger in extrapolating from a feasible amount of testing that the different versions are
independent, we will consider some possible scenarios for coincident failure processes. Suppose that
the probability of failure of a single version during a 1 hour interval is 10^-5. If the versions fail
independently, then the probability of a coincident error is on the order of 10^-10. However, suppose
in actuality the arrival rate of a coincident error is 10^-7/hour. One could test for 100 years and
most likely not see a coincident error. From such experiments it would be tempting to conclude
that the different versions are independent. After all, we have tested the system for 100 years and
not seen even one coincident error! If we make the independence assumption, the system reliability
is assessed to be on the order of 10^-10/hour, when actually the system reliability is approximately
10^-7/hour. Similarly, if the failure rate for a single version were 10^-4/hour and the arrival rate of
coincident errors were 10^-5/hour, testing for one year would most likely result in no coincident errors. The erroneous
assumption of independence would allow the assignment of a 3 × 10^-8 probability of failure to the
system when in reality the system is no better than 10^-5.
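The risk of this extrapolation is easy to quantify under a Poisson arrival assumption for coincident errors: in both scenarios above the chance of completing the test campaign without ever seeing a coincident error is over 90%. A short sketch:

```python
import math

# Probability of observing zero coincident errors during a test campaign (sketch),
# assuming coincident errors arrive as a Poisson process with the given rate.
def prob_no_coincident_error(rate_per_hour, test_hours):
    return math.exp(-rate_per_hour * test_hours)

HOURS_PER_YEAR = 8760.0
scenarios = [
    # (true coincident error rate per hour, test duration in years)
    (1e-7, 100.0),   # versions at 1e-5/hour, true coincident rate 1e-7/hour
    (1e-5, 1.0),     # versions at 1e-4/hour, true coincident rate 1e-5/hour
]
for rate, years in scenarios:
    p0 = prob_no_coincident_error(rate, years * HOURS_PER_YEAR)
    print(f"rate {rate:.0e}/hour, {years:g} years of testing: "
          f"P(no coincident error observed) = {p0:.2f}")
```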
In conclusion, if independence cannot be assumed, it seems inescapable that the intersection
of the events (i.e. the terms P(E_i E_j)) must be directly measured. As shown above, these
occur in the system failure formula not as products, but alone, and thus must be less than 10^-12 per
input in order for the system probability of failure to be less than 10^-9 at 1 hour. Unfortunately,
testing to this level is infeasible and extrapolation from feasible amounts of testing is dangerous.
Since ultrareliability has been established as a requirement for many systems, there is great
incentive to create models which enable an estimate in the ultrareliable region. Thus, there are many
examples of software reliability models for operational ultrareliable systems. Given the ramifications
of independence on fault-tolerant software reliability quantification, unjustifiable assumptions must
not be overlooked.
5.2 Feasibility of a General Model For Coincident Errors
Given the limitations imposed by non-independence, one possible approach to the ultrareliability
quantification problem is to develop a general fault-tolerant software reliability model that accounts
for coincident errors. Two possibilities exist:
1. The model includes terms which cannot be measured within feasible amounts of time.
2. The model includes only parameters which can be measured within feasible amounts of time.
It is possible to construct elaborate probability models which fall into the first class. Unfortunately
since they depend upon unmeasurable parameters, they are useless for the quantification
of ultrareliability. The second case is the only realistic approach. 7 The independence model is
an example of the second case. Models belonging to the second case must explicitly or implicitly
express the interaction terms in equation (10) as "known" functions of parameters which can be
measured in feasible amounts of time:
P_I = f(m_1, m_2, ..., m_l)
where P_I denotes the interaction terms and the m_i are measurable parameters.
The known function f in the independence model is the zero function, i.e., the interaction terms
P_I are identically zero, irrespective of any other measurable parameters.
A more general model must provide a mechanism that makes these interaction terms negligibly
small in order to produce a number in the ultrareliable region. These known functions must be
applicable to all cases of multi-version software for which the model is intended. Clearly, any
estimation based on such a model would be strongly dependent upon correct knowledge of these
functions. But how can these functions be determined? There is little hope of deriving them from
fundamental laws, since the error process occurs in the human mind. The only possibility is to derive
them from experimentation, but experimentation can only derive functions appropriate for low or
moderate reliability software. Therefore, the correctness of these functions in the ultrareliable
region can not be established experimentally. Justifying the correctness of the known functions
requires far more testing than quantifying the reliability of a single ultrareliable system. The
model must be shown to be applicable to a specified sample space of multi-version programs. Thus,
there must be extensive sampling from the space of multi-version programs, each of which must
undergo life-testing for over 100,000 years in order to demonstrate the universal applicability of the
functions. Thus, in either case, the situation appears to be hopeless-the development of a credible
coincident error model which can be used to estimate system reliability within feasible amounts of
time is not possible.
7 The first case is included for completeness and because such models have been proposed in the past.
5.3 The Coincident-Error Experiments
Experiments have been performed by several researchers to investigate the coincident error process.
The first and perhaps most famous experiment was performed by Knight and Leveson [14]. In this
experiment 27 versions of a program were produced and subjected to 1,000,000 input cases. The
observed average failure rate per input was 0.0007. The major conclusion of the experiment was
that the independence model was rejected at the 99% confidence level. The quantity of coincident
errors was much greater than that predicted by the independence model. Experiments produced
by other researchers have confirmed the Knight-Leveson conclusion [12, 15]. An excellent discussion
of the experimental results is given in [16].
Some debate [16] has occurred over the credibility of these experiments. Rather than describe
the details of this debate, we would prefer to make a few general observations about the scope
and limitations of such experiments. First, the N-version systems used in these experiments must
have reliabilities in the low to moderate reliability region. Otherwise, no data would be obtained
which would be relevant to the independence question. 8 It is not sufficient (to get data) that the
individual versions are in this reliability region. The coincident error rate must be observable, so
the reliability of "voted" outputs must be in the low to moderate reliability region. To see this
consider the following. Suppose that we have a 3-version system where each replicate's failure rate
is 10^-4/hour. If the versions fail independently, the coincident error rate should be 3 × 10^-8/hour. The
versions are in the moderate reliability region, but the system is potentially (i.e. if independent) in
the ultrareliable region. In order to test for independence, "coincident" errors must be observed.
If the experiment is performed for one year and no coincident errors are observed, then one can
be confident that the coincident error rate (and consequently the system failure rate) is less than
about 10^-4/hour. If coincident errors are observed then the coincident error rate is probably even higher.
If the coincident error rate is actually 10^-7/hour, then the independence assumption is invalid, but
one would have to test for over 1000 years in order to have a reasonable chance to observe them!
Thus, future experiments will have one of the following results depending on the actual reliability
of the test specimens:
1. demonstration that the independence assumption does not hold for the low reliability system.
2. demonstration that the independence assumption does hold for the low reliability
system.
3. no coincident errors were seen but the test time was insufficient to demonstrate independence
for the potentially ultrareliable system.
If the system under test is a low reliability system, the independence assumption may be contradicted
or vindicated. Either way, the results will not apply to ultrareliable systems except by way
of extrapolation. If the system under test were actually ultrareliable, the third conclusion would
result. Thus, experiments can reveal problems with a model such as the independence model when
the inaccuracies are so severe that they manifest themselves in the low or moderate reliability re-
gion. However, software reliability experiments can only demonstrate that an interaction model
is inaccurate, never that a model is accurate for ultrareliable software. Thus, negative results are
possible, but never positive results.
The experiments performed by Knight and Leveson and others have been useful to alerting
the world to a formerly unnoticed critical assumption. However, it is important to realize that
8 that is, unless one was willing to carry out a "Smithsonian" experiment, i.e. one which requires centuries to
complete.
these experiments cannot accomplish what is really needed-namely, to establish with scientific
rigor that a particular design is ultrareliable or that a particular design methodology produces
ultrareliable systems. This leaves us in a terrible bind. We want to use digital processors in life-critical
applications, but we have no feasible way of establishing that they meet their ultrareliability
requirements. We must either change the reliability requirements to a level which is in the low to
moderate reliability region or give up the notion of experimental quantification. Neither option is
very appealing.
6 Conclusions
In recent years, computer systems have been introduced into life-critical situations where previously
caution had precluded their use. Despite alarming incidents of disaster already occurring with
increasing frequency, industry in the United States and abroad continues to expand the use of
digital computers to monitor and control complex real-time physical processes and mechanical
devices. The potential performance advantages of using computers over their analog predecessors
have created an atmosphere where serious safety concerns about digital hardware and software are
not adequately addressed. Although fault-tolerance research has discovered effective techniques to
protect systems from physical component failure, practical methods to prevent design errors have
not been found. Without a major change in the design and verification methods used for life-critical
systems, major disasters are almost certain to occur with increasing frequency.
Since life-testing of ultrareliable software is infeasible (i.e., to quantify a 10^-8/hour failure rate
requires more than 10^8 hours of testing), reliability models of fault-tolerant software have been
developed from which ultrareliable-system estimates can be obtained. The key assumption which
enables an ultrareliability prediction for hardware failures is that the electrically isolated processors
fail independently. This assumption is reasonable for hardware component failures, but not provable
or testable. This assumption is not reasonable for software or hardware design flaws. Furthermore,
any model which tries to include some level of non-independent interaction between the multiple
versions can not be justified experimentally. It would take more than 10^8 hours of testing to make
sure there are no coincident errors in two or more versions which appear rarely but frequently
enough to degrade the system reliability below the requirement.
Some significant conclusions can be drawn from the observations of this paper. Since digital
computers will inevitably be used in life-critical applications, it is necessary that "credible" methods
be developed for generating reliable software. Nevertheless, what constitutes a "credible" method
must be carefully reconsidered. A pervasive view is that software validation must be accomplished
by probabilistic and statistical methods. The shortcomings and pitfalls of this view have been
expounded in this paper. Based on intuitive merits, it is likely that software fault tolerance will be
used in life-critical applications. Nevertheless, the ability of this approach to generate ultrareliable
software cannot be demonstrated by research experiments. The question of whether software fault
tolerance is more effective than other design methodologies such as formal verification or vice versa
can only be answered for low or moderate reliability systems, not for ultrareliable applications.
The choice between software fault tolerance and formal verification must necessarily be based on
either extrapolation or nonexperimental reasoning.
Similarly, experiments designed to compare the accuracy of different types of software reliability
models can only be accomplished in the low to moderate reliability regions. There is little reason
to believe that a model which is accurate in the moderate region is accurate in the ultrareliable
region. It is possible that models which are inferior to other models in the moderate region are
superior in the ultrareliable region-again, this cannot be demonstrated.
Appendix
In this section, the statistics of life testing will be briefly reviewed. A more detailed presentation
can be found in a standard statistics text book such as Mann-Schafer-Singpurwalla [4]. This section
presents a statistical test based on the maximum likelihood ratio 9 and was produced using reference
extensively. The mathematical relationship between the number of test specimens, specimen
reliability, and expected time on test is explored.
n = the number of test specimens
r = the observed number of specimen failures
t_1 ≤ t_2 ≤ ... ≤ t_r = the ordered failure times
A hypothesis test is constructed to test the reliability of the system against an alternative.
The null hypothesis covers the case where the system is ultrareliable. The alternative covers the
case where the system fails to meet the reliability requirement. The ff error is the probability of
rejecting the null hypothesis when it is true (i.e. producer's risk). The fi error is the probability of
accepting the null hypothesis when it is false (i.e. consumer's risk).
There are two basic experimental procedures: (1) testing with replacement and (2) testing
without replacement. In either case, one places n items on test. The test is finished when r failures
have been observed. In the first case, when a device fails a new device is put on test in its place. In
the second case, a failed device is not replaced. The tester chooses values of n and r to obtain the
desired levels of the α and β errors. In general, the larger r and n are, the smaller the statistical
testing errors are.
It is necessary to assume some distribution for the time-to-failure of the test specimen. For
simplicity, we will assume that the distribution is exponential. 10 The test then can be reduced to
a test on exponential means, using the transformation:
θ_0 = -t / ln[R(t)]
The expected time on test can then be calculated as a function of r and n. The expected time on
test, D_t, for the replacement case is:
D_t = r θ_0 / n
where θ_0 is the mean time to failure of the test specimen. The expected time on test for the
non-replacement case is:
D_t = θ_0 Σ_{j=1}^{r} 1 / (n - j + 1)
In order to calculate the α and β errors, a specific value of the alternative mean must be selected.
Thus, the hypothesis test becomes:
H_0: θ = θ_0   versus   H_a: θ = θ_a, with θ_a < θ_0
9 The maximum likelihood ratio test is the test which provides the "best" critical region for a given α error.
If the failure times follow a Weibull distribution with known shape parameter, the data can be transformed into
variables having exponential distributions before the test is applied.
A reasonable alternative hypothesis specifies a lower reliability at 10 hours, or equivalently the corresponding smaller mean θ_a.
The test statistic T_r is given by
T_r = [ Σ_{i=1}^{r} t_i + (n - r) t_r ] / r
for the non-replacement case and
T_r = n t_r / r
for the "replacement case". The critical value T_c (below which the null hypothesis should be rejected)
can be determined as a function of α and r:
T_c = θ_0 χ²_{2r;α} / (2r)
where χ²_{ν;α} is the α percentile of the chi-square distribution with ν degrees of freedom. Given a
choice of r and α, the value of the "best" critical region is determined by this formula. The β error
can be calculated from
T_c = θ_a χ²_{2r;1-β} / (2r)
Neither of the above equations can be solved until r is determined. However, the following formula
can be derived from them:
χ²_{2r;α} / χ²_{2r;1-β} ≥ θ_a / θ_0      (14)
Given the desired α and β errors, one chooses the smallest r which satisfies this equation.
Example 1
Suppose that we wish to test:
H_0: θ = θ_0 = 10^10 hours   versus   H_a: θ = θ_a, with θ_a < θ_0
For α = β = 0.01, the smallest r satisfying equation (14) is 3 (using a chi-square table).
Thus, the critical region is T_r ≤ T_c = θ_0 χ²_{6;α} / 6. The experimenter can choose
any value of n greater than r. The larger n is, the shorter the expected time on test is. For the
replacement case, the expected time on test is D_t = 3 θ_0 / n.
no. of replicates (n) vs. Expected Test Duration D_t (hours)
Even with 10000 test specimens, the expected test time is 342 years.
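The selection of r and the resulting test durations can be reproduced with scipy, using the test construction sketched above (the percentile-ratio condition of equation (14) and D_t = rθ_0/n for the replacement case). The alternative mean θ_a below is an illustrative assumption, chosen so that the computed r matches the r = 3 of this example; it is not a value stated in the original.

```python
from scipy.stats import chi2

# Designing an exponential life test (sketch following the construction above).
ALPHA = BETA = 0.01
THETA_0 = 1e10          # null-hypothesis mean time to failure, in hours
THETA_A = 5e8           # alternative mean (illustrative assumption)
HOURS_PER_YEAR = 8760.0

def smallest_r(alpha, beta, theta_ratio, r_max=200):
    """Smallest r with chi2_{2r;alpha} / chi2_{2r;1-beta} >= theta_a / theta_0."""
    for r in range(1, r_max + 1):
        if chi2.ppf(alpha, 2 * r) / chi2.ppf(1.0 - beta, 2 * r) >= theta_ratio:
            return r
    return None

r = smallest_r(ALPHA, BETA, THETA_A / THETA_0)
print(f"smallest r = {r}")
for n in (10, 100, 1000, 10000):
    d_t = r * THETA_0 / n                       # replacement case
    print(f"n = {n:6d}: expected test time = {d_t / HOURS_PER_YEAR:.3g} years")
```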
Example 2
Suppose that we wish to test the same hypotheses with r fixed at 1.
Given α and r, the β error can be calculated. First the critical region T_c is computed from the
formula above; from a chi-square table, the β error can then be seen to be unacceptably large.
The following relationship exists between α, r, and β (tabulated as α, r, β):
The power of the test changes drastically with changes in r. Clearly r must be at least 2 to
have a reasonable value for the beta error.
Acknowledgements
The authors are grateful to Dr. Alan White for his many helpful comments and to the anonymous
reviewers for their careful reviews and many helpful suggestions.
--R
"Software safety: What, why, and how,"
"A digital matter of life and death,"
"Software bugs: A matter of life and liability,"
Methods for Statistical Analysis of Reliability and Life Data.
"Evaluation of competing reliability predictions,"
"Adaptive software reliability modeling,"
"Stochastic model for fault-removal in computer programs and hardware designs,"
"On the use and the performance of software reliability growth models,"
"Predicting software reliability,"
"Software reliability: Repetitive run experimentation and modeling,"
"The n-version approach to fault-tolerant software,"
"Fault-tolerant software reliability modeling,"
"Making statistical inferences about software reliability,"
"An experimental evaluation of the assumptions of independence in multiversion programming,"
"An empirical comparison of software fault-tolerance and fault elimination,"
"A reply to the criticisms of the Knight & Leveson experi- ment,"
--TR
Software safety: why, what, and how
An experimental evaluation of the assumption of independence in multiversion programming
Evaluation of competing software reliability predictions
Fault-tolerant software reliability modeling
Software bugs: a matter of life and liability
An Empirical Comparison of Software Fault Tolerance and Fault Elimination
A reply to the criticisms of the Knight & Leveson experiment
--CTR
Pierluigi San Pietro , Angelo Morzenti , Sandro Morasca, Generation of Execution Sequences for Modular Time Critical Systems, IEEE Transactions on Software Engineering, v.26 n.2, p.128-149, February 2000
Sandro Morasca , Angelo Morzenti , Pieluigi SanPietro, Generating functional test cases in-the-large for time-critical systems from logic-based specifications, ACM SIGSOFT Software Engineering Notes, v.21 n.3, p.39-52, May 1996
Phyllis G. Frankl , Yuetang Deng, Comparison of delivered reliability of branch, data flow and operational testing: A case study, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.124-134, Sept. 2000
V. Winter , D. Kapur , G. Fuehrer, Formal specification and refinement of a safe train control function, Formal methods for embedded distributed systems: how to master the complexity, Kluwer Academic Publishers, Norwell, MA, 2004
Dick Hamlet, On subdomains: Testing, profiles, and components, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.71-76, Sept. 2000
new type of security and safety architecture for distributed system: models and implementation, Proceedings of the 3rd international conference on Information security, November 14-16, 2004, Shanghai, China
J. H. R. May , A. D. Lunn, A Model of Code Sharing for Estimating Software Failure on Demand Probabilities, IEEE Transactions on Software Engineering, v.21 n.9, p.747-753, September 1995
Terry Shepard , Margaret Lamb , Diane Kelly, More testing should be taught, Communications of the ACM, v.44 n.6, p.103-108, June 2001
John C. Knight , Aaron G. Cass , Antonio M. Fernndez , Kevin G. Wika, Testing a safety-critical application, Proceedings of the 1994 ACM SIGSOFT international symposium on Software testing and analysis, p.199, August 17-19, 1994, Seattle, Washington, United States
Phyllis G. Frankl , Richard G. Hamlet , Bev Littlewood , Lorenzo Strigini, Evaluating Testing Methods by Delivered Reliability, IEEE Transactions on Software Engineering, v.24 n.8, p.586-601, August 1998
Peter Amey , Roderick Chapman, Static verification and extreme programming, ACM SIGAda Ada Letters, v.XXIV n.1, p.4-9, March 2004
John C. Knight, Computing systems dependability, Proceedings of the 25th International Conference on Software Engineering, May 03-10, 2003, Portland, Oregon
Elisabeth A. Strunk , Xiang Yin , John C. Knight, Echo: a practical approach to formal verification, Proceedings of the 10th international workshop on Formal methods for industrial critical systems, p.44-53, September 05-06, 2005, Lisbon, Portugal
Walter J. Gutjahr, Optimal Test Distributions for Software Failure Cost Estimation, IEEE Transactions on Software Engineering, v.21 n.3, p.219-228, March 1995
Dick Hamlet, Subdomain testing of units and systems with state, Proceedings of the 2006 international symposium on Software testing and analysis, July 17-20, 2006, Portland, Maine, USA
Phyllis Frankl , Dick Hamlet , Bev Littlewood , Lorenzo Strigini, Choosing a testing method to deliver reliability, Proceedings of the 19th international conference on Software engineering, p.68-78, May 17-23, 1997, Boston, Massachusetts, United States
Brian Mitchell , Steven J. Zeil, Modeling reliability growth during non-representative, Annals of Software Engineering, 4, p.11-29, 1997
John C. Knight, Dependability of embedded systems, Proceedings of the 24th International Conference on Software Engineering, May 19-25, 2002, Orlando, Florida
John C. Knight, An Introduction to Computing System Dependability, Proceedings of the 26th International Conference on Software Engineering, p.730-731, May 23-28, 2004
Dick Hamlet , Dave Mason , Denise Woit, Theory of software reliability based on components, Proceedings of the 23rd International Conference on Software Engineering, p.361-370, May 12-19, 2001, Toronto, Ontario, Canada
Brian Mitchell , Steven J. Zeil, A reliability model combining representative and directed testing, Proceedings of the 18th international conference on Software engineering, p.506-514, March 25-29, 1996, Berlin, Germany
S. Morasca , A. Morzenti , P. San Pietro, A Case Study on Applying a Tool for Automated System Analysis Based on Modular Specifications Written in TRIO, Automated Software Engineering, v.7 n.2, p.125-155, May 2000
Dick Hamlet, When only random testing will do, Proceedings of the 1st international workshop on Random testing, July 20-20, 2006, Portland, Maine
Xiang Yin, The echo approach to formal verification, Proceeding of the 28th international conference on Software engineering, May 20-28, 2006, Shanghai, China
Dick Hamlet, Continuity in software systems, ACM SIGSOFT Software Engineering Notes, v.27 n.4, July 2002
Paul Ammann , Dahlard L. Lukes , John C. Knight, Applying data redundancy to differential equation solvers, Annals of Software Engineering, 4, p.65-77, 1997
Robyn R. Lutz, Software engineering for safety: a roadmap, Proceedings of the Conference on The Future of Software Engineering, p.213-226, June 04-11, 2000, Limerick, Ireland
Manuel Blum , Hal Wasserman, Reflections on the Pentium Division Bug, IEEE Transactions on Computers, v.45 n.4, p.385-393, April 1996
Farokh B. Bastani, Relational programs: An architecture for robust real-time safety-critical process-control systems, Annals of Software Engineering, v.7 n.1-4, p.5-24, 1999
John Penix , Willem Visser , Seungjoon Park , Corina Pasareanu , Eric Engstrom , Aaron Larson , Nicholas Weininger, Verifying Time Partitioning in the DEOS Scheduling Kernel, Formal Methods in System Design, v.26 n.2, p.103-135, March 2005
James L. Caldwell, Formal Methods Technology Transfer: A View from NASA, Formal Methods in System Design, v.12 n.2, p.125-137, March 1, 1998
Peter Amey , Roderick Chapman, Industrial strength exception freedom, ACM SIGAda Ada Letters, v.XXIII n.1, p.1-9, March
Alena Griffiths, On proof-test intervals for safety functions implemented in software, Proceedings of the eleventh Australian workshop on Safety critical systems and software, p.23-33, August 31, 2006, Melbourne, Australia
Hal Wasserman , Manuel Blum, Software reliability via run-time result-checking, Journal of the ACM (JACM), v.44 n.6, p.826-849, Nov. 1997
Terry Shepard, An efficient set of software degree programs for one domain, Proceedings of the 23rd International Conference on Software Engineering, p.623-632, May 12-19, 2001, Toronto, Ontario, Canada
Adid Jazaa, Toward better software automation, ACM SIGSOFT Software Engineering Notes, v.20 n.1, p.79-84, Jan. 1995 | software reliability;life-critical real-time software;software fault tolerance;fault-tolerant software;growth models;fault tolerant computing;real-time systems;statistical methods;multiversion software experiments;safety;reliability |
631027 | Performance Comparison of Three Modern DBMS Architectures. | The introduction of powerful workstations connected through local area networks (LANs) inspired new database management system (DBMS) architectures that offer high performance characteristics. The authors examine three such software architecture configurations: client-server (CS), the RAD-UNIFY type of DBMS (RU), and enhanced client-server (ECS). Their specific functional components and design rationales are discussed. Three simulation models are used to provide a performance comparison under different job workloads. Simulation results show that the RU almost always performs slightly better than the CS, especially under light workloads, and that ECS offers significant performance improvement over both CS and RU. Under reasonable update rates, the ECS over CS (or RU) performance ratio is almost proportional to the number of participating clients (for less than 32 clients). The authors also examine the impact of certain key parameters on the performance of the three architectures and show that ECS is more scalable than the other two. | Introduction
Centralized DBMSs present performance restrictions due to their limited resources. In
the early eighties, a lot of research was geared towards the realization of database ma-
chines. Specialized but expensive hardware and software were used to build complex
systems that would provide high transaction throughput rates utilizing parallel processing
and accessing of multiple disks. In recent years though, we have observed different
trends. Research and technology in local area networks have matured, workstations became
very fast and inexpensive, while data volume requirements continue to grow rapidly
[2]. In the light of these developments, computer systems-and DBMSs in particular-
in order to overcome long latencies have adopted alternative configurations to improve
their performance.
In this paper, we present three such configurations for DBMSs that strive for high
throughput rates, namely: the standard Client-Server[23], the RAD-UNIFY type of
DBMS[19], and the Enhanced Client-Server architecture[16]. The primary goal of this
study is to examine performance related issues of these three architectures under different
workloads. To achieve that, we develop closed queuing network models for all
architectures and implement simulation packages. We experiment with different workloads
expressed in the context of job streams, analyze simulated performance ratios, and
derive conclusions about system bottlenecks. Finally, we show that under light update
rates (1%-5%) the Enhanced Client-Server offers performance almost proportional to the
number of the participating workstations in the configuration for 32 or less workstations.
On the other hand, the RAD-UNIFY performs almost always slightly better than the
pure Client-Server architecture.
In section 2, we survey related work. Section 3 discusses the three DBMS architectures
and identifies their specific functional components. In section 4, we propose
three closed queuing network models for the three configurations and talk briefly about
the implemented simulation packages. Section 5 presents the different workloads, the
simulation experiments and discusses the derived performance charts. Conclusions are
found in section 5.
Related Work
There is a number of studies trying to deal with similar issues like those we investigate
here. Roussopoulos and Kang [18] propose the coupling of a number of workstations
with a mainframe. Both workstations and mainframe run the same DBMS and the
workstations are free to selectively download data from the mainframe. The paper
describes the protocol for caching and maintenance of cached data.
Hagman and Ferrari [11] are among the first who tried to split the functionality
of a database system and off-load parts of it to dedicated back-end machines. They
instrumented the INGRES DBMS, assigned different layers of the DBMS to two different
machines and performed several experiments comparing the utilization rates of the CPUs,
disks and network. Among other results, they found that generally there is a 60%
overhead in disk I/O and a suspected overhead of similar size for CPU cycles. They
attribute these findings to the mismatch between the operating and the DBMS system.
The cooperation between a server and a number of workstations in an engineering design
environment is examined in [13]. The DBMS prototype that supports a multi-level
communication between workstations and server which tries to reduce redundant work
at both ends is described.
DeWitt et al. [8] examine the performance of three workstation-server architectures
from the Object-Oriented DBMS point of view. Three approaches in building a server
are proposed: object, server and file server. A detailed simulation study is presented with
different loads but no concurrency control. They report that the page and file server
gain the most from object clustering, page and object servers are heavily dependent
on the size of the workstation buffers and finally that file and page servers perform
better in experiments with a few write type transactions (response time measurements).
Wilkinson and Niemat in [26] propose two algorithms for maintaining consistency of
workstation cached data. These algorithms are based on cache, and notify locks and
new lock compatibility matrices are proposed. The novel point of this work is that server
concurrency control and cache consistency are treated in a unified approach. Simulation
results show that cache locks always give a better performance than two-phase locking
and that notify locks perform better than cache locks whenever jobs are not CPU bound.
Alonso et al. in [2] support the idea that caching improves performance in information
retrieval systems considerably and introduce the concept of quasi-caching. Different
caching algorithms- allowing various degree of cache consistency- are discussed and
studied using analytical queuing models. Delis and Roussopoulos in [6] through a simulation
approach examine the performance of server based information systems under light
updates and they show that this architecture offers significant transaction processing
rates even under considerable updates of the server resident data. In [17], we describe
modern Client-Server DBMS architectures and report some preliminary results on their
performance, while in [7], we examine the scalability of three such Client-Server DBMS
configurations.
Carey et al. in [5] examine the performance of five algorithms that maintain consistency
of cached data in client-server DBMS architecture. The important assumption
of this work is that client data are maintained in cache memory and they are not disk
resident. Wang and Rowe in [25], in a similar study, examine the performance of five more
cache consistency algorithms in a client-server configuration. Their simulation experiments
indicate that either a two phase locking or a certification consistency algorithm
offer the best performance in almost all cases. Work indirectly related to the issues
examined in this paper includes the Coda distributed filing system project [20] and the cache
coherence algorithms described in [3].
The ECS model-presented here-is a slightly modified abstraction of the system
design described in [18] and is discussed in [16]. In this paper, we extend the work in
three ways: first, we give relative performance measures against the other two existing
DBMS configurations; second, we analyze the role of some key system parameter values; and finally,
we provide insights about the scalability of the architectures.
3 Modern DBMS Architectures
Here, we briefly review three alternatives for modern database system architectures and
highlight their differences. There is a number of reasons that made these configurations
a reality:
1. The introduction of inexpensive but extremely fast processors on workstations with
large amount of main memory and medium size disk.
2. The ever-growing volume of operational databases.
3. The need for data staging: that is, extracting for a user class just a portion of the
database that defines the user's operational region.
Although the architectures are general, they are described here for the relational model
only.
3.1 Client-Server Architecture (CS)
The Client-Server architecture(CS) is an extension of the traditional centralized database
system model. It originated in engineering applications where data are mostly processed
in powerful clients, while centralized repositories with check in and out protocols are
predominantly used for maintaining data consistency.
In a CS database [23], each client runs an application on a workstation (client) but
does database access from the server. This is depicted in Figure 1(a). Only one client
is shown for the sake of presentation. The communication between server and clients is
done through remote calls over a local area network (LAN) [22]. Applications processing
is carried out at the client sites, leaving the server free to carry out database work only.
The same model is also applicable for a single machine in which one process runs the
server and others run the clients. This configuration avoids the transmission of data
over the network but obviously puts the burden of the system load on the server. In this
paper, we assume that server and client processes run on different hardware.
3.2 RAD-UNIFY Type of DBMS Architecture(RU)
The broad availability of high speed networks and fast processing diskless workstations
were the principal reasons for the introduction of the RAD-UNIFY type of DBMS ar-
chitecture(RU) presented by Rubinstein et al. in [19]. The main objective of this configuration
is to improve the response time by utilizing both the client processing capability
and its memory. This architecture is depicted in Figure 1(b). The role of the server is
to execute low level DBMS operations such as locking and page reads/writes. As Figure
1(b) suggests, the server maintains the lock and the data manager. The client performs
the query processing and determines a plan to be executed. As datapages are being
retrieved, they are sent to the diskless client memory for carrying out the rest of the
processing.
In the above study, some experimental results are presented where mostly look up
operations perform much better than in traditional Client-Server configurations. It is
also acknowledged that the architecture may perform well only under the assumption of
light update loads. More specifically in the system's prototype, it is required that only
one server database writer is permitted at a time. The novel point of the architecture
is the introduction of client cache memory used for database processing. Therefore, the
clients can use their own CPU to process the pages resident in their caches and the server
degenerates to a file server and a lock manager. The paper [19] suggests that the transfer
of pages to the appropriate client memory gives improved response times at least in the
case of small to medium size retrieval operations. This is naturally dependent on the
size of each client cache memory.
3.3 Enhanced Client-Server Architecture (ECS)
The RU architecture relieves most of the CPU load on the server but does little to the
biggest bottleneck, the I/O data manager. The Enhanced Client-Server Architecture
reduces both the CPU and the I/O load by caching query results and by incorporating
in the client a full-fledged DBMS for incremental management of cached data
[15, 16].
Figure 1: CS, RU, and ECS architectures. (Each configuration connects clients to a server over a LAN; the diagram labels the shared database, server DBMS, and communication software on the server side, application and communication software on the clients, the RU server's locking and data managers, and the ECS client's cached data and client DBMS.)
Initially, the clients start off with an empty database. Caching query results over
time permits a user to create a local database subset which is pertinent to the user's
application. Essentially, a client database is a partial replica of the server database.
Furthermore, a user can integrate into her/his local database private data not accessible
to others. Caching in general presents advantages and disadvantages. The two
major advantages are that it eliminates requests for the same data from the server and
boosts performance with client CPUs working on local data copies. In the presence of
updates though, the system needs to ensure proper propagation of new item values to
the appropriate clients. Figure 1(c) depicts this new architecture.
Updates are directed for execution to the server which is the primary site. Pages
to be modified are read in main memory, updated and flushed back to the server disk.
Every server relation is associated with an update propagation log which consists of
timestamped inserted tuples and timestamped qualifying conditions for deleted tuples.
Only updated (committed) tuples are recorded in these logs. The amount of bytes
written in the log per update is generally much smaller than the size of the pages read
in main memory.
Queries involving server relations are transmitted to and processed initially by the
server. When the result of a query is cached into a local relation for the first time, this
local relation is bound to the server relations used in extracting the result. Every
such binding is recorded in the server DBMS catalog by stating three items of interest:
the participating server relation(s), the applicable condition(s) on the relation(s), and a
timestamp. The condition is essentially the filtering mechanism that decides what are
the qualifying tuples for a particular client. The timestamp indicates what is the last
time that a client has seen the updates that may affect the cached data. There are two
possible scenarios for implementing this binding catalog information. The first approach
is that the server undertakes the whole task. In this case, the server maintains all the
information about who caches what and up to what time. The second alternative is that
each client individually keeps track of its binding information. This releases the server's
DBMS from the responsibility to keep track of a large number of cached data subsets
and the different update statuses which multiplies quickly with the number of clients.
Query processing against bound data is preceded by a request for an incremental
update of the cached data. The server is required to look up the portion of the log
that maintains timestamps greater than the one seen by the submitting client. This is
possible once the binding information for the demanding client is available. If the first
implementation alternative is used, this can be done readily. However, if the second
solution is followed, then the client request should be accompanied by the proper binding
template. This will enable the server to perform the correct
tuple filtering. Only relevant fractions (increments) of the modifications are propagated
to the client's site. The set of algorithms that carry out these tasks are based on the
Incremental Access Methods for relational operators described in [15] and involve looking
at the update logs of the server and transmitting differential files [21]. This significantly
reduces data transmission over the network as it only transmits the increments affecting
the bound object, compared with traditional CS architectures in which query results are
continuously transmitted in their entirety.
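To make the filtering step concrete, the following is a minimal sketch of how a server might compute the increments for one binding. It is an illustration under our own simplifying assumptions (deletions are represented by tuple identifiers instead of qualifying conditions, and each binding holds a single predicate); it is not the Incremental Access Method code of [15].

    from dataclasses import dataclass
    from typing import Callable, List

    @dataclass
    class LogEntry:
        ts: int            # commit timestamp of the recorded change
        kind: str          # 'insert' or 'delete'
        payload: object    # inserted tuple, or identifier of a deleted tuple

    @dataclass
    class Binding:
        predicate: Callable[[object], bool]   # condition defining the client's cached subset
        last_seen_ts: int                     # last update timestamp seen by this client

    def increments_for(log: List[LogEntry], binding: Binding) -> List[LogEntry]:
        # Ship only changes newer than the client's timestamp and relevant to its
        # cached subset; deletions are always forwarded in this simplified sketch.
        return [e for e in log
                if e.ts > binding.last_seen_ts
                and (e.kind == 'delete' or binding.predicate(e.payload))]
    # After the increments are shipped, the catalog timestamp for this binding is advanced.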
It is important to point out some of the characteristics of the concurrency control
mechanism assumed in the ECS architecture. First, since updates are done on the
server, a two-phase locking protocol is assumed to be running on the server DBMS (this is
also suggested by a recent study [25]). For the time being, and until commercial DBMSs
reveal a two-phase commit protocol, we assume that updates are single-server transactions.
Second, we assume that the update logs on the servers are not locked and, therefore, the
server can process multiple concurrent requests for incremental updates.
4 Models for DBMS Architectures
In this section, we develop three closed queuing network models, one for each of the three
architectures. The implementation of the simulation packages is based on them. We
first discuss the common components of all models and then the specific elements of
each closed network.
4.1 Common Model Components and DBMS Locking
All models maintain a WorkLoad Generator that is part of either the client or the
workstation. The WorkLoad Generator is responsible for the creation of the jobs (of
either read or write type). Every job consists of some database operation such as selection,
projection, join, or update. These operations are represented in an SQL-like language
developed as part of the simulation interface. This interface specifies the type of the operation
as well as simple and join page selectivities. The role of the WorkLoad Generator
is to randomly generate client jobs from an initial menu of jobs. It is also responsible for
maintaining the mix of read and write types of operations. When a job finishes successfully,
the WorkLoad Generator submits the next job. This process continues until
an entire sequence of queries/updates is processed. To accurately measure throughput,
we assume that the sites' query/update jobs are continuous (no think-time between
jobs).
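A minimal sketch of such a generator is shown below; the job names and the way the mix ratio is enforced are assumptions of ours for illustration, not the simulators' actual code.

    import random

    READ_JOBS  = ['selection', 'projection', 'join']   # assumed read-type menu
    WRITE_JOBS = ['update']                            # assumed write-type menu

    def workload(stream_length, mix_ratio, seed=0):
        # Yield one job at a time, keeping roughly `mix_ratio` reads per write;
        # the caller submits the next job only after the previous one finishes.
        rng = random.Random(seed)
        for i in range(stream_length):
            if i % (mix_ratio + 1) == mix_ratio:       # e.g. every 11th job is a write
                yield rng.choice(WRITE_JOBS)
            else:
                yield rng.choice(READ_JOBS)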
All three models have a Network Manager that performs two tasks:
1. It routes messages and acknowledgments between the server and the clients.
2. It transfers results/datapages/increments to the corresponding client site depending
on the CS/RU/ECS architecture respectively. Results are answers to the
queries of the requesting jobs. Datapages are pages requested by RU during the
query processing at the workstation site. Increments are new records and tuple
identifiers of deleted records used in the ECS model for incremental maintenance of
cached data.
The parameters related to the network manager appear in Table 1. The time overhead
for every remote procedure call is represented by init time [14], while net rate is the
network's transfer rate (Mbits/sec). The average size of each job request message is
mesg length.
Table 1: Network Parameters
  Parameter      Meaning                                      Value
  init time      time overhead for a remote procedure call    10 msec
  net rate       network speed                                10 Mbits/sec
  mesg length    average size of requesting messages          300 bytes
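One plausible way to read these parameters (an assumption on our part, not a formula stated in the text) is that shipping a message of a given size costs a fixed remote-call overhead plus transmission time at the network rate:

    INIT_TIME   = 0.010          # sec, remote procedure call overhead (Table 1)
    NET_RATE    = 10_000_000     # bits/sec
    MESG_LENGTH = 300            # bytes, average request message

    def transfer_time(nbytes):
        # Assumed cost model: fixed call overhead plus serialization time.
        return INIT_TIME + (nbytes * 8) / NET_RATE

    request_cost = transfer_time(MESG_LENGTH)   # roughly 10.24 msec per request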
Locking is at the page level. There are two types of locks in all models: shared
and exclusive. We use the lock compatibility matrix described in [10] and the standard
two-phase locking protocol.
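For concreteness, the shared/exclusive compatibility test used by a page-level lock manager can be sketched as follows (an illustrative fragment, not the simulators' implementation):

    # True means a newly requested lock mode is compatible with an already held mode.
    COMPATIBLE = {('S', 'S'): True,  ('S', 'X'): False,
                  ('X', 'S'): False, ('X', 'X'): False}

    def can_grant(requested_mode, held_modes):
        # Grant a page lock only if the requested mode is compatible with every
        # mode currently held on that page by other jobs.
        return all(COMPATIBLE[(requested_mode, h)] for h in held_modes)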
4.2 Client-Server Model
Figure 2 outlines the closed queueing network for the CS model. It is an extension of the
model proposed in [1]. It consists of three parts: the server, the network manager and
the client. Only one client is depicted in the figure. The client model runs application
programs and directs all DBMS inquiries and updates through the network manager to
the CS server. The network manager routes requests to the server and transfers the
results of the queries to the appropriate client nodes.
A job submitted at the client's site, passes through the Send Queue, the network,
the server's Input Queue, and finally arrives at the Ready Queue of the system where
it awaits service. A maximum multiprogramming degree is assumed by the model. This
limits the maximum number of jobs concurrently active and competing for system resources
in the server. Pending jobs remain in the Ready Queue until the number of
active jobs becomes less than the multiprogramming level (MPL). When a job becomes
active, it is queued at the concurrency control manager and is ready to compete for system
resources such as CPU, access to disk pages and lock tables (which are considered to
be memory resident structures). In the presence of multiple jobs, CPU is allocated in a
round-robin fashion.
Figure 2: Queuing Model for CS Architecture
The physical limitations of the server main memory (and the client
main memory as we see later on) allocated to database buffering may contribute to extra
data page accesses from the disk. The number of these page accesses is determined by
formulas given in [24] and they depend on the type of the database operation.
Jobs are serviced at all queues with a FIFO discipline. The concurrency control
manager (CCM) attempts to satisfy all the locks requested by a job. There are several
possibilities depending on what happens after a set of locks is requested. If the
requested locks are acquired, then the corresponding page requests are queued in the
ReadQueue for processing at the I/O RD module. Pages read by RD are buffered in
the P rocessingQueue. CPU works (PRC) on jobs queued in P rocessingQueue. If
only some of the pages are locked successfully, then part of the request can be processed
normally (RD;PRC) but its remaining part has to reenter the concurrency control manager
queue. Obviously, this is due to a lock conflict and is routed to CCM through the
Blocked Queue. This lock will be acquired when the conflict ceases to exist.
For each update request, the corresponding pages are exclusively locked, and subsequently
buffered and updated (UPD). If the job is unsuccessful in obtaining a lock, it
is queued in the Blocked Queue. If the same lock request passes unsuccessfully a predefined
number of times through the Blocked Queue, then a deadlock detection mechanism
is triggered. The detection mechanism simply looks for cycles in a wait-for graph that
is maintained throughout the processing of the jobs. If such a cycle is found, then the
job with the least amount of processing done so far is aborted and restarted. Aborted
jobs release their locks and rejoin the system's Ready Queue. Finally, if a job commits,
all changes are reflected on the disk, locks are released, and the job departs.
Table 2: Server Parameters
  Parameter          Meaning                                     Value
  server cpu mips    processing power of server                  21 MIPS
  disk tr            average disk transfer time                  12 msec
  main memory        size of server main memory                  2000 pages
  instr per lock     instructions executed per lock              4000
  instr selection    instructions executed per selected page     10000
  instr projection   instructions per projected page             11000
  instr join         instructions per joined page                29000
  instr update       instructions per updated page               12500
  instr RD page      instructions to read a page from disk       6500
  instr WR page      instructions to write a page to disk        8000
  ddlock search      deadlock search time
  kill time          time required to kill a job                 0.2 sec
  mpl                multiprogramming level                      10
Server related parameters appear in Table 2 (these parameters are also applicable to
the other two models). Most of them are self-describing and give processing overheads,
time penalties, ratios and ranges of system variables. Two issues should be mentioned:
first, the queuing model assumes an average value for disk access, thereby avoiding issues
related to data placement on the disk(s). Second, whenever a deadlock has been detected,
the system timing is charged with an amount equal to ddlock search + (active jobs *
kill time). Therefore, the deadlock overhead is proportional to the number of active
jobs.
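The cycle test itself is ordinary graph search; a compact sketch of the idea follows (illustrative only, and the victim selection by least work done is applied afterwards, outside this fragment):

    def find_deadlock(wait_for):
        # wait_for maps each job to the set of jobs it is blocked behind.
        # Returns some job lying on a cycle (a deadlock victim candidate) or None.
        WHITE, GREY, BLACK = 0, 1, 2
        color = {j: WHITE for j in wait_for}

        def visit(j):
            color[j] = GREY
            for k in wait_for.get(j, ()):
                if color.get(k, WHITE) == GREY:
                    return k                      # back edge: a cycle exists
                if color.get(k, WHITE) == WHITE:
                    hit = visit(k)
                    if hit is not None:
                        return hit
            color[j] = BLACK
            return None

        for j in list(wait_for):
            if color[j] == WHITE:
                hit = visit(j)
                if hit is not None:
                    return hit
        return None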
The database parameters (applicable to all three models) are described by the set
of parameters shown in Table 3. Each relation of the database consists of the following
information: a unique name (rel name), the number of its tuples (card), and the size
of each of those tuples (rel size). The stream mix indicates the composition of the
streams submitted by the WorkLoad Generator(s) in terms of read and write jobs.
Note also that the size of the server main memory was defined to hold just a portion
of the disk-resident database, which is 25% in the case of a multiprogramming level equal to
one (we have applied this fraction concept later on with the sizes of the client main
memories). For the above selected parameter values, every segment of the multiprogrammed
main memory is equal to 200 pages (main memory / mpl).
Table 3: Data Parameters
  Parameter     Meaning                              Value
  name of db    name of the database                 TestDataBase
  dp size       size of the data page                2048 bytes
  num rels      number of relations                  8
  fill factor   page filling factor                  98%
  rel name      name of a relation                   R1, R2, ..., R8
  card          cardinality of every relation        20k
  rel size      size of a relation tuple (bytes)     100
  stream mix    query/update ratio                   10
4.3 RAD-UNIFY Model
Figure 3: Queuing Model for RAD-UNIFY Architecture
Figure 3 depicts a closed network model for the RAD-UNIFY type of architecture. There
are a few differences from the CS model:
- Every client uses a cache memory for buffering datapages during query processing.
- Only one writer at a time is allowed to update the server database.
- Every client works off its finite-capacity cache and, any time it needs an additional page, forwards the request to the server.
- The handling of aborts is done in a slightly different manner.
The WorkLoad Generator creates the jobs which, through the proper SendQueue,
the network and the server's InputQueue are directed to the server's ReadyQueue. The
functionality of the MPL processor is modified to account for the single writer require-
ment. One writer may coexist with more than one reader at a time. The functionality of
the UPD and RD processors, their corresponding queues as well as the queue of blocked
jobs is similar to that of the previous model.
The most prominent difference from the previous model is that the query job pages
are only read (ReadQueue and RD service module) and then through the network are
directed to the workstation buffers awaiting processing. The architecture capitalizes on
the workstations' processing capability. The server becomes the device responsible for
the locking and page retrieval which make up the "low" level database system operations.
Write-type jobs are executed solely on the server and are serviced by the
UpdateQueue and the UPD service module.
As soon as pages have been buffered at the client site (Processing Queue), the
local CPU may commence processing whenever it is available. The internal loop of the
RU client model corresponds to subsequent requests of the same page that may
reside in the client cache. Since the size of the cache is finite, this may force the same
page to be requested many times, depending on the type of the job [24]. The replacement is
performed either in a least recently used (LRU) discipline or in the way specific database
operations call for (e.g., the sort-merge join operation). The wait-for graph of
the processes executing is maintained at the server which is the responsible component
for deadlock detection. Once a deadlock is found, a job is selected to be killed. This job
is queued in the AbortQueue and the ABRT processing element releases all the locks
of this process and sends an abort signal to the appropriate client. The client abandons
any further processing on the current job, discards the datapages fetched so far from the
server and instructs the WorkLoad Generator to resubmit the same job. Processes that
commit notify the diskless client component with a proper signal, so that the correct
job scheduling takes place. In the presence of an incomplete job (still either active or
pending at the server) the WorkLoad Generator takes no action, awaiting either a
commit or an abort signal.
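The LRU discipline mentioned above for the client cache can be sketched in a few lines; page identifiers and the fetch call are placeholders, not the prototype's interfaces.

    from collections import OrderedDict

    class ClientCache:
        # Fixed-capacity page cache with least-recently-used replacement.
        def __init__(self, capacity_pages):
            self.capacity = capacity_pages
            self.pages = OrderedDict()            # page_id -> page contents

        def get(self, page_id, fetch_from_server):
            if page_id in self.pages:             # cache hit: refresh recency
                self.pages.move_to_end(page_id)
                return self.pages[page_id]
            page = fetch_from_server(page_id)     # miss: request the page from the server
            self.pages[page_id] = page
            if len(self.pages) > self.capacity:   # evict the least recently used page
                self.pages.popitem(last=False)
            return page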
4.4 Enhanced Client-Server Model
The closed queuing model for the ECS architecture is shown in Figure 4. There is a
major difference from the previous models:
- The server is extended to facilitate incremental update propagation of cached data
and/or caching of new data. Every time a client sends a request, appropriate
sections of the server logs need to be shipped from the server back over to the
client. This action decides whether there have been more recent changes than the
latest seen by the requesting client. In addition, the client may demand new data
(not previously cached on the local disk) for the processing of its application.
Figure 4: Queuing Model for ECS Architecture
Initially, client-disk cached data from the server relations can be defined using the
parameter α_Rel_i, which corresponds to the percentage of server relation Rel_i
cached in each participating client. Jobs initiated at the
client sites are always dispatched to the server for processing through a message output
queue (Send Queue). The server receives these requests in its Input Queue via the
network and forwards them to the Ready Queue. When a job is finally scheduled by the
concurrency control manager (CCM ), its type can be determined. If the job is of write
type, it is serviced by the Update, Blocked, Abort Queues and the UPD and ABRT
processing elements. At commit time, updated pages are not only flushed to the disk but
a fraction (write log fract) of these pages is appended in the log of the modified relation
(assuming that only a fraction of each page's tuples is modified at a time). If it is just
a read-only job, it is routed to the LOGRD Queue. This queue is serviced by the
log and read manager (LOGRDM), which decides what pertinent changes need to be
transmitted before the job is evaluated at the client, or what new portion of the data is to be
downloaded to the client. The increments are determined after all the unexamined pieces
of the logs are read and filtered through the client's applicable conditions. The factor that
decides the amount of new data cached per job is defined for every client individually
by the parameter cont caching perc. This last factor determines the percentage of new
pages to be cached every time (this parameter is set to zero for the initial set of our
experiments). The log and read manager sends, through a queue, either the requested
data increments along with the newly read data or an acknowledgment.
The model for the clients requires some modification due to the existence of the client disk.
Transactions commence at their respective WorkLoad Generators and are sent through
the network to the server. Once the server's answer has been received, two possibilities
exist. The first is that no data has been transferred from the server, just an acknowledgment
that no increments exist. In this case, the client's DBMS goes on with the
execution of the query. On the other hand, incremental changes received from the server
are accommodated by the increment service module (ISM) which is responsible for reflecting
them on the local disk. The service for the query evaluation is provided through
a loop of repeated data page accesses and processing. Clearly, there is some overhead
involved whenever updates on the server affect a client's cached data or new data are
being cached.
Table 4 shows some additional parameters for all the models. Average client disk
access time, client CPU processing power, and client main memory size are described by
client disk tr, client cpu mips and client main memory respectively. The num clients
is the number of clients participating in the system configuration in every experiment
and instr log page is the number of instructions executed by the ECS server in order to
process a log page.
Table 4: Additional CS, RU and ECS parameters
  Parameter            Meaning                                Value                    Applicable in Model
  client disk tr       average disk transfer time             15 msec                  ECS
  client cpu mips      processing power of client             20 MIPS                  RU, ECS
  client main memory   size of client main memory             500 pages                RU, ECS
  α_Rel_i              initial cached fraction of server      0.30 or 0.40             ECS
                       relation Rel_i
  instr log page       instructions to process a log page     5000                     ECS
  log fract            fraction of updated pages to be        0.10                     ECS
                       recorded to the log
  cont caching perc    continuous caching factor              0                        ECS
  num clients          number of clients                      4, 8, 16, 24, ..., 56    CS, RU, ECS
4.5 Simulation Packages
The simulation packages were written in C and their sizes range from 4.6k to 5.4k lines
of source code. A two-phase locking protocol is used and we have implemented time-out
mechanisms for detecting deadlocks. Aborted jobs are not scheduled immediately but
they are delayed for a number of rounds before restart. The execution time of all three
simulators for a complete workload is about one day on a DECstation 5000/200. We
also ensure, through the method of batch means [9], that our simulations reach stability
(confidence of more than 96%).
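A minimal sketch of the batch-means check follows; the batch count and the critical value are illustrative assumptions (a Student-t quantile would normally be used), not the settings of [9] or of our simulators.

    import statistics

    def batch_means_ci(samples, num_batches=20, z=2.05):
        # Split a long run into batches, average each batch, and build an
        # approximate confidence interval around the grand mean.
        size = len(samples) // num_batches
        means = [statistics.mean(samples[i * size:(i + 1) * size])
                 for i in range(num_batches)]
        grand = statistics.mean(means)
        half = z * statistics.stdev(means) / (num_batches ** 0.5)
        return grand, half        # the run is considered stable when half/grand is small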
5 Simulation Results
In this section, we discuss performance metrics and describe some of the experiments
conducted using the three simulators. System- and data-related parameter values appear
in Tables 1 to 4.
5.1 Performance Metrics, Query-Update Streams, and Workload Settings
The main performance criteria for our evaluation are the average throughput (jobs/min),
and throughput speedup defined as the ratio of two throughput values. Speedup is
related to the throughput gap [12] and measures the relative performance of two system
configurations for several job streams. We also measure server disk access reduction
which is the ratio of server disk accesses performed by two system configurations, cache
memory hits and various system resource utilizations.
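For clarity, the two main metrics reduce to simple ratios; the numbers below are made up for illustration and are not measured values.

    def throughput(jobs_completed, elapsed_minutes):
        return jobs_completed / elapsed_minutes            # jobs/min

    def speedup(throughput_a, throughput_b):
        return throughput_a / throughput_b                 # relative performance of A over B

    def disk_reduction(accesses_b, accesses_a):
        return accesses_b / accesses_a                     # disk accesses saved by A over B

    # e.g. a run finishing 1000 jobs in 50 minutes compared against 11.6 jobs/min:
    print(speedup(throughput(1000, 50), 11.6))             # about 1.72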
The database for our experiments consists of eight base relations. Every relation
has cardinality of 20000 tuples and requires 1000 disk pages. The main memory of the
server can retain 2000 pages for all the multiprogrammed jobs while each client can
retain at most 500 pages for database processing in its own main memory for the case
of RU and ECS configurations. Initially, the database is resident on the server's disk but
as time progresses parts of it are cached either in the cache memory of the RU clients or
the disk and the main memory of the ECS clients.
In order to evaluate the three DBMS architectures, the WorkLoad Generator
creates several workloads using a mix of queries and modifications. A Query-Update
Stream(QUS) is a sequence of query and update jobs mixed in a predefined ratio. In
the first two experiments of this paper, the mix ratio is 10, that is each QUS includes
one update per ten queries. QUS jobs are randomly selected from a predetermined set
of operations that describe the workload. Every client submits for execution a QUS and
terminates when all its jobs have completed successfully. Exactly the same QUS are
submitted to all configurations. The length of the QUSs was selected to be 132 jobs
since that gave confidence of more than 96% in our results.
Our goal is to examine system performances under diverse parameter settings. The
principal resources all DBMS configurations compete for are CPU cycles and disk ac-
cesses. QUS consists of three groups of jobs. Two of them are queries and the other is
updates. Since the mix of jobs plays an important role in system performance (as Boral
and DeWitt show in [4]), we chose small and large queries. The first two experiments
included in this paper correspond to low and high workloads. The small size query set
(SQS) consists of 8 selections on the base relations with tuple selectivity of 5% (2 of
them are done on clustered attributes) and 4 2-way join operations with join selectivity
0.2. The large size query set (LQS) consists of 8 selections with tuple selectivity equal to
selections with tuple selectivity of 40% (4 of these selections are done on clustered
attributes), 3 projections, and finally 4 2-way joins with join selectivity .40. Update
jobs (U) are made up of 8 modifications with varying update rates (4 use clustered at-
tributes). The update rates give the percentage of pages modified in a relation during
an update. For our experiments, update rates are set to the following values: 0% (no
modifications), 2%, 4%, 6% and 8%.
Given the above group classification, we formulate two experiments: SQS-U, and
LQS-U. A job stream of a particular experiment, say SQS-U, and of x% update rate
consists of queries from the SQS group and updates from the U that modify x% of
the server relation pages. Another set of experiments designed around the Wisconsin
benchmark can be found in [16].
Figure 5: CS and RU Throughput Rates (SQS-U)
5.2 Experiments: SQS-U and LQS-U
In Figure 5, the throughput rates for both CS and RU architectures are presented. The
number of clients varies from 4 to 56. There are clearly two groups of curves: those of
the CS (located in the lower part of the chart) and those of the RU (located in the upper
part of the chart). Overall, we could say that the throughput averages at about 11.6 jobs
per minute for the CS and 23.7 for the RU case. From 4 to 16 CS clients, throughput
increases as the CS configurations capitalize upon their resources. For more than 24 CS
clients, we observe a throughput decline for the non-zero update streams attributed to
high contention and the higher number of aborted and restarted jobs. The RU 0% curve
always remains in the range between 32 and 33 jobs/min since client cache memories
essentially provide much larger memory partitions than the corresponding ones of the
CS. The "one writer at a time" requirement of the RU configuration plays a major role
in the almost linear decrease in throughput performance for the non-zero update curves
as the number of clients increases. Note that at 56 clients, the RU performance values
obtained for QUS with 4% to 8% update rates are about the same as those of their CS
counterparts. It is evident from the above figure that since RU utilizes both the cache
memories and the CPU of its clients for DBMS page processing, it performs considerably
better than its CS counterpart (except in the case of many clients submitting non-zero
update streams, where RU throughput rates are comparable with those obtained in the
CS configuration). The average RU throughput improvement for this experiment was
calculated to be 2.04 times higher than that of CS.
Figure 6: ECS Throughput (SQS-U)
Figure 6 shows the ECS throughput for identical streams. The 0% update curve
shows that the throughput of the system increases almost linearly with the number of
clients. This benefit is due to two reasons: 1) the clients use only their local disks
and achieve maximal parallel access to already cached data; 2) the server carries out
negligible disk operations (there are no updates, therefore the logs are empty) and
handles only short messages and acknowledgments routed through the network. No
data movement across the network is observed in this case. As the update rate increases
(2%, 4%) the level of the achieved throughput rates remains high and increases almost
linearly with the number of workstations. This holds up to stations. After that, we
observe a small decline that is due to the higher conflict rate caused by the increased number
of updates. Similar declines are observed for all other curves (except the 0% update curve). From
Figures 5 and 6, we see that the performance of ECS is significantly higher than that
of the CS and RU. For ECS the maximum processing rate is 1316.4 jobs/min (all 56
clients attached to a single server with 0% updates) while the maximum throughput
value for the CS is about 12.6 jobs/min and for the RU 32.9 jobs/min.
Figure 7: ECS/CS and RU/CS Throughput Speedup
The number of
workstations after which we observe a decline in job throughput is termed "maximum
throughput threshold" (mtt) and varies with the update rate. For instance, for the
2% curve it comes at about 35 workstations and for the 6%, 8% curves it appears in the
region around 20 workstations. The mtt greatly depends on the type of submitted jobs,
the composition of the QUS as well as the server concurrency control manager. A more
sophisticated (flexible) manager (such as that in [26]) than the one used in our simulation
package would further increase the mtt values.
Figure 7 depicts the throughput speedup for RU and ECS architectures over CS (the y
axis is depicted in logarithmic scale). It suggests that the average throughput for ECS is
almost proportional to the number of clients at least for the light update streams (0%,
2%, and 4%) in the range of 4 to 32 stations. For the 4% update stream, the relative
throughput for ECS is 17 times better than its CS-RU counterparts (at 56 clients). It
is worth noting that even for the worst case (8% updates), the ECS system performance
remains about 9 times higher than that of CS. The decline though starts earlier, at
clients, where it still maintains about 10 times higher job processing capability. At the
lower part of the Figure 7, the RU over CS speedup is shown. Although RU performs
generally better than CS under the SQS-U workload for the reasons given earlier, its
corresponding speedup is notably much smaller than that of ECS. The principal reasons
for the superiority of the ECS are the off-loading of the server disk operations and
the parallel use of local disks.
Figure 8: ECS/CS and RU/CS Server Disk Reduction (SQS-U)
Figure 8 supports these hypotheses. It presents the server disk access reduction
achieved by ECS over CS and similarly the one achieved by RU over CS. For 0% update
streams, ECS causes insignificant server disk access (assuming that all pertinent user
data have already been cached in the workstation disks). For 2% update
rate QUSs the reduction varies from 102 to 32 times over the entire range of clients. As
expected, this reduction drops further for the more update intensive streams (i.e. curves
4%, 6%, and 8%). However, even for the 8% updates the disk reduction ranges from 26
to 8 times which is a significant gain. The disk reduction rates of the RU architecture
over the CS vary between 2.4 and 2.6 times throughout the range of the clients and for
all QUS curves. This is achieved predominantly by the use of the client cache memories
which are larger than the corresponding main memory multiprogramming partitions of
the CS configuration, thereby avoiding many page replacements.
Figure 9 reveals one of the greatest impediments to the performance of ECS, namely
the log operations. The above graph shows the percentage of the log-pertinent disk
accesses performed (both log reads and writes) over the total number of server disk
accesses. In the presence of more than 40 clients, the log disk operations constitute a
large fraction of server disk accesses (around 69%).
Figure 9: Percentage of Server Page Accesses due to Log Operations
Figure 10 depicts the time spent on the network for all configurations and for
update rates: 0%, 4%, 8%. The ECS model causes the least amount of traffic in the
network because the increments of the results are small in size. The RU model causes
the highest network traffic because all datapages must be transferred to the workstation
for processing. On the other hand, CS transfers only qualifying records (results). For 56
clients, the RU network traffic is almost double the CS (2.15), and 7 times more than
that of ECS.
Figure 11 summarizes the results of the LQS-U experiment. Although the nature
of the query mix has changed considerably, we notice that the speedup curves have not
(compare with Figure 7). The only difference is that all the non-zero update curves have
been moved upwards closer to the 0% curve and that the mtts appear much later from
to 50 clients. Positive speedup deviations are observed for all the non-zero curves
compared with the rates seen in Figure 7. It is interesting to note that the gains produced
by the RU model are lower than in the SQS-U experiment. We offer the following
explanation: as the number of pages to be processed by the RU server disk increases
significantly (larger selection/join selectivities and projection operators involved in the
experiment), imposing delays at the server site (disk utilization ranges upwards of .93),
the workstation CPUs cannot contribute as much to the system average throughput.
Figure 10: Time Spent on the Network (SQS-U)
5.3 Effect of Model Parameters
In this section, we vary one by one some of the system parameters which have a significant
impact on the performance of the examined architectures, run the SQS-U experiment
and finally compare the obtained results with those of Figure 7. The parameters
that vary are:
Client disk access time set to 9 msec: This corresponds to a 40% reduction in average
access time. Figure 12 presents the throughput speedup achieved for the ECS experiments
using this new client access time over the ECS results depicted in Figure 6. Not
surprisingly, the new ECS performance values are increased by an average factor of 1.23 for
all curves, which represents very serious gains. The no-update curve indicates a constant
53% throughput rate improvement, while the rest of the curves indicate significant gains in the
range of 4 to 40 clients. For more than 40 clients the gains are insignificant and the explanation
for that is that the large number of updates (that increases linearly with the
number of submitting clients) imposes serious delays on the server. Naturally, the RU
and CS throughput values are not affected at all in this experiment.
Server disk access time set to 7 msec: We observe that all but one (8%) of the non-zero
update curves approach the 0% curve and the mtts appear much later, at about 40
stations. By providing a faster disk access time, server jobs are executed faster and cause
fewer restarts.
Figure 11: RU/CS and ECS/CS Throughput Speedups for LQS-U
Client CPU set to 110 MIPS: The results are very similar to those of Figure 6, with
the exception that we notice an average increase of 15.16 jobs/min in the throughput rate for
all update curves. This indicates that extremely fast processing workstations alone have
a moderate impact on the performance of the ECS architecture.
Server CPU set to 90 MIPS: We observe a shifting of all curves towards the top right
corner of the graph. This pushes the mtts further to the right and allows for more clients
to work simultaneously. There are two reasons for this behavior: 1) updates are mildly
consuming the CPU and therefore, by providing a faster CPU, they finish much faster,
and 2) a fast CPU results in lower lock contention which in turn leads to fewer deadlocks.
Server log processing is also carried out faster.
5.4 Other Experiments
In this section, we perform four more experiments to examine the role of "clean" update
workloads, specially designed QUSs so that only a small number of clients submits
updates, the effect of the continuous caching parameter cont caching perc in the ECS
model as well as the ability of all models to scale-up.
Figure 12: ECS Speedup for Client Disk Access Time at 9 msec over Client Disk Access Time at 15 msec
Pure Update Workload: In this experiment, the update rate (6%) remains constant
and the QUS are made up of only update jobs (pure update workload). Figure 13
gives the throughput rates for all configurations. The CS and ECS models show similar
curves shape-wise. For the range of 4-10 clients their throughput rates increase and
beyond that point under the influence of the extensive blocking delays and the created
deadlocks their performance declines considerably. At 56 clients the ECS achieved only
half of the throughput obtained than that when 8 clients when used. Overall, CS is
doing better than ECS because the latter has to also write into logs (that is extra disk
page accesses) at commit time. It is also interesting to note the almost linear decline in
the performance of the RU configuration. Since there is strictly one writer at a time in
this set of experiments all the jobs are sequenced by the MPL processor of the model
and are executed one at a time (due to the lack of readers). Concurrency assists both
CS and ECS clients in the lower range (4 to 8) to achieve better throughput rates than
their RU counterparts, but later the advantages of job concurrency (many writers at the
same time - the CS and ECS cases) diminish significantly.
Limited Number of Update Clients: In this experiment, a number of clients is
dedicated to updates only and the rest of the clients submit read-only requests. This simulates
environments in which there is a (large) number of read-only users and a constant number
of updaters. This class of databases includes systems such as those used in the stock
markets.
Figure 13: QUSs consist of Update Jobs Only (Pure Update Workload)
Figure 14 presents the throughput speedup results of the SQS-U experiment.
Note that three clients in each experiment are designated as writers and the remaining
clients query the database. The RU/CS curves remain at the same performance value levels as
those observed in Figure 7 with the exception that as the number of clients increases the
effect of the writers is amortized by the readers. Thus, all the non-zero update curves
converge to the 0% RU/CS curve for larger numbers of clients. The ECS/CS curves suggest
spectacular gains for the ECS configuration. The performance of the system increases
almost linearly with the number of participating clients for all the update curves. Note
as well that the mtts have disappeared from the graph. Naturally, the mtts appear
much later, when more than 56 clients are used in the configuration (not shown in the
figure).
Changing data locales for the ECS clients: So far, we have not considered the
case in which clients continuously demand new data from the server to be cached in
their disks. The goal of this experiment was to address this issue and try to quantify
the degradation in the ECS performance. To carry out this experiment, we use the
parameter cont caching perc (or ccp) that represents the percentage of new pages from
the relations involved in a query to be cached in the client disk. This parameter enabled
us to simulate the constantly changing client working set of data. Figure 15 shows the
0%, 2% and 6% curves of Figure 6 (where ccp is 0%) superimposed with the curves for
Throughput
Limited Number of Update Job Strems
Clients
0% RU/CS
6% ECS/CS
8% ECS/CS
4% ECS/CS
2% ECS/CS
0% ECS/CS
8% RU/CS
Figure
14: Throughput Speedup Rates for Three Writer Only Clients per Experiment
the same experiment (SQS-U) but with ccp equal to 1% and 2%. Taking into account
that every query requires a new part of the server data, this percentage contributes
to large numbers of additional disk accesses and augmented processing time on the
part of the server. This very fact makes the almost linear shape of the original 0%
curve disappear. More specifically, while 56 clients in the original experiment attain
1316.3 jobs/min, under ccp=1% they achieve 416.1 jobs/min and under ccp=2% they
accomplish only 206.6 jobs/min. The performance degradation for the heavy updating
QUS (i.e. 6%) is less noteworthy since there is already serious server blocking.
Ability to scale-up: We are also interested in the scale-up behavior of all architectures
under the presence of a large number of workstations. For this purpose, we ran an
experiment where the number of clients ranges from 4 to 120 per server. Figure 16
depicts the resulting curves for the SQS-U experiment. The graph indicates that beyond
the mtts region the speedup of the ECS over CS gradually decreases (after 40 clients).
For more update-intensive QUSs the speedup decline is less sharp than that of the 2%
curve. This deterioration is due to the saturation of the server's resources that the large
number of clients creates. Note, however, that even at 120 clients, the ECS architecture can
process between 6.1 and 19.9 times more jobs than the CS architecture (non-zero update
curves). The 0% update curve shows no saturation and continues to increase almost
linearly. The major reasons for this are that the performance of the CS configuration for
more than 56 clients remains stable in the range between 9.00 and 12.5 jobs/min (due to
the high utilization of its resources) and that the network shows no signs of extremely
heavy utilization. The gains for the RU configuration remain at the same levels as
those reported in the SQS-U experiment.
Figure 15: Experiments with Three Values for the ccp Parameter
6 Conclusions
We have presented, modeled and compared three contemporary DBMS architectures all
aiming for high throughput rates. The simulation results show the following characteristics:
- The RAD-UNIFY architecture gives an average 2.1 times performance improvement over the CS by utilizing the main memories and the CPUs of the clients.
- Although the ECS performance declines when the QUS queries-to-updates ratio decreases, it still offers serious speedup rates over the other two architectures. Under pure update workloads, the ECS gives the worst performance for larger numbers of clients.
- Under light update rates (1%-5%), the performance speedup of ECS over both CS and RU is almost proportional to the number of participating workstations in the
range of 4-32 for our experiments. For streams without updates, ECS/CS speedup
increases almost linearly with the number of workstations.
Figure 16: Model Scalability Experiment
- Under either heavier update rates or many concurrently imposed modifications, the ECS server becomes the system bottleneck. We have introduced the "maximum throughput thresholds" (mtts) to identify these bottleneck points with respect to each update rate.
- Faster workstation disk access time improves the performance of ECS.
- ECS performance is diminished significantly whenever clients constantly demand new data elements from the server.
- Database environments with few exclusive writers and a number of readers offer very good performance results for the ECS configuration.
- Under all but pure update workloads, the ECS architecture is more scalable than the other two.
Future work includes experimentation using local databases to the workstations,
exploring the behavior of the configurations under different server concurrency control
protocols and developing periodic update propagation strategies for bringing transaction
throughput close to the 0% update curves.
Acknowledgements: The authors are grateful to the anonymous referees for their valuable
comments and suggestions, as well as to Jennifer Carle and George Panagopoulos for
commenting on earlier versions of this paper.
--R
Models for Studying Concurrency Control Performance: Alternatives and Implications.
Data Caching Issues in an Information Retrieval System.
Cache Coherence Protocols: Evaluation Using a Multiprocessor Simulation Model.
A Methodology for Database System Performance Evaluation.
Data Caching Tradeoffs in Client-Server DBMS Architectures
Server Based Information Retrieval Systems Under Light Update Loads.
Performance and Scalability of Client-Server Database Architec- tures
A Study of Three Alternative Workstation- Server Architectures for Object-Oriented Database Systems
Computer Systems Performance Evaluation.
Granularity of Locks and Degrees of Consistency in a Shared Database
Performance Analysis of Several Back-End Database Architectures
Performance Considerations for an Operating System Transaction Manager.
Cooperative Object Buffer Management in the Advanced Information Management Prototype.
An Environment for Developing Fault-Tolerant Software
The Incremental Access Method of View Cache: Concept
Evaluation of an Enhanced Workstation-Server DBMS Architecture
Modern Client-Server DBMS Architectures
Principles and Techniques in the Design of ADMS
Benchmarking simple database operations.
A Highly Available File System for a Distributed Workstation Environment.
Differential Files: Their Application to the Maintenance of Large Databases.
Unix Networking Programming.
Architectures of Future Data Base Systems.
Database and Knowledge-Base Systems: Volume II - The New Technologies
Cache Consistency and Concurrency Control in a Client/Server DBMS Architecture.
Maintaining Consistency of Client-Cached Data
--TR
Performance analysis of several back-end database architectures
Cache coherence protocols: evaluation using a multiprocessor simulation model
Principles and techniques in the design of ADMS:.F:6WWp
Concurrency control performance modeling: alternatives and implications
Benchmarking simple database operations
Performance Considerations for an Operating System Transaction Manager
Coda
Data caching issues in an information retrieval system
A study of three alternative workstation server architectures for object-oriented database systems
Maintaining consistency of client-cached data
Architecture of future data base systems
UNIX network programming
An Environment for Developing Fault-Tolerant Software
An incremental access method for ViewCache
Data caching tradeoffs in client-server DBMS architectures
Cache consistency and concurrency control in a client/server DBMS architecture
Evaluation of an enhanced workstation-server DBMS architecture
Differential files
Principles of Database and Knowledge-Base Systems
A methodology for database system performance evaluation
Cooperative Object Buffer Management in the Advanced Information Management Prototype
Performance and Scalability of Client-Server Database Architectures
--CTR
Svend Frlund , Pankaj Garg, Design-time simulation of a large-scale, distributed object system, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.8 n.4, p.374-400, Oct. 1998
Vinay Kanitkar , Alex Delis, Time Constrained Push Strategies in Client-Server Databases, Distributed and Parallel Databases, v.9 n.1, p.5-38, January 1, 2001
Vinay Kanitkar , Alex Delis, Real-Time Processing in Client-Server Databases, IEEE Transactions on Computers, v.51 n.3, p.269-288, March 2002
Alexander Thomasian, Distributed Optimistic Concurrency Control Methods for High-Performance Transaction Processing, IEEE Transactions on Knowledge and Data Engineering, v.10 n.1, p.173-189, January 1998
Alex Delis , Nick Roussopoulos, Performance and Scalability of Client-Server Database Architectures, Proceedings of the 18th International Conference on Very Large Data Bases, p.610-623, August 23-27, 1992
Je-Ho Park , Vinay Kanitkar , Alex Delis, Logically Clustered Architectures for Networked Databases, Distributed and Parallel Databases, v.10 n.2, p.161-198, September 2001
Alex Delis , Nick Roussopoulos, Techniques for Update Handling in the Enhanced Client-Server DBMS, IEEE Transactions on Knowledge and Data Engineering, v.10 n.3, p.458-476, May 1998
Vinay Kanitkar , Alex Delis, Efficient processing of client transactions in real-time, Distributed and Parallel Databases, v.17 n.1, p.39-74, January 2005
Alfredo Goi , Arantza Illarramendi , Eduardo Mena , Jos Miguel Blanco, An Optimal Cache for a Federated Database System, Journal of Intelligent Information Systems, v.9 n.2, p.125-155, Sept./Oct. 1997 | DBMS architectures;software architecture configurations;RAD-UNIFY type;design rationales;database management systems;simulation models;performance evaluation;client-server;functional components;workstations;simulation results;local area networks;software engineering |
631036 | A Formal Analysis of the Fault-Detecting Ability of Testing Methods. | Several relationships between software testing criteria, each induced by a relation between the corresponding multisets of subdomains, are examined. The authors discuss whether for each relation R and each pair of criteria, C/sub 1/ and C/sub 2/, R(C/sub 1/, C/sub 2/) guarantees that C/sub 1/ is better at detecting faults than C/sub 2/ according to various probabilistic measures of fault-detecting ability. It is shown that the fact that C/sub 1/ subsumes C/sub 2/ does not guarantee that C/sub 1/ is better at detecting faults. Relations that strengthen the subsumption relation and that have more bearing on fault-detecting ability are introduced. | Introduction
Over the last several years, researchers have defined a wide variety of software testing
techniques, studied their properties, and built tools based on some of them. Recently,
there has been increasing interest in the question of how techniques compare to one
another in terms of their ability to expose faults.
Aspects of this problem have been addressed through experiments [2], simulations [1,
7], and analysis [9, 14]. While these approaches offer certain insights, the results lack
generality, either because of the inherent nature of experimentation and simulation, or
because of the assumptions underlying the simulations and analyses. In particular, those
simulations and analyses [1, 7, 14] investigated criteria that partition the input domain
of the program into disjoint subdomains. In this paper, we analyze the more realistic
situation in which the criteria divide the input domain into overlapping subdomains. We
Author's address: Computer Science Dept., Polytechnic University, 333 Jay St., Brooklyn, N.Y.
11201. Supported by NSF Grant CCR-8810287 and by the New York State Science and Technology
Foundation Center for Advanced Technology program.
y Author's address: Courant Institute of Mathematical Sciences, New York University, 251 Mercer
Street, New York, NY 10012. Supported by NSF grant CCR-8920701 and by NASA grant NAG-1-1238.
characterize various relationships between criteria, and show circumstances under which
we can conclude that test suites chosen using one criterion are more likely to expose
faults than test suites chosen using another. We also apply these results to compare
well-known control flow based and data flow based techniques.
Subsumption has frequently been used to compare criteria. Criterion C 1 subsumes
criterion C 2 if every test suite that satisfies C 1 also satisfies C 2 . In this paper, we show
that according to three probabilistic measures of fault-detecting ability, the fact that C 1
subsumes C 2 does not guarantee that C 1 is better at detecting faults than C 2 . These
measures are related to two different test data selection strategies. We also explore the
question of how the subsumes relation can be strengthened in order to obtain a relation
that has more bearing on fault-detecting ability. We define two natural relations, covers
and partitions, each of which is stronger than subsumes, and show that, in the worst case,
neither of them provides much additional insight into fault-detecting ability. Finally, we
define two variations of these relations, properly covers and properly partitions, and prove
that when C 1 properly covers or properly partitions C 2 , C 1 is guaranteed to be better at
detecting faults than C 2 when assessed using certain reasonable probabilistic measures.
1.1 Preliminary Definitions
A multi-set is a collection of objects in which duplicates may occur, or more formally,
a mapping from a set of objects to the non-negative integers. We shall delimit multi-sets
by curly braces and use set-theoretic operator symbols to denote the corresponding
multi-set operators, throughout. For a multi-set S1 to be a sub-multi-set of multi-set S2,
there must be at least as many copies of each element of S1 in S2 as there are in S1.
Therefore, {0, 1} ⊆ {0, 1, 1}, but it is not the case that {0, 1, 1} ⊆ {0, 1, 2}.
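This containment test is easy to state operationally; the sketch below uses multiplicity counts and is only an illustration, not part of the paper's formalism.

    from collections import Counter

    def sub_multiset(s1, s2):
        # True iff every element occurs in s2 at least as many times as in s1.
        c1, c2 = Counter(s1), Counter(s2)
        return all(c2[x] >= n for x, n in c1.items())

    print(sub_multiset([0, 1], [0, 1, 1]))      # True
    print(sub_multiset([0, 1, 1], [0, 1, 2]))   # False: only one copy of 1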
A test suite is a multi-set of test cases, each of which is a possible input to the
program. Many systematic approaches to testing are based on the idea of dividing the
input domain of the program into subsets, called subdomains, then requiring the test suite
to include elements from each subdomain. These techniques are sometimes referred to as
partition testing, but in fact, most of them subdivide the input domain into overlapping
subdomains, and thus do not form a true partition of the input domain. In this paper,
we will refer to such strategies as subdomain-based testing. In subdomain strategies
based on the program's structure, each subdomain consists of all elements that cause a
particular code element to be executed. For example, in branch testing, each subdomain
consists of all inputs that cause execution of a particular branch; in path testing, each
subdomain consists of all inputs that cause execution of a particular path; in data flow
testing using the all-uses criterion, each subdomain consists of all inputs that execute
any path from a particular definition of a variable v to a particular use of v without any
intervening redefinition of v. Other subdomain-based testing strategies include mutation
testing, in which each subdomain consists of all inputs that kill a particular mutant;
specification-based testing techniques, in which each subdomain consists of inputs that
satisfy a particular condition or combination of conditions mentioned in the specification;
and exhaustive testing, in which each subdomain consists of a single point. In most of
these strategies, most programs and specifications give rise to overlapping and duplicate
subdomains.
A test data adequacy criterion is a relation C ⊆ Test Suites × Programs × Specifications,
that, in practice, is used to determine whether a given test set T does a "thorough" job
of testing program P for specification S. If C(T, P, S) holds, we will say "T is adequate
for testing P for S according to C", or, more simply, "T is C-adequate for P and S".
A testing criterion C is subdomain-based if, for each program P and specification S,
there is a non-empty multi-set SD_C(P, S) of subdomains (subsets of the input domain),
such that C requires the selection of one or more test cases from each subdomain in
SD_C(P, S). If C requires the selection of more than one test case from a given subdomain,
they need not be distinct. In general, SD_C(P, S) is a multi-set, rather than a set, because
for some criteria it is possible for two different requirements to correspond to the same
subdomain. For example, the same set of test cases might cause the execution of two
different statements, and hence appear twice in the multi-set of subdomains arising from
the statement testing criterion. Depending on the details of the criterion and the test
selection strategy, it may then be necessary to choose test cases from such a subdomain
more than once. Note that since SDC is assumed to be non-empty and at least one test
case must be chosen from each subdomain, the empty test suite is not C-adequate for
any subdomain criterion.
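Operationally, adequacy for a subdomain-based criterion amounts to a covering check. The sketch below assumes one test case is required per subdomain occurrence and treats subdomains and test cases as abstract sets; it is not tied to any particular criterion.

    def is_adequate(test_suite, subdomains):
        # A non-empty collection of subdomains is satisfied when every subdomain
        # contains at least one chosen test case; an empty subdomain can never be hit.
        if not subdomains or not test_suite:
            return False
        return all(any(t in d for t in test_suite) for d in subdomains)

    # the same input may satisfy several (even duplicate) subdomains at once
    print(is_adequate({1, 5}, [{1, 2}, {1, 3}, {4, 5}]))   # True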
Subdomain-based criterion C is said to be applicable to (P, S) if and only if there exists
a test suite T such that C(T, P, S), and C is universally applicable if it is applicable
to (P, S) for every program/specification pair (P, S). Note that since the empty test suite
is not C-adequate, C is applicable to (P, S) if and only if the empty subdomain is not
an element of SD_C(P, S).
Throughout this paper all testing criteria discussed will be universally applicable
subdomain-based criteria, unless otherwise specifically noted. In fact, many of the criteria
that have been defined and discussed in the software testing literature are not
universally applicable. In path-based or structural criteria, this occurs because infeasible
paths through the program can lead to testing requirements that can never be fulfilled
(represented by empty subdomains belonging to SDC (P; S)), while in mutation testing
this occurs due to mutants that are equivalent to the original program. However, it
is the universally applicable analogs of those criteria (obtained by removing the empty
subdomain from SD_C(P, S)) that are actually usable in practice. It is important to note
that the relationship between the universally applicable analogs of criteria may be different
than that between the original criteria. For example, in [3] we defined universally
applicable analogs of the data flow testing criteria defined in [12, 13], and showed that
while the all-du-paths criterion subsumes the all-uses criterion, the universally applicable
analog of all-du-paths does not subsume the universally applicable analog of all-uses.
In order to illustrate some of our points with realistic examples, we review the definitions
of several well-known criteria. Since some of these criteria are based on program
structure, it is necessary at this point to make certain assumptions about the programs
under test. We will limit attention to programs that are written in Pascal with no goto
statements. 1 We will also assume that every program has at least one conditional or
repetitive statement, and that at least one variable occurs in every Boolean expression
controlling a conditional or repetitive statement in the program. We also require programs
to satisfy the No Feasible Anomalies (NFA) property: every feasible path from
the start node to a use of a variable v must pass through a node having a definition of
v. This is a reasonable property to require, since programs that do not satisfy NFA have
the possibility of referencing an undefined variable. Although the question of whether
a given program satisfies NFA is undecidable, it is possible to check the stronger, no
anomalies property (NA), which requires that every path from the start node to a use of
a variable v to pass through a node having a definition of v. Furthermore, NA can be
enforced by adding a dummy definition of each variable to the start node.
The all-edges criterion (also known as branch testing) requires that every edge in the
program's flow graph be executed by at least one test case. Data flow testing criteria [8,
10, 11, 13] require that the test data exercise paths from points at which variables are
defined to points at which their values are subsequently used. Occurrences of variables in
the program under test are classified as being either definitions, in which values are stored,
or uses, in which values are fetched. In [13] variable uses are further classified as being
either p-uses, i.e., uses occurring in the Boolean expression in a conditional statement,
or c-uses, i.e., uses occurring in assignment statements or arguments in a procedure
call. The all-uses criterion [13] requires that the test data cover every definition-use
association (dua) in the program, where a dua is a triple (d,u,v) such that d is a node
in the program's flow graph in which variable v is defined, u is a node or edge in which
v is used, and there is a definition-clear path with respect to v from d to u. 2 A test case t
covers dua (d,u,v) if t causes a definition-clear path with respect to v from d to u to
be executed. Similarly, the all-p-uses criterion [13] is a restricted version of all-uses that
only requires that the test data cover every (d,u,v) in which u is an edge with a p-use
of variable v. Precise definitions of the criteria for the subset of Pascal in question are
given in [3].
Notice that these criteria are not universally applicable; in fact, for many programs
the all-p-uses and all-uses criteria are not applicable because some dua is not executable.
For example, any program having a for loop whose bounds are non-equal constants has
an unexecutable dua involving the initialization of the counter variable and the edge
exiting the loop. Such a dua is unexecutable because the only executable path from the
definition to the use traverses the loop at least once, and in so doing, redefines the counter
variable [3]. In the sequel we will use the terms all-edges, all-p-uses, and all-uses to refer
to the universally applicable analogs of these criteria, unless otherwise specifically noted.
1 This assumption is used in the proof of Theorem 5, below.
2 A definition-clear path with respect to v is a path in which none of the nodes, except perhaps the
first and last, have a definition of variable v.
A program satisfies the no syntactic undefined p-uses property (NSUP) if for every
p-use of a variable v, there is a path from program entry to the p-use along which v is
defined. Rapps and Weyuker showed that for the class of programs that satisfies NSUP,
for the original (not universally applicable) criteria, all-p-uses subsumes all-edges. In [3],
we showed that a stronger restriction on the class of programs considered is needed to
make subsumption hold for the universally applicable analogs of the criteria. Namely,
we required the no feasible undefined p-uses property (NFUP), which requires that for
every p-use of a variable v, there is an executable path from program entry to the p-use
along which v is defined. Since any program that satisfies NFA also satisfies NFUP, for
the class of programs we are considering, all-uses subsumes all-edges.
Detecting Ability of Subdomain Criteria
In this section, we define three probabilistic measures of the fault-detecting ability of a
testing criterion. In [1, 7, 14] the probability that a test set generated to satisfy some
subdomain-based testing strategy would expose a fault was compared to the probability
that a randomly generated test set of the same size would expose a fault. In contrast, we
are interested in comparing the probability that test sets generated to satisfy different
subdomain-based strategies will detect at least one fault.
We will focus on two conceptually simple test selection strategies. The first requires
the tester to independently randomly select test cases from the domain until the adequacy
criterion has been satisfied. The second strategy assumes that the domain has
first been divided into subdomains and then requires independent random selection of a
predetermined number, n, of test cases from each subdomain. We assume throughout
that the random selection is done using a uniform distribution. These strategies, Sel 1
and Sel 2 , are shown in Figure 1.
Our analytical results on the fault-detecting ability of subdomain criteria are based
on the assumption that test suites are selected using Sel 1 or Sel 2 . In practice it is rare
to use either of these strategies, but rather something closer to a combination of the two.
Typically a tester begins by selecting some test cases. Sometimes these test cases are
chosen because the tester believes they have some particular significance, and sometimes
they are chosen in a more ad hoc fashion. After some amount of testing, the tester
checks to see which features of the program still need to be exercised in order to satisfy
the criterion and tries to select test cases accordingly. This corresponds to selecting test
cases from the appropriate subdomains in a manner similar to the second strategy. Thus,
for example, one might select an all-edges adequate test suite by generating an initial
test suite, checking to see which edges had not yet been covered, and then selecting test
cases specifically aimed at covering each of the edges that remain unexercised.
Note that when Sel 1 is used, a test case that lies in the intersection of two or more
subdomains may "count" toward each subdomain, while when using Sel 2 it will not.
Sel 1 (C):
   T := empty test suite;
   repeat
      t := a uniformly randomly selected element of the input domain;
      add t to T
   until T is C-adequate

Sel 2 (n):
   T := empty test suite;
   for each subdomain D in SDC (P,S) do
      begin
      for i := 1 to n do
         begin
         t := a uniformly randomly selected element of D;
         add t to T
         end
      end

Figure 1: Two test suite selection strategies
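As an illustration (not part of the original presentation), the two strategies can be sketched in Python as follows; the names domain, subdomains and is_adequate are placeholders for whatever the criterion C and the program/specification pair supply, and uniform random selection is assumed.

import random

def sel1(domain, is_adequate):
    # Sel 1: draw uniformly from the whole input domain until the
    # accumulated test suite is C-adequate.
    T = []
    while not is_adequate(T):
        T.append(random.choice(domain))
    return T

def sel2(subdomains, n):
    # Sel 2(n): draw n test cases independently and uniformly from
    # each subdomain in the multi-set SD_C(P,S).
    T = []
    for D in subdomains:              # duplicates in the multi-set are visited again
        for _ in range(n):
            T.append(random.choice(sorted(D)))
    return T

# Hypothetical example: domain {-3,...,3}; C requires one zero and one non-zero test case.
domain = list(range(-3, 4))
subdomains = [{0}, {x for x in domain if x != 0}]
adequate = lambda T: any(t == 0 for t in T) and any(t != 0 for t in T)
print(sel1(domain, adequate))
print(sel2(subdomains, 2))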
Thus, Sel 1 may lead to selection of substantially smaller test suites than Sel 2 (1).
Also note that if the multi-set SDC (P,S) contains m copies of some subdomain D,
then Sel 2 (n) requires that n test cases be selected independently from D m different
times, for a total of m \Theta n test cases from D. This can make Sel 2 sensitive to details of
the definitions of the testing criterion. For example, the number of all-edges subdomains
for a given program P will depend on details of the construction of the P's flow graph,
such as whether or not an extra exit node is added to the graph. One could avoid such
problems by assuming that duplicate elements of SDC (P; S) are removed before test
case selection. However, in practice one usually works with predicates describing the
subdomains, rather than with explicit lists of their elements. Since it is undecidable
whether two predicates are equivalent, it is not always possible to identify duplicate
subdomains. Consequently, we do not assume that duplicates have been eliminated.
Obviously there are many reasonable measures of fault-detecting ability that could
be defined. Our measures are related to the probability that a test suite will expose
a fault. Following the notation used in [14], given a program P, specification S, and
criterion C with SDC (P,S) = {D 1 , ..., D k }, we let d i = |D i | and we let m i denote the
number of failure-causing inputs in D i , for i = 1, ..., k.
As Weyuker and Jeng [14] have noted, the fault-detecting ability of a criterion is
related to the extent to which failure-causing inputs are concentrated by its subdomains.
The first measure we consider is
M 1 (C,P,S) = max { m i /d i : 1 <= i <= k }.
Let D max denote a subdomain that has the highest concentration of failure-causing inputs,
i.e., a subdomain for which m i /d i is maximal. Since we are considering subdomain-based
criteria, any C-adequate test set contains at least one test case from D max . When using
Sel 1 or Sel 2 , each element of D max is equally likely to be selected to "represent" D max ,
and therefore M 1 gives a crude lower bound on the probability that a C-adequate test
suite selected using Sel 1 or Sel 2 will expose at least one fault.
The second measure we consider is
M 2 (C,P,S) = 1 - (1 - m 1 /d 1 )(1 - m 2 /d 2 ) ... (1 - m k /d k ).
M 2 gives the exact probability that a test set chosen using Sel 2 (1) will expose at least
one fault. Duran and Ntafos [1] and Hamlet and Taylor [7] performed simulations investigating
whether hypothetical partition testing strategies (i.e., subdomain-based testing
strategies in which the subdomains are pairwise disjoint) are more likely to expose
faults than random testing according to this measure. Weyuker and Jeng investigated
similar questions analytically [14]. In this paper, we investigate the more realistic
case of subdomain-based criteria in which the subdomains may intersect, and compare
subdomain-based criteria to one another rather than to random testing.
One problem with using M 2 as a measure is that one subdomain-based criterion C 1
may divide the domain D into k 1 subdomains while another criterion C 2 divides D into
k 2 subdomains, where k 1 > k 2 . Then, in a sense, M 2 gives C 1 an unfair advantage since
C 1 requires k 1 test cases while C 2 only requires k 2 , and hence M 2 is comparing the
fault-detecting qualities of different sized test suites. In order to correct this imbalance,
we first define a measure that is a generalization of M 2 . Instead of always selecting one
test case per subdomain, n test cases are chosen per subdomain. We call this measure M 3 :
M 3 (C,P,S,n) = 1 - (1 - m 1 /d 1 )^n (1 - m 2 /d 2 )^n ... (1 - m k /d k )^n.
Of course this measure still has the deficiency cited above that different numbers of
test cases might be needed to satisfy C 1 and C 2 . The reason M 3 is a useful generalization
of M 2 is that it allows us to adjust for the test suite size differences. Letting k 1 and
k 2 denote the number of subdomains of C 1 and C 2 , respectively, for program P and
specification S, we can compensate for the inequality in test set
size by comparing M 3 (C 1 ,P,S,k 2 ) with M 3 (C 2 ,P,S,k 1 ), so that each comparison involves k 1 k 2 test cases in total.
M 3 is actually a special case of the measure called P p in [1, 7, 14]. In those papers,
a strategy could select a different number of test cases for each subdomain, whereas M 3
requires the same number to be selected from each subdomain.
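The three measures are straightforward to compute once the pairs (m i , d i ) are known. The following Python sketch is an illustration added here (not taken from the paper) and assumes the pairs are given explicitly.

from math import prod

def measures(md_pairs, n=1):
    """md_pairs: list of (m_i, d_i) for the subdomains D_1..D_k of SD_C(P,S),
    where m_i is the number of failure-causing inputs in D_i and d_i = |D_i|."""
    rates = [m / d for m, d in md_pairs]
    M1 = max(rates)                                # best single-subdomain concentration
    M2 = 1 - prod(1 - r for r in rates)            # P(at least one failure) under Sel 2(1)
    M3 = 1 - prod((1 - r) ** n for r in rates)     # same, with n test cases per subdomain
    return M1, M2, M3

# Example: two overlapping subdomains with failure rates 1/4 and 1/2.
print(measures([(1, 4), (2, 4)], n=2))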
3 Relations between criteria
In this section, we define five relations between criteria: the narrows, covers, partitions,
properly covers, and properly partitions relations. We compare these relations to one
another and to the subsumes relation, illustrate the relations with examples, and examine
what knowing that C 1 narrows, covers, partitions, properly covers, or properly partitions
C 2 for program P and specification S tells us about the relative values of each measure
of fault-detecting ability. Specifically, we investigate whether, for each relation R, the fact
that C 1 R C 2 for program P and specification S guarantees that M i (C 1 ,P,S) >= M i (C 2 ,P,S)
for each measure M i . The resulting theorems are summarized in Table 1.
                                M 1         M 2         M 3
  C 1 narrows C 2             sometimes   sometimes   sometimes
  C 1 covers C 2              sometimes   sometimes   sometimes
  C 1 partitions C 2          always      sometimes   sometimes
  C 1 properly covers C 2     sometimes   always      sometimes
  C 1 properly partitions C 2 always      always      sometimes

Table 1: Summary of Results (does C 1 R C 2 for (P,S) guarantee M i (C 1 ,P,S) >= M i (C 2 ,P,S)?)

Many relations that have been previously studied in connection with fault-detecting
ability depend on the distribution of failure-causing inputs. These include Gourlay's
power relation [6] and the better and probbetter relations proposed by Weyuker, Weiss,
and Hamlet [15]. In contrast, each of the relations between criteria introduced here is
induced by a relation between the corresponding multi-sets of subdomains. Thus, for
these relations the question of whether specification
depends purely on the way the criteria divide the input domain into subdomains, and
not on such factors as the way failure-causing inputs are distributed in the input domain.
Consequently, the results here are very general. If we prove that C 1 R C 2 for (P,S) implies
that C 1 is at least as good as C 2 according to some probabilistic measure, and if we
prove that C 1 R C 2 holds for all programs and specifications in some class, then we are
guaranteed that C 1 is at least as good as C 2 for testing programs in that class, regardless
of which particular faults occur in the program.
3.1 The narrows relation
C 1 narrows C 2 for (P,S) if for every subdomain
D ∈ SDC2 (P,S) there is a subdomain D' ∈ SDC1 (P,S) such that D' ⊆ D. This notion
is known as refinement in point set topology. C 1 universally narrows C 2 if for every
program, specification pair (P,S), C 1 narrows C 2 for (P,S).
Observation 1 For each program P and specification S, the narrows relation is reflexive
and transitive. If SDC2 (P,S) ⊆ SDC1 (P,S), then C 1 narrows C 2 for (P,S).
The next theorem shows that the narrows relation is closely related to the subsumes
relation. However, the question of whether C 1 narrows C 2 deals only with the relationships
between the multi-sets of subdomains induced by the two criteria, while the question
of whether C 1 subsumes C 2 may involve additional considerations, such as the number of
test cases from each subdomain required by each criterion. For example, consider criteria
such that for any (P,S), C 1 and C 2 give rise to the same set of subdomains,
but C 2 requires selection of two test cases from each subdomain whereas C 1 only requires
selection of one test case from each subdomain. Trivially, C 1 universally narrows C 2 .
However, C 1 does not subsume C 2 , since a test suite consisting of one element from each
subdomain is C 1 -adequate but not C 2 -adequate. As the next theorem shows, as long as
the criteria require selection of at least one element from each subdomain (as opposed to
requiring selection of at least k elements for some k > 1) subsumption is equivalent to
the universally narrows relation.
Theorem 1 Let C 1 and C 2 be subdomain-based criteria, each of which explicitly requires
selection of at least one test case from each subdomain. Then C 1 subsumes C 2 if and only
if C 1 universally narrows C 2 .
Proof:
Assume C 1 universally narrows C 2 . Let T be a test suite that is C 1 -adequate for some
program P and specification S. T contains at least one element from each subdomain in
SDC1 (P,S). Thus, since each subdomain in SDC2 (P,S) is a superset of some subdomain
belonging to SDC1 (P,S), T contains at least one element from each subdomain in
SDC2 (P,S), and hence T is C 2 -adequate. So C 1 subsumes C 2 .
Conversely, assume C 1 does not universally narrow C 2 . There exists a program P and
a specification S such that some subdomain D ∈ SDC2 (P,S) is not a superset of any
subdomain of SDC1 (P,S). Thus, for each D' ∈ SDC1 (P,S), D' - D is non-empty. Let T be a
test suite obtained by selecting one element from D' - D for each D' ∈ SDC1 (P,S). Then T
is C 1 -adequate but not C 2 -adequate for (P,S). So C 1 does not subsume C 2 . 2
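Representing subdomains as finite sets, the narrows relation can be checked directly. The following Python sketch is illustrative only; the subdomains D 1 , ..., D 4 are those used in the proof of Theorem 2 below, with N = 3 chosen arbitrarily.

def narrows(SD1, SD2):
    # C1 narrows C2 for (P,S): every subdomain D in SD_C2(P,S) contains some
    # subdomain D' taken from SD_C1(P,S).
    return all(any(Dp <= D for Dp in SD1) for D in SD2)

# Subdomains of the criteria used in the proof of Theorem 2 below (domain -N..N, N = 3).
N = 3
dom = frozenset(range(-N, N + 1))
D1, D2 = frozenset({0}), frozenset(x for x in dom if x != 0)
D3, D4 = frozenset(x for x in dom if x >= 0), frozenset(x for x in dom if x <= 0)
print(narrows([D1, D2], [D3, D4]))   # True: D1 is contained in both D3 and D4
print(narrows([D3, D4], [D1, D2]))   # False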
The next group of theorems establishes that the fact that a criterion C 1 narrows criterion
C 2 does not guarantee that C 1 is better able to detect faults than C 2 , according to any
of our three measures.
Theorem 2 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 narrows C 2 for (P,S), but M 1 (C 1 ,P,S) < M 1 (C 2 ,P,S).
Proof:
Let P be a program whose input domain is the integers between -N and N, where
N > 1. Let C 1 be the criterion that requires selection of at least one test case that is
zero and at least one test case that is non-zero, so SDC1 (P,S) = {D 1 , D 2 } with D 1 = {0}
and D 2 = {x : x ≠ 0}. Let C 2 be the criterion that requires
selection of at least one test case that is greater than or equal to zero and at least one
test case that is less than or equal to zero, so SDC2 (P,S) = {D 3 , D 4 } with D 3 = {x : x >= 0}
and D 4 = {x : x <= 0}. Since D 1 ⊆ D 3 and D 1 ⊆ D 4 , C 1 narrows C 2 for (P,S).
Suppose the only failure-causing inputs are 1 and 2;
that is, suppose P is correct on all inputs except 1 and 2. Then
M 1 (C 1 ,P,S) = max(0/1, 2/2N) = 1/N < 2/(N+1) = M 1 (C 2 ,P,S). 2

Theorem 3 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 narrows C 2 for (P,S), but M 2 (C 1 ,P,S) < M 2 (C 2 ,P,S).
Proof:
Let P, C 1 , and C 2 be as in the proof of Theorem 2. Let S be a specification with respect to
which the failure-causing inputs are exactly the positive inputs 1, ..., N.
That is, P is incorrect on all inputs except 0, -1, ..., -N. Then
each d i , and m 1 , are as above, but m 2 = m 3 = N and m 4 = 0, so
M 2 (C 1 ,P,S) = 1 - (1 - 0/1)(1 - N/2N) = 1/2 < N/(N+1) = M 2 (C 2 ,P,S). 2

Theorem 4 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 narrows C 2 for (P,S), but M 3 (C 1 ,P,S,n) < M 3 (C 2 ,P,S,n) for every n >= 1.
Proof:
Let P, S, C 1 , and C 2 be as in Theorem 3. Since |SDC1 (P,S)| = |SDC2 (P,S)| = 2, the
size-adjusted comparison uses the same number n of test cases per subdomain for both
criteria, and M 3 (C,P,S,n) = 1 - (1 - M 2 (C,P,S))^n is increasing in M 2 (C,P,S); hence
the result follows from Theorem 3 (in particular, taking n = 1 gives M 3 (C i ,P,S,1) = M 2 (C i ,P,S)). 2

Corollary 1 For each measure M i , there exist criteria C 1 and C 2 , a program P and
specification S such that C 1 subsumes C 2 , but M i (C 1 ,P,S) < M i (C 2 ,P,S).
3.2 The covers relation
The next relation we examine, the covers relation, strengthens the narrows relation in a
natural way. The covers relation is interesting to examine because many criteria that had
previously been shown to subsume the all-edges criterion actually also cover all-edges.
C 1 covers C 2 for (P,S) if for every subdomain
D ∈ SDC2 (P,S) there is a non-empty collection of subdomains {D 1 , ..., D n } belonging
to SDC1 (P,S) such that D = D 1 ∪ ... ∪ D n . C 1 universally covers C 2 if for every program,
specification pair (P,S), C 1 covers C 2 for (P,S).
Observation 2 For each program P and specification S, the covers relation is reflexive
and transitive. If C 1 covers C 2 for (P,S) then C 1 narrows C 2 for (P,S). If SDC2 (P,S) ⊆ SDC1 (P,S),
then C 1 covers C 2 for (P,S).
The following example illustrates the distinction between the narrows and covers
relations.
Example 1:
Consider the criteria in the proof of Theorem 2. Since D 1 ⊆ D 3 and D 1 ⊆ D 4 , C 1 narrows
C 2 . However, since D 3 ≠ D 1 , D 3 ≠ D 2 , and D 3 ≠ (D 1 ∪ D 2 ), C 1 does not cover C 2 . 2
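A brute-force check of the covers relation follows the definition literally: for each subdomain of C 2 , search for a non-empty collection of C 1 subdomains whose union equals it. The Python sketch below is illustrative only (exponential in the number of candidate subdomains) and reuses the sets from Example 1.

from itertools import combinations

def covers(SD1, SD2):
    # C1 covers C2: every D in SD_C2 equals the union of some non-empty
    # collection of subdomains drawn from SD_C1.
    def coverable(D):
        parts = [Dp for Dp in SD1 if Dp <= D]      # only subsets of D can contribute
        for r in range(1, len(parts) + 1):
            for combo in combinations(parts, r):
                if frozenset().union(*combo) == D:
                    return True
        return False
    return all(coverable(D) for D in SD2)

# Example 1 revisited: C1 narrows C2 but does not cover it.
N = 3
dom = frozenset(range(-N, N + 1))
D1, D2 = frozenset({0}), frozenset(x for x in dom if x != 0)
D3, D4 = frozenset(x for x in dom if x >= 0), frozenset(x for x in dom if x <= 0)
print(covers([D1, D2], [D3, D4]))   # False, as argued in Example 1
print(covers([D3, D4], [dom]))      # True: D3 union D4 is the whole domain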
We next show that the all-p-uses and all-uses criteria universally cover the all-edges
criterion.
Theorem 5 The all-p-uses criterion universally covers the all-edges criterion.
Proof:
Let P be a program, let e be an executable edge in P, and let D e be the subdomain
corresponding to e, that is, the set of inputs that execute paths that cover edge e.
Case 1: e has a p-use of some variable v. Let δ 1 , ..., δ k be the definitions of v for
which there is a feasible definition-clear path with respect to v from δ i to e. For each
i, let D i be the subdomain {t | t covers dua (δ i , e, v)}. Since the program
satisfies the NFA property, every feasible path from the start node to e passes through
at least one of the δ i , so D e = D 1 ∪ ... ∪ D k .
Case 2: e does not have a p-use of any variable. Then, since P has no goto statements,
there are edges e 1 , ..., e m that do have p-uses of variables, such that D e = D e1 ∪ ... ∪ D em . To
see this, let s be the innermost nested conditional or repetitive statement such that e is
in the subgraph corresponding to s. If s is an if-then, if-then-else, or case statement
then e 1 is the edge going from the decision node toward e; if s is a while or
for statement then e 1 is the edge entering the loop; if s is a repeat statement
then e 1 and e 2 are the back-edge of the loop and the edge exiting the loop.
If e is not contained in any conditional or repetitive statement, e 1 , ..., e m are the edges
out of the first decision node encountered on a path beginning at the start node. By case
1, each D ei is a union of all-p-uses subdomains; thus the union of all of these all-p-uses
subdomains equals D e . 2
Corollary 2 The all-uses criterion universally covers the all-edges criterion.
Proof:
For any program P and specification S, SD all-p-uses (P,S) ⊆ SD all-uses (P,S), so by Observation
2, all-uses covers all-p-uses. Since covers is a transitive relation (Observation 2) and
all-p-uses covers all-edges (Theorem 5), all-uses covers all-edges. 2
The next group of theorems shows that the fact that C 1 covers C 2 does not guarantee
that C 1 is better than C 2 according to any of the measures.
Theorem 6 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 covers C 2 for (P,S), but M 1 (C 1 ,P,S) < M 1 (C 2 ,P,S).
Proof:
Let P be a program whose input domain is the integers between -N and N, where N >= 1,
let D 1 be the whole input domain of P, let D 2 = {x : x <= 0} and D 3 = {x : x >= 0}, and
let C 1 and C 2 be the criteria whose subdomains are SDC1 (P,S) = {D 2 , D 3 } and
SDC2 (P,S) = {D 1 }. Since D 1 = D 2 ∪ D 3 , C 1 covers C 2 . Let S be a specification such that the only
failure-causing inputs are -N and N. Then
M 1 (C 1 ,P,S) = 1/(N+1) < 2/(2N+1) = M 1 (C 2 ,P,S). 2
Note that in the proof of Theorem 6, the fact that the failure-causing inputs lie outside
the intersection of D 2 and D 3 contributes to the fact that D 1 has a higher concentration
of failure-causing inputs than D 2 or D 3 . We next show that this phenomenon can
occur with "real" adequacy criteria, by exhibiting a program and specification for which
M 1 (all-p-uses,P,S) < M 1 (all-edges,P,S), even though all-p-uses covers all-edges.
Example 2:
Consider the program shown in Figure 2. 3 Assume that the inputs x and y lie in the
range 1..N. The executable edges and executable definition-
p-use associations are shown in column 2 of Table 2. Their corresponding subdomains
are shown in column 3 of Table 2.
Assume that 3 <= MAX <= N. Notice that the final value of y computed by P is
y+1 if y >= MAX-1, and MAX otherwise.
On any input (x,y) such that y <= MAX-2, the path taken will go through either node
4 or node 5 on the first traversal of the loop (depending on the parity of x+y), and then
will alternate between going through node 4 and going through node 5 on subsequent
traversals of the loop. Consequently, among all the inputs that cause execution of edge
(6,3), all of them except those in which y = MAX-2 cover both of the definition-use
associations (4,(6,3),y) and (5,(6,3),y).
We now manufacture a specification for which the program will fail on precisely
those inputs that lie within the subdomain corresponding to edge (6,3), but outside the
intersection of subdomains corresponding to (4,(6,3),y) and (5,(6,3),y), i.e., on exactly the
inputs with y = MAX-2.
3 Note that the fact that the program has the same statement in both branches of the if-then-else
statement is merely a convenience which makes it easy to achieve the desired behavior. It is certainly
possible to devise a program without this characteristic which also has the desired behavior.
[Table 2: Subdomains of the program shown in Figure 2 — one row for each of the eight
executable edges (rows a-h) and each of the ten executable definition-p-use associations
(rows i-r), giving the corresponding subdomain as a predicate on the input (x,y) together
with its d and m values; the detailed entries are not reproduced here. The surviving rows
include row a, edge (1,2), whose subdomain is the whole input domain, and row l,
dua (2,(3,5),y), whose subdomain is a parity condition on y.]
program P(input,output);
const
   MAX = ...;   { a constant with 3 <= MAX <= N }
var
   x, y : integer;
begin
-2-   read(x, y);
-3-   repeat if odd(y+x)
-4-      then y := y + 1
-5-      else y := y + 1;
-6-   until (y >= MAX);
-7-   write(y)
end.
Figure 2: Program for which M 1 (all-p-uses,P,S) < M 1 (all-edges,P,S)
The specification S agrees with the value computed by P on every input except those
with y = MAX-2, so the failure-causing inputs are exactly the inputs with y = MAX-2.
The subdomains for the all-edges criterion are shown in the first eight rows of Table 2,
while those for the all-p-uses criterion are shown in the last ten rows. Computing the
concentrations m/d from the table shows that M 1 (all-p-uses,P,S) < M 1 (all-edges,P,S).
Note that this last inequality holds since MAX >= 3 by assumption. 2
We next investigate the relationship between covering and measure M 2 .
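Before moving on, the loop behaviour used in Example 2 can be checked by direct simulation. The following Python sketch assumes the reconstructed program above (both branches increment y) and arbitrary illustrative values MAX = 5, N = 8, x = 2; it reports, for each input y that executes the back-edge (6,3), which of nodes 4 and 5 defined y immediately before a traversal of that edge.

def run(x, y, MAX):
    # Simulate the repeat loop of the Figure 2 program, recording which of
    # node 4 (odd x+y) / node 5 (even x+y) defined y just before each
    # traversal of the back-edge (6,3).
    defs_before_backedge = set()
    while True:
        node = 4 if (y + x) % 2 == 1 else 5
        y += 1
        if y >= MAX:
            return defs_before_backedge            # exit via edge (6,7)
        defs_before_backedge.add(node)             # back-edge (6,3) taken

MAX, N = 5, 8
for y0 in range(1, N + 1):
    covered = run(2, y0, MAX)
    if covered:                                    # edge (6,3) was executed
        print(y0, sorted(covered))
# Only y0 = MAX - 2 executes edge (6,3) without covering both duas.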
Theorem 7 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 covers C 2 for (P,S), but M 2 (C 1 ,P,S) < M 2 (C 2 ,P,S).

Figure 3: Program illustrating M 2 (all-uses,P,S) < M 2 (all-edges,P,S)

Proof:
By Corollary 2, it suffices to exhibit a program P and specification S for which
M 2 (all-uses,P,S) < M 2 (all-edges,P,S). Consider the program P whose flow graph is shown
in Figure 3. Assume that the input domain is {x | 0 <= x < 200}, that c is a constant, and
that the specification S is such that the set of failure-causing inputs is {0, 197, 198, 199}.
The subdomains arising from the all-edges and all-uses criteria and the corresponding
values of m and d are as shown in Table 3. Notice that each criterion induces nine subdomains
for this program, and in each case two of the subdomains contain no failure-causing
inputs.
[Table 3: Subdomains of the program shown in Figure 3 — one row for each all-edges
subdomain and each all-uses definition-use association, giving the corresponding values
of m and d; the detailed entries are not reproduced here.]
Computing M 2 from the values of m and d in Table 3 shows that
M 2 (all-uses,P,S) < M 2 (all-edges,P,S). 2
The following corollary follows from the fact that the number of all-edges subdomains
is equal to the number of all-uses subdomains in the program in Figure 3.
Corollary 3 There exists a program P, specification S, and criteria C 1 and C 2 such
that C 1 covers C 2 for (P,S), but M 3 (C 1 ,P,S,n) < M 3 (C 2 ,P,S,n) for every n >= 1.
3.3 The partitions relation
C 1 partitions C 2 for (P,S) if for every subdomain
D ∈ SDC2 (P,S) there is a non-empty collection {D 1 , ..., D n } of pairwise disjoint subdomains
belonging to SDC1 (P,S) such that D = D 1 ∪ ... ∪ D n . C 1 universally partitions C 2 if
for every program/specification pair (P,S), C 1 partitions C 2 for (P,S). Note that the
term partition is used here in the strict mathematical sense.
Observation 3 For each program P and specification S, the partitions relation is reflexive
and transitive. If C 1 partitions C 2 for (P,S) then C 1 covers C 2 for (P,S). If
SDC2 (P,S) ⊆ SDC1 (P,S), then C 1 partitions C 2 for (P,S).
The next example illustrates the distinction between the covers and partitions relations.
Example 3:
Consider again the program whose flow graph is shown in Figure 2. The executable
edges and executable definition-p-use associations and the corresponding subdomains
are shown in Table 2.
Recall that the all-p-uses criterion universally covers the all-edges criterion. To see
that all-p-uses does not partition all-edges for this program, consider edge (6,3) and its
corresponding subdomain D g . The only elements of SD all-p-uses that are contained in D g are D m = D n and D o ,
since all the other subdomains contain inputs in which y >= MAX-1. Since
D n and D o have a non-empty intersection, and neither is equal to D g , all-p-uses does not
partition all-edges for this program.
The executable definition-c-use associations for this program are shown in Table 4.
Note that for each definition-c-use association except (1,2,input), there is a definition-p-
use association with the same subdomain. Consequently, the above argument also shows
that for this program all-uses does not partition all-edges since the set of definition-use
associations for the all-uses criterion is simply the union of the sets of associations for
all-p-uses and all-c-uses. 2
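The partitions relation strengthens the covers check by requiring the covering collection to be pairwise disjoint. The Python sketch below is illustrative only; the toy subdomains are chosen to mirror the distinction and are not taken from the paper.

from itertools import combinations

def partitions(SD1, SD2):
    # C1 partitions C2: every D in SD_C2 is a union of a non-empty,
    # pairwise-disjoint collection of subdomains from SD_C1.
    def ok(D):
        parts = [Dp for Dp in SD1 if Dp <= D]
        for r in range(1, len(parts) + 1):
            for combo in combinations(parts, r):
                union = frozenset().union(*combo)
                disjoint = sum(len(c) for c in combo) == len(union)
                if disjoint and union == D:
                    return True
        return False
    return all(ok(D) for D in SD2)

# Toy check: {0},{1,2},{3} partitions {{0,1,2},{1,2,3}}, while the overlapping
# pieces {0,1},{1,2},{2,3} merely cover it.
A = [frozenset({0}), frozenset({1, 2}), frozenset({3})]
B = [frozenset({0, 1}), frozenset({1, 2}), frozenset({2, 3})]
C = [frozenset({0, 1, 2}), frozenset({1, 2, 3})]
print(partitions(A, C), partitions(B, C))   # True False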
Next, we examine the relationship between partitioning and fault exposing ability, as
assessed by the measures M 1 , M 2 , and M 3 .
id   def-c-use assoc.   equivalent def-p-use assoc.
x    (4,7,y)            (4,(6,7),y) [q]
y    (5,7,y)            (5,(6,7),y) [r]

Table 4: Definition-c-use associations of the program shown in Figure 2 (the rows for the
remaining associations, including (1,2,input), are not reproduced here)
Theorem 8 If C 1 partitions C 2 for program P and specification S then M 1 (C 1 ,P,S) >= M 1 (C 2 ,P,S).
Proof:
Let D be an element of SDC2 (P,S) whose concentration m/d of failure-causing inputs is
maximal, so that M 1 (C 2 ,P,S) = m/d, and let D 1 , ..., D n be disjoint subdomains belonging to SDC1 (P,S)
such that D = D 1 ∪ ... ∪ D n . Since the D i are pairwise disjoint and their union is D,
m = m 1 + ... + m n and d = d 1 + ... + d n , so m/d is a weighted average of the m i /d i and hence
m/d <= max i (m i /d i ) <= M 1 (C 1 ,P,S), and therefore M 1 (C 1 ,P,S) >= M 1 (C 2 ,P,S). 2
Theorem 9 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 partitions C 2 for (P,S), but M 2 (C 1 ,P,S) < M 2 (C 2 ,P,S).
Proof:
Consider again the program and specification in the proof of Theorem 7, for which
M 2 (all-uses,P,S) was shown to be less than M 2 (all-edges,P,S). All-uses partitions all-
edges for this program, since the only element of SD all-edges that does not belong to
SD all-uses is {x | 0 <= x < 200}, which is equal to a union of disjoint elements of SD all-uses . 2
Corollary 4 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 partitions C 2 for (P,S), but M 3 (C 1 ,P,S,n) < M 3 (C 2 ,P,S,n) for every n >= 1.
3.4 The properly covers relation
We have seen that the fact that C 1 covers C 2 does not guarantee that C 1 is better at
detecting faults according to M 2 . However, when an additional constraint on the nature
of the covering is added, the situation improves.
Let SDC1 (P,S) = {D 1 1 , ..., D 1 m }, and let SDC2 (P,S) = {D 2 1 , ..., D 2 n }. C 1
properly covers C 2 for (P,S) if there is a multi-set
M = {D 1 1;1 , ..., D 1 1;k1 , ..., D 1 n;1 , ..., D 1 n;kn }
such that M ⊆ SDC1 (P,S) (as multi-sets) and
D 2 i = D 1 i;1 ∪ ... ∪ D 1 i;ki for each i = 1, ..., n.
Note that the number of occurrences of any subdomain D 1 i;j in the above expression is
less than or equal to the number of occurrences of that subdomain in the multi-set SDC1 (P,S).
C 1 universally properly covers C 2 if for every program P and specification S, C 1 properly
covers C 2 for (P,S).
Observation 4 The properly covers relation is reflexive and transitive. If C 1 properly
covers C 2 for (P,S) then C 1 covers C 2 for (P,S). If SDC2 (P,S) ⊆ SDC1 (P,S), then C 1
properly covers C 2 for (P,S). If the elements of SDC2 (P,S) are disjoint (i.e., if C 2 induces
a true partition of the input domain), then C 1 covers C 2 if and only if C 1 properly covers C 2 .
The next example illustrates the distinction between the covers and properly covers
relations.
Example 4:
Consider a program P with integer input domain {x | 0 <= x <= 3}, and criteria C 1 and C 2
such that SDC1 (P,S) = {D a , D b , D c } and SDC2 (P,S) = {D d , D e }, where
D a = {0}, D b = {1,2}, D c = {3}, D d = {0,1,2} and D e = {1,2,3}. Then D d = D a ∪ D b and D e = D b ∪ D c ,
so C 1 covers (in fact partitions) C 2 . However C 1 does not properly cover C 2 because the
subdomain D b is needed in the coverings of both D d and D e , but only occurs once in the
multi-set SDC1 (P,S).
On the other hand consider criterion C 3 where SDC3 (P,S) = {D a , D b , D b , D c }. C 3 does
properly cover C 2 ; it is legitimate to use D b twice in the covering, since it occurs twice in
SDC3 (P,S). Note that such duplication of subdomains frequently occurs in real subdomain-
based criteria; for example, in Table 2, D m = D n .
Also consider a criterion C 4 none of whose subdomains is contained in D d ∩ D e .
If C 4 covers C 2 , then C 4 properly covers C 2 : because no C 4
subdomain is contained in the intersection of the C 2 subdomains, there is no danger that one
of them will need to be used more than once in covering all the C 2 subdomains. 2
Example 5:
The next example also illustrates the properly covers relation. We show that for the
program P in Figure 2, all-uses properly covers all-edges, but all-p-uses does not properly
cover all-edges.
Consider the subdomains arising from the edges and definition-p-use associations of P
shown in Table 2 and the definition-c-use associations shown in Table 4. Recall that
SD all-uses (P,S) is the multi-set union of SD all-p-uses (P,S) and SD all-c-uses (P,S). Each
all-edges subdomain can be written as a union of subdomains drawn from this combined
multi-set in such a way that no subdomain is used more often than it occurs there; hence
this is a proper covering. On the other hand, when the subdomains arising just from
the all-p-uses criterion are used to cover the all-edges subdomains, it is necessary to
use some subdomains more often than they occur in the multi-set SD all-p-uses (P,S).
Consequently, all-p-uses does not properly cover all-edges for P and S. 2
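The properly covers relation adds a multiplicity constraint: across all the C 2 subdomains, no C 1 subdomain may be used more often than it occurs in the multi-set SDC1 (P,S). The following brute-force Python sketch is illustrative only; the concrete sets D a , ..., D e follow the reconstruction of Example 4 given above.

from collections import Counter
from itertools import combinations

def properly_covers(SD1, SD2):
    # SD1, SD2: lists of frozensets (multi-sets of subdomains).
    # Try to choose, for every D in SD2, a covering collection from SD1 so
    # that across all of SD2 no subdomain is used more often than it occurs.
    budget = Counter(SD1)

    def choices(D):
        parts = [Dp for Dp in set(SD1) if Dp <= D]
        for r in range(1, len(parts) + 1):
            for combo in combinations(parts, r):
                if frozenset().union(*combo) == D:
                    yield Counter(combo)

    def search(i, used):
        if i == len(SD2):
            return True
        for c in choices(SD2[i]):
            if all(used[k] + c[k] <= budget[k] for k in c):
                if search(i + 1, used + c):
                    return True
        return False

    return search(0, Counter())

# Example 4: the middle piece {1,2} is needed twice, so the covering is
# proper only if it occurs twice in the multi-set.
Da, Db, Dc = frozenset({0}), frozenset({1, 2}), frozenset({3})
Dd, De = frozenset({0, 1, 2}), frozenset({1, 2, 3})
print(properly_covers([Da, Db, Dc], [Dd, De]))        # False
print(properly_covers([Da, Db, Db, Dc], [Dd, De]))    # True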
The next three theorems show that if C 1 properly covers C 2 , then C 1 is guaranteed
to be better according to M 2 , but not necessarily according to M 1 or M 3 .
Theorem 10 There exists a program P, specification S, and criteria C 1 and C 2 such
that C 1 properly covers C 2 for P and S, but M 1 (C 1 ,P,S) < M 1 (C 2 ,P,S).
Proof:
Consider again the program P and specification S in Figure 2. As shown in Example 5,
all-uses properly covers all-edges for P and S. Since for this program M 1 (all-p-uses,P,S) =
M 1 (all-uses,P,S), Example 2 shows that M 1 (all-uses,P,S) < M 1 (all-edges,P,S). 2
On the other hand, the fact that C 1 properly covers C 2 does guarantee that it is
better according to M 2 .
Theorem 11 If C 1 properly covers C 2 for program P and specification S, then M 2 (C 1 ,P,S) >= M 2 (C 2 ,P,S).
In fact, given program P, specification S, and criteria C 1 and C 2 such that C 1 covers C 2 for P
and S, one can always manufacture a criterion C' such that C' properly covers C 2 for P and S, by
simply adding duplicates of certain C 1 subdomains to SDC1 (P,S).
The following lemma is proved in the Appendix:
Lemma 1 Let D, D 1 , ..., D k be subdomains with D = D 1 ∪ ... ∪ D k , let m and d denote the
number of failure-causing inputs in D and the size of D, and let m j and d j denote the
corresponding quantities for D j . Then 1 - m/d >= (1 - m 1 /d 1 )(1 - m 2 /d 2 ) ... (1 - m k /d k ).
We now prove Theorem 11.
Proof:
Assume C 1 properly covers C 2 for program P and specification S. Let SDC2 (P,S) = {D 2 1 , ..., D 2 n },
and let M = {D 1 1;1 , ..., D 1 1;k1 , ..., D 1 n;1 , ..., D 1 n;kn }
be a multi-set such that M ⊆ SDC1 (P,S) and D 2 i = D 1 i;1 ∪ ... ∪ D 1 i;ki for each i. Then

   1 - M 2 (C 2 ,P,S) = (1 - m 2 1 /d 2 1 ) ... (1 - m 2 n /d 2 n )                              (1)
                    >= product over all i and j of (1 - m 1 i;j /d 1 i;j )                      (2)
                    >= product over all D in SDC1 (P,S) of (1 - m D /d D ) = 1 - M 2 (C 1 ,P,S)  (3)

Line (2) follows from Lemma 1, and line (3) holds because no D 1 i;j occurs more times in
the product in line (2) than in the product in line (3), and every additional factor in line (3)
is at most 1. Consequently, M 2 (C 1 ,P,S) >= M 2 (C 2 ,P,S). 2
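The inequality of Lemma 1, as reconstructed above, can be exercised with a quick randomized sanity check (this is illustration, not a proof); the universe size and sampling parameters below are arbitrary.

import random
from math import prod

def rate(D, failures):
    # Fraction of failure-causing inputs in subdomain D.
    return len(D & failures) / len(D)

random.seed(0)
universe = range(20)
for _ in range(1000):
    pieces = [frozenset(random.sample(universe, random.randint(1, 8)))
              for _ in range(random.randint(1, 4))]
    D = frozenset().union(*pieces)
    failures = frozenset(random.sample(universe, random.randint(0, 20)))
    lhs = 1 - rate(D, failures)
    rhs = prod(1 - rate(p, failures) for p in pieces)
    assert lhs >= rhs - 1e-12      # Lemma 1, as reconstructed above
print("no counterexample found")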
Theorem 12 There exists a program P, specification S, and criteria C 1 and C 2 such that
C 1 properly covers C 2 for (P,S), but M 3 (C 1 ,P,S,k 2 ) < M 3 (C 2 ,P,S,k 1 ), where
k 1 = |SDC1 (P,S)| and k 2 = |SDC2 (P,S)|.
Proof:
Let D a = {0}, D b = {1,2}, D c = {3}, D d = {0,1,2}, and D e = {1,2,3}, as in Example 4,
over the integer input domain {x | 0 <= x <= 3}. As shown in Example 4 (where C 1 was
called C 3 ), the criterion C 1 with SDC1 (P,S) = {D a , D b , D b , D c } properly covers (in fact
properly partitions) the criterion C 2 with SDC2 (P,S) = {D d , D e }. Let P be a program with
an integer input domain {x | 0 <= x <= 3} that is correct with respect to its specification
for every input except 1. Then, with k 1 = 4 and k 2 = 2,
M 3 (C 1 ,P,S,2) = 1 - (1/2)^2 (1/2)^2 = 15/16
and
M 3 (C 2 ,P,S,4) = 1 - (2/3)^8 > 15/16,
so M 3 (C 1 ,P,S,k 2 ) < M 3 (C 2 ,P,S,k 1 ). 2
3.5 The properly partitions relation
Let SDC1 (P,S) = {D 1 1 , ..., D 1 m }, and let SDC2 (P,S) = {D 2 1 , ..., D 2 n }. C 1
properly partitions C 2 for (P,S) if there is a multi-set
M = {D 1 1;1 , ..., D 1 1;k1 , ..., D 1 n;1 , ..., D 1 n;kn }
such that M ⊆ SDC1 (P,S) (as multi-sets),
D 2 i = D 1 i;1 ∪ ... ∪ D 1 i;ki for each i = 1, ..., n,
and for each i, the collection {D 1 i;1 , ..., D 1 i;ki } is pairwise disjoint. C 1 universally properly
partitions C 2 if for every program P and specification S, C 1 properly partitions C 2 for
(P,S).
Observation 5 The properly partitions relation is reflexive and transitive. If C 1 properly
partitions C 2 for (P,S) then C 1 partitions C 2 for (P,S) and C 1 properly covers C 2 for
(P,S). If SDC2 (P,S) ⊆ SDC1 (P,S), then C 1 properly partitions C 2 for (P,S). If the
elements of SDC2 (P,S) are pairwise disjoint (i.e., if C 2 induces a true partition of the
input domain) then C 1 partitions C 2 if and only if C 1 properly partitions C 2 .
Theorem 13 If C 1 properly partitions C 2 for program P and specification S then M 1 (C 1 ,P,S) >= M 1 (C 2 ,P,S).
Proof:
If C 1 properly partitions C 2 for (P,S) then C 1 partitions C 2 for (P,S), so the result follows
from Theorem 8. 2
Theorem 14 If C 1 properly partitions C 2 for program P and specification S then M 2 (C 1 ,P,S) >= M 2 (C 2 ,P,S).
Proof:
If C 1 properly partitions C 2 for (P,S) then C 1 properly covers C 2 for (P,S), so the result
follows from Theorem 11. 2
Theorem 15 There exists a program P, specification S, and criteria C 1 and C 2 such that C 1 properly partitions C 2 for (P,S), but M 3 (C 1 ,P,S,k 2 ) < M 3 (C 2 ,P,S,k 1 ).
Proof:
In the proof of Theorem 12, C 1 properly partitions C 2 . 2
Conclusion
In this paper, we have defined several relationships between software testing criteria, each
induced by a relation between the corresponding multi-sets of subdomains. We have also
investigated whether, for each of these relations R, the fact that C 1 R C 2 guarantees that C 1 is better
at detecting faults than C 2 , according to various measures. Each of our three measures
of fault-detecting ability is related to the probability that a test suite selected according
to a particular strategy will detect a fault.
The first relation examined was the narrows relation, which was shown to be closely
related to a commonly used means of comparing criteria called subsumption. We showed
that the fact that criterion C 1 narrows criterion C 2 does not guarantee that C 1 is better
at detecting faults than C 2 according to any of the measures. The next relation examined
was the covers relation, which strengthens the narrows relation in a natural way. Some
well known pairs of criteria, such as all-uses and all-edges are related to one another
according to this relation. We also showed that the fact that criterion C 1 covers criterion
does not guarantee that C 1 is better at detecting faults than C 2 according to any
of the measures. The partitions relation further strengthens the covers relation. While
the fact that C 1 partitions C 2 does guarantee that C 1 is better according to one of the
measures (M 1 ), it does not guarantee that C 1 is better according to the other measures.
The last two relations between criteria, the properly covers and properly partitions
relations were designed to overcome some of the deficiencies of the covers and partitions
relations. We proved that the fact that C 1 properly covers C 2 does guarantee that C 1 is
better than C 2 according to measure M 2 , and that the fact that C 1 properly partitions
guarantees that C 1 is better than C 2 according to measures M 1 and M 2 .
Since for most criteria of interest the universally narrows relation is equivalent to the
subsumes relation, one could interpret our results as saying that subsumption is a poor
basis for comparing criteria. However, it is important to note that the results here are
worst case results in the sense that we consider only whether or not the fact that one
criterion subsumes another guarantees improved fault-detecting ability. The question
of what C 1 subsuming (or narrowing, covering, or partitioning) C 2 tells us about their
relative ability to detect faults in "typical" programs remains open.
We have recently shown that the all-p-uses and all-uses criteria do properly cover a
variant of branch testing known as decision coverage and investigated the relationships
among several other subdomain-based criteria, including other data flow testing criteria,
mutation-testing, and multiple-condition coverage [5]. Other directions for future analytical
research include finding conditions on programs that guarantee that C 1 properly
covers C 2 for various well-known criteria C 1 and C 2 ; finding conditions under which C 1
is guaranteed to be better than C 2 according to M 3 , and finding weaker conditions that
guarantee that C 1 is better than C 2 according to M 2 . Our results also suggest several
more pragmatic research problems, including the design of new criteria that are guaranteed
to properly cover commonly used criteria such as all-edges, and the design of test
data selection tools that approximate the selection strategies upon which our results are
based.
Acknowledgments
Some of the results in this paper appeared in the Proceedings of the ACM SIGSOFT '91
Conference on Software for Critical Systems [4].
--R
An evaluation of random testing.
An experimental comparison of the effectiveness of the all-uses and all-edges adequacy criteria
An applicable family of data flow testing criteria.
Assessing the fault-detecting ability of testing methods
Analytical comparison of several testing strategies.
A mathematical framework for the investigation of testing.
Partition testing does not inspire confidence.
A data flow analysis approach to program testing.
Some observations on partition testing.
A data flow oriented program testing strategy.
On required element testing.
Data flow analysis techniques for program test data selection.
Selecting software test data using data flow information.
Analyzing partition testing strategies.
Comparison of program testing strate- gies
--TR
Selecting software test data using data flow information
An Applicable Family of Data Flow Testing Criteria
Some observations on partition testing
Partition Testing Does Not Inspire Confidence (Program Testing)
Analyzing Partition Testing Strategies
Comparison of program testing strategies
An experimental comparison of the effectiveness of the all-uses and all-edges adequacy criteria
Assessing the fault-detecting ability of testing methods
Data flow analysis techniques for test data selection
--CTR
Dick Hamlet, What can we learn by testing a program?, ACM SIGSOFT Software Engineering Notes, v.23 n.2, p.50-52, March 1998
Antonia Bertolino , Lorenzo Strigini, Using testability measures for dependability assessment, Proceedings of the 17th international conference on Software engineering, p.61-70, April 24-28, 1995, Seattle, Washington, United States
Weyuker , T. Goradia , A. Singh, Automatically Generating Test Data from a Boolean Specification, IEEE Transactions on Software Engineering, v.20 n.5, p.353-363, May 1994
Allen S. Parrish , Stuart H. Zweben, On the Relationships Among the All-Uses, All-DU-Paths, and All-Edges Testing Criteria, IEEE Transactions on Software Engineering, v.21 n.12, p.1006-1009, December 1995
Finding failures by cluster analysis of execution profiles, Proceedings of the 23rd International Conference on Software Engineering, p.339-348, May 12-19, 2001, Toronto, Ontario, Canada
R. K. Singh , Pravin Chandra , Yogesh Singh, An evaluation of Boolean expression testing techniques, ACM SIGSOFT Software Engineering Notes, v.31 n.5, September 2006
W. Eric Wong , Yu Qi , Kendra Cooper, Source code-based software risk assessing, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Silvia Regina Vergilio , Jos Carlos Maldonado , Mario Jino, Constraint Based Criteria: An Approach for Test Case Selection in the Structural Testing, Journal of Electronic Testing: Theory and Applications, v.17 n.2, p.175-183, April 2001
Sandro Morasca , Stefano Serra-Capizzano, On the analytical comparison of testing techniques, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
Tsong Yueh Chen , Yuen Tak Yu, On the Expected Number of Failures Detected by Subdomain Testing and Random Testing, IEEE Transactions on Software Engineering, v.22 n.2, p.109-119, February 1996
W. E. Howden , Yudong Huang, Software trustability analysis, ACM Transactions on Software Engineering and Methodology (TOSEM), v.4 n.1, p.36-64, Jan. 1995
Phyllis G. Frankl , Elaine J. Weyuker, An analytical comparison of the fault-detecting ability of data flow testing techniques, Proceedings of the 15th international conference on Software Engineering, p.415-424, May 17-21, 1993, Baltimore, Maryland, United States
R. M. Hierons, Comparing test sets and criteria in the presence of test hypotheses and fault domains, ACM Transactions on Software Engineering and Methodology (TOSEM), v.11 n.4, p.427-448, October 2002
Hong Zhu, A Formal Analysis of the Subsume Relation Between Software Test Adequacy Criteria, IEEE Transactions on Software Engineering, v.22 n.4, p.248-255, April 1996
Dick Hamlet, Foundations of software testing: dependability theory, ACM SIGSOFT Software Engineering Notes, v.19 n.5, p.128-139, Dec. 1994
Andy Podgurski , Wassim Masri , Yolanda McCleese , Francis G. Wolff , Charles Yang, Estimation of software reliability by stratified sampling, ACM Transactions on Software Engineering and Methodology (TOSEM), v.8 n.3, p.263-283, July 1999
Walter J. Gutjahr, Partition Testing vs. Random Testing: The Influence of Uncertainty, IEEE Transactions on Software Engineering, v.25 n.5, p.661-674, September 1999
J. Weyuker, More Experience with Data Flow Testing, IEEE Transactions on Software Engineering, v.19 n.9, p.912-919, September 1993
Elaine J. Weyuker, Using operational distributions to judge testing progress, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Phyllis G. Frankl , Yuetang Deng, Comparison of delivered reliability of branch, data flow and operational testing: A case study, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.124-134, Sept. 2000
Christoph C. Michael , Jeffrey Voas, The ability of directed tests to predict software quality, Annals of Software Engineering, 4, p.31-64, 1997
Chi Keen Low , T. Y. Chen , Ralph Rnnquist, Automated Test Case Generation for BDI Agents, Autonomous Agents and Multi-Agent Systems, v.2 n.4, p.311-332, November 1999
Silvia Regina Vergilio , Jos Carlos Maldonado , Mario Jino , Inali Wisniewski Soares, Constraint based structural testing criteria, Journal of Systems and Software, v.79 n.6, p.756-771, June 2006
Philip J. Boland , Harshinder Singh , Bojan Cukic, Comparing Partition and Random Testing via Majorization and Schur Functions, IEEE Transactions on Software Engineering, v.29 n.1, p.88-94, January
P. G. Frankl , E. J. Weyuker, Provable Improvements on Branch Testing, IEEE Transactions on Software Engineering, v.19 n.10, p.962-975, October 1993
Richard A. DeMillo , Aditya P. Mathur , W. Eric Wong, Some Critical Remarks on a Hierarchy of Fault-Detecting Abilities of Test Methods, IEEE Transactions on Software Engineering, v.21 n.10, p.858-861, October 1995
W. Eric Wong , Tatiana Sugeta , J. Jenny Li , Jos C. Maldonado, Coverage testing software architectural design in SDL, Computer Networks: The International Journal of Computer and Telecommunications Networking, v.42 n.3, p.359-374, 21 June
Prem Devanbu , Stuart G. Stubblebine, Cryptographic verification of test coverage claims, ACM SIGSOFT Software Engineering Notes, v.22 n.6, p.395-413, Nov. 1997
Mrcio Eduardo Delamaro , Jos Carlos Maldonado , Alberto Pasquini , Aditya P. Mathur, Interface Mutation Test Adequacy Criterion: An Empirical Evaluation, Empirical Software Engineering, v.6 n.2, p.111-142, June 2001
Marcio E. Delamaro , Jos C. Maldonado , Aditya P. Mathur, Interface Mutation: An Approach for Integration Testing, IEEE Transactions on Software Engineering, v.27 n.3, p.228-247, March 2001
C. C. Michael , G. McGraw , M. A. Schatz, Generating Software Test Data by Evolution, IEEE Transactions on Software Engineering, v.27 n.12, p.1085-1110, December 2001
P. G. Frankl , S. N. Weiss, An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing, IEEE Transactions on Software Engineering, v.19 n.8, p.774-787, August 1993
Premkumar Thomas Devanbu , Stuart G. Stubblebine, Cryptographic Verification of Test Coverage Claims, IEEE Transactions on Software Engineering, v.26 n.2, p.178-192, February 2000
Michael Ellims , James Bridges , Darrel C. Ince, The Economics of Unit Testing, Empirical Software Engineering, v.11 n.1, p.5-31, March 2006
Hong Zhu , Lingzi Jin , Dan Diaper , Ganghong Bai, Software requirements validation via task analysis, Journal of Systems and Software, v.61 n.2, p.145-169, March 2002
Matthew B. Dwyer , John Hatcliff , Robby Robby , Corina S. Pasareanu , Willem Visser, Formal Software Analysis Emerging Trends in Software Model Checking, 2007 Future of Software Engineering, p.120-136, May 23-25, 2007
Hong Zhu , Patrick A. V. Hall , John H. R. May, Software unit test coverage and adequacy, ACM Computing Surveys (CSUR), v.29 n.4, p.366-427, Dec. 1997 | fault-detecting ability;software testing criteria;program testing;subdomains;system recovery;multisets;formal analysis;subsumption relation;probabilistic measures |
631040 | A New Approach to Version Control. | A method for controlling versions of software and other hierarchically structured entities is presented. Using the variant structure principle, a particular version of an entire system is formed by combining the most relevant existing versions of the various components of the system. An algebraic version language that allows histories (numbered series), subversions (or variants), and joins is described. It is shown that the join operation is simply the lattice least upper bound and together with the variant structure principle, provides a systematic framework for recombining divergent variants. The utility of this approach is demonstrated using LEMUR, a programming environment for modular C programs, which was developed using itself. The ways in which this notion of versions is related to the possible world semantics of intensional logic are discussed. | Introduction
Software systems undergo constant evolution. Specifications change, improvements are
made, bugs are fixed, and different versions are created to suit differing needs. As these
changes are made, families of systems arise, all very similar, yet different. Handling such
changes for a large system is a non-trivial task, as its different components will evolve
differently.
Existing version control and software configurations systems have succeeded in solving
some of the problems of dealing with this evolution. Pure version control systems such as
sccs [22] and rcs [28, 30], using delta techniques to save storage space, keep track of the
changes made by the different programmers to a file. Other space saving techniques have
also been developed [8, 16, 18, 29]. Software configuration systems such as make [7] allow
for the automatic reconfiguration of a system when changes are made to a component. Also,
more detailed analysis of changes to components reduces much useless compiling [24, 31].
Integrated systems attempt to combine these ideas. Among the better known are System
Modeller [11, 23], Tichy's work at CMU [26, 27], GANDALF [4, 9, 17], Adele [1, 2, 5, 6],
DSEE [12], Jasmine [15], shape [13, 14] and Odin [3]. These systems, to a greater or lesser
degree, allow for the development of large projects being developed by many different
programmers. They use software databases, version control for the files, sometimes also for
modules, as well as allowing the restriction of certain tasks to certain individuals. Some of
them integrate versioning of files right into the operating system.
Despite these advances, the integration of hierarchically-structured entities and version
control is still not satisfactory. Using a system such as rcs or sccs, for each file, there
is a tree of revisions. The trunk is considered to be the 'main' version, and the branches
correspond to 'variants'. Often, when a number of changes have been made to a variant,
the changes are 'merged' (sometimes textually) back into the trunk. The tree structure
does not show how this merge took place.
If integrated environments, such as Adele, are used, then for each module family, there
can be variants of the specification. For each specification, there can be variants of the
implementation. And then for each implementation, there is an rcs-like structure for the
development of the implementation.
In both cases, a tree structure is used for versions; yet, the tree structure is not appropriate
for software development, because of the constant 'merging' of different changes to
the same system. A directed acyclic graph (dag) would be more appropriate. For example,
suppose a program is written to work with a standard screen in English. Two people independently
modify the program. The first adds a graphics interface, and the other changes
the error messages to French. And then someone asks for a version which has both graphics
and French messages. This new version inherits from its two ancestors, just as classes can
inherit from several ancestors in object-oriented programming.
The concept of variant is not fully developed. Parnas [19] described the need for families
of software, and showed that having variants is a good idea, however the concept has still
not been formalised. In the discussion on variants in [32], no one could give a definition
of variant. In [14], we read, "We suspect that it is still an unsolved problem of software
engineering to produce portable software designs in the sense of predicting and planning
the possibility that certain modules of a system sprout variant branches. It is still a fact
that variants happen."
Perhaps the problem is that variants must be planned, instead of being allowed to
happen. Furthermore, one should be able to refer to the version of a complete system, in
the same language as one does for the versions of components.
This paper addresses the concept of variant, and how the variants of a complete system
relate to the versions of individual components. Section 2 presents the need for versions
of complete systems, and informally presents how versions of components and complete
systems should interact. Section 3 formally presents an algebra for versions, which allows
subversions and join versions, along with a refinement relation between versions; the version
space therefore creates a lattice. Section 4 formally presents the relationship between
versions of complete systems and of components, using the 'variant substructure principle'.
Section 5 illustrates how this version language is used in an already existing C programming
environment. Finally, section 6 discusses some of the ideas, presenting possible extensions,
as well as showing that they could be integrated into existing configuration management
systems.
Global versions
The main weakness of existing tools is that the different versions of a component have only
a local significance. It might be the case, for example, that there is a third version of
component A and also a third version of component B. But there is no a priori reason to
expect any relationship between the third versions of separate components.
The only exception is in the concept of variant. For integrated environments such as
Adele, a variant represents a different interface to a module, and so has more than local
significance. But the components of the implementation of each interface are completely
separate, thereby creating a situation of code duplication, or of juggling with software
configuration.
This lack of correspondence between versions of different components makes it difficult
to automatically build a complete system. Instead, users are allowed to mix and match different
versions of different components arbitrarily. These tools give the users the 'freedom'
of building any desired combination; but they also burden them with the responsibility of
deciding which of the huge number of possible combinations will yield a consistent, working
instance of the system.
In our approach, however, version labels (which are not necessarily numbers) are intended
to have a global, uniform significance. Thus the fast version of component A is
meant to be combined with the fast version of component B. Programmers are expected
to ensure that these corresponding versions are compatible.
One advantage of this approach is that it is now possible to talk of versions of the complete
system-formed, in the simplest case, by uniformly choosing corresponding versions
of the components. Suppose, for example, that we have created a fast version of every
component of a (say) compiler. Then we build the fast version of the compiler by combining
the fast versions of all the components.
Of course, in general it is unrealistic to require a distinct fast version of every component.
It may be possible to speed up the compiler by altering only a few components, and only
these components will have fast versions. So we extend our configuration rule as follows:
to build the fast compiler we take the fast version of each component, if it exists; otherwise
we take the ordinary 'vanilla' version.
We generalize this approach by defining a partially ordered algebra of version labels.
The partial order is the refinement relation: V v W , read as 'V is refined by W , or 'V
is relevant to W , means (informally) that W is the result of further developing version V .
The basic principle is that in configuring version W of a system, we can use version V of a
particular component if the component does not exist in a more relevant version. That is,
we can use version V of the component as long as the component does not exist in version
We then use this refinement ordering to automate the building of complete system. The
user specifies only which version of the complete system is desired; our 'variant structure
principle' defines this to be the result of combining the most relevant version of each
component.
3 Version space
In this section we introduce our version algebra, giving the rules and practical applications
of each of the version operators. The simplest possible algebra would allow only one version.
We call this version the vanilla version, written ε, the empty string.
3.1 Project history
The simplest versions are those which correspond to successive stages in the development
process: version 1, version 2, version 3, etc. An obvious extension is to allow subsequences:
1.1, 1.2, or 2.3.1.
Having such a version control system would not just facilitate maintenance. It would
also aid the recovery from error, be it physical, such as the accidental destruction of a file,
or logical, such as the introduction of a flawed algorithm. Furthermore, it would allow the
recuperation of previously rejected ideas. It is not uncommon for an idea to be conceived,
partially thought through and rejected, only to be needed six months later.
It was to solve this kind of problem that programs such as sccs and rcs were designed;
in fact, the notion of numeric string to keep track of the successive stages is quite suitable.
However, the . of rcs has two different meanings. Version 1.2.3.4 actually means subversion
3.4 of version 1.2, and is not on the trunk of the version tree. Versions 1.2.3.4
and 1.3 are therefore incomparable, even though it would appear from the figures that 1.3
succeeds 1.2.3.4.
In our version space, numeric versions can only build one branch. Subversions must be
used to create forks. Our initial set of possible versions can be described by the following
grammar:
V ::= ε | V.n
where n is a non-negative integer.
The refinement order as described earlier indicates how one version is derived from
others. This order, written v, must be well-founded and transitive: there is no infinite
strictly descending chain of versions, and V v V' and V' v V'' together imply V v V''.
For our current set of versions, we use the intuitive, dictionary order: N v M if N is a
prefix of M, or if N is not a prefix of M and, at the first numeric component where they
differ, N's component is smaller than M's.
So, for example, 1.2.3.4 v 1.3 v 2.4.5.
How numeric versions would be used would depend on the environment in which they
are used. One example would be to keep a complete record of all changes to all files.
This could be done by having six-number version names, corresponding to the date, as in
1992.06.18.11.18.29. Another approach would be to use more numbers as editing of
a file is taking place, and fewer numbers for the 'real' versions, that have some meaning:
versions 1.2.1, 1.2.2 and 1.2.3 could then correspond to the successive edits of version
1.2, ultimately yielding version 1.3.
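For illustration (this sketch is not part of the paper), the dictionary order on purely numeric version strings can be implemented as follows; treating the vanilla version as the empty string is an assumption consistent with the text.

def refines(v, w):
    """True iff numeric version string v is refined by w (v before w in the
    dictionary order), e.g. '1.2.3.4' before '1.3'.  Subversions and joins
    are not handled in this sketch."""
    a = [int(n) for n in v.split('.')] if v else []
    b = [int(n) for n in w.split('.')] if w else []
    for x, y in zip(a, b):
        if x != y:
            return x < y          # first differing component decides
    return len(a) <= len(b)       # otherwise the prefix is the earlier version

print(refines('1.2.3.4', '1.3'))  # True
print(refines('1.3', '2.4.5'))    # True
print(refines('2.4.5', '1.3'))    # False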
3.2 Differing requirements
If a piece of software is going to be used by people in differing environments, it is likely
that the requirements of those users will differ.
One of the most important differences would be at the level of user interface. Some
aspects are a matter of personal taste, such as does one prefer to use graphics and menus,
or does one prefer text? Others are a necessity. A Syrian would want to read and write
in Arabic, a Japanese using katakana, hiragana and kanji, and a Canadian would want to
be able to choose between English and French. Even if the essential functionality were the
same, the differences in user interface would be significant.
But differences in functionality can also appear. For example, most Lisp systems are
Brobdingnagian 1 , as everything, including the kitchen sink, is included. Yet the typical
Lisp user has no need for many of the packages that are offered. Rather than being forced
to take the mini or the maxi version, users should be able to pick and choose among the
packages that they need. For this particular example, autoload features can be used, but
this is not the case for all systems.
Differences in implementation may also arise as one ports a system from one machine
to another. The versions for machines X and Y may be identical, but differ with that for
machine Z.
To handle these problems, we need to introduce the concept of a subversion, called
variant in many systems. This problem was partially addressed in sccs and rcs, with
the introduction of branches; unfortunately, relying on the numeric strings to identify the
branches becomes very unwieldy. We choose the path of naming the subversions. Our new
space of possible versions becomes:
V ::= ε | V.n | V%V | x
1 Brobdingnag was an imaginary country of giants in Jonathan Swift's Gulliver's Travels.
where x is any alphanumeric string. For example, the graphics%mouse version of a user
interface would be the mouse subversion of the graphics version of the user interface. Parentheses
can be inserted at will to reduce ambiguity.
Unlike in rcs, names are not variables, but constants. They do not represent anything
except themselves. Under the refinement relation, they are all incomparable.
We need one more axiom for subversions: V v V%W, so that every version is refined by
each of its subversions.
We consider the % operator to have ε as identity and to be associative:
V%ε = V = ε%V,    (V1%V2)%V3 = V1%(V2%V3).
Subversions can be very powerful. For example, consider the task of simultaneously
maintaining separate releases. This is a common example, as it is normal to have a working
version and a current version being developed, yet it is difficult to handle properly. Suppose
that the two current releases are 2.3.4 and 3.5.6. If we wish to make repairs to 2.3.4,
a subversion is required. For if we were to create a version 2.3.4.1 to fix the bug, then
that version would still be considered to be anterior to 3.5.6, which does not correspond
to reality. Rather we would want a 2.3.4%bugfix.
The reader might wonder why the grammar allowed for V %V rather than V %x. Consider
the task of Maria and Keir each working separately on their own subsystems, each
with their own sets of versions and subversions. When their work is merged, to prevent
any ambiguity, all of Maria's versions could be preceded by Maria%; similarly for Keir.
3.3 Joins of versions
Subversions allow for different functionalities. But it is not uncommon for different subver-
sions to be compatible. For example we can easily imagine wanting a Japanese Lisp system
with infinite precision arithmetic and graphics for machine X. To handle this sort of thing,
we need to be able to join versions. Our Lisp version would be Japanese+graphics+infinite+X.
Our final space of versions becomes:
To make the order complete, we add two more axioms:
The + operator is idempotent, commutative and associative, and left-distributes %.
The + operator is defined so that it is the least upper bound operator induced by the ⊑ relation: V1 + V2 is the least upper bound of V1 and V2, that is, V1 ⊑ V1 + V2 and V2 ⊑ V1 + V2, and for all V such that V1 ⊑ V and V2 ⊑ V, V1 + V2 ⊑ V holds.
The variant substructure principle, presented in §4, calls for the use of the most relevant
components when a particular configuration is being built. If the + operator were not
defined as the least upper bound operator, then the term 'most relevant' would have no
meaning.
The fact that the version language allows the join of independent versions does not
necessarily mean that any arbitrary join actually makes sense. However, should a merge
be made, and it does make sense, then the join perfectly addresses the need to describe
the merge. Do note that no merging of text or of code, as in [10], is taking place here.
The merging only takes place at the version name, and the configuration manager must
ensure that the components do make sense together. The only checking that takes place is
syntactic, at the level of the version names (see §4).
3.4 Canonical form
The equality axioms allow a canonical form for all version expressions. In fact, except for
the commutative and associative rules of +, the equations simply become rewrite rules.
For joins of versions, we must introduce a total order on subversions, which corresponds to some form of 'dictionary order': x precedes y in this order whenever x alphabetically precedes y. With the dictionary order, we can get a canonical form for the joins as well. For example (it is assumed that + is right-associative):
last+first → first+last
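As an illustration of the canonicalisation step (again a sketch of ours, not the code of any tool described here), the following Python function flattens a join of subversions, discards duplicates by idempotence, and lists the result in a fixed total order, so that any two joins of the same subversions canonicalise identically.

# Illustrative sketch: each subversion is a tuple of name components; a join is
# canonicalised by removing duplicates (idempotence) and sorting by a fixed total
# order on subversions (here Python's default tuple ordering plays the role of the
# dictionary order), so commutativity and associativity come for free.

def canonical_join(*subversions):
    return tuple(sorted(set(tuple(v) for v in subversions)))

a = canonical_join(("Japanese",), ("graphics",), ("infinite",), ("X",))
b = canonical_join(("X",), ("graphics",), ("Japanese",), ("infinite",), ("graphics",))
assert a == b            # same canonical form regardless of order or repetition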
4 Versions and structure
Up to now, we have been referring indiscriminately to versions of complete systems and to
versions of components. A question arises: how do these interact?
As was already explained, we do not require that every component exist in every version.
Instead, we consider the absence of a particular version as meaning that a more 'generic'
version is adequate; in the simplest case, as meaning that the 'vanilla' version is appropriate.
This means, for example, that when we configure the French version we use the French
version of each component, if it exists; otherwise we use the standard one.
In general, however, the vanilla version is not always the best alternative. Suppose, for
example, that we need the Keir%apple%fast version (which can be understood as the 'fast
version of Keir's apple version'). If a certain component is not available in exactly this
version, we would hardly be justified in assuming that the vanilla one is appropriate. If
there is a Keir%apple version, we should certainly use it; and failing that, the Keir version,
if it exists. The plain one is indicated only if none of these other more specific versions is
available.
Our general rule is that when constructing version V of a system, we choose the version of
each component which most closely approximates V (according to the ordering on versions
introduced earlier). We could call this the 'most relevant' version. More precisely, to select
the appropriate version of a component C, let V be the set of versions in which C is available.
The set of relevant versions is the set of those available versions V′ such that V′ ⊑ V. The most relevant version is the maximum
element of this set, if there is one. If there is no maximum element, there is an error
condition, and there is no version V of the given system.
We can generalize the principle as follows: suppose that an object S has components
C1, C2, ..., Cn. Then the version of S which is most relevant to V is formed by joining
versions V1 of C1, V2 of C2, ..., Vn of Cn, where in each case Vi is the version of Ci most relevant to V.
Furthermore, the version of S constructed is V1 + V2 + ... + Vn.
This principle, which we will call the 'variant structure principle', describes exactly the
way in which subversions of a system can 'inherit' components from a superversion. It also
accords well with motivations given for the various version forming operators described in
an earlier section. For example, it specifies that in constructing version 3.2 of an object, we
take version 2.8.2 of an object which exists in versions 3.4, 2.8.2, 2.7.9, 1.8, 1.5.6
and ε. It specifies that in building version Keir%apple%fast we select the Keir%apple
version of a component that exists in versions Keir, Keir%apple, Keir%fast, apple%fast,
Maria%apple, fast and ε.
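The selection rule can be sketched in a few lines of Python. This is our illustration, not the implementation described later, and it deliberately simplifies the ordering: only pure subversion chains are modelled, and V1 is taken to lie below V2 exactly when V1 is a prefix of V2 (so Keir ⊑ Keir%apple ⊑ Keir%apple%fast); numeric versions and joins are left out. With that simplification, the Keir%apple%fast example above comes out as in the text.

# Illustrative sketch with a simplified ordering: a version is a tuple of names,
# and v1 is below v2 when v1 is a prefix of v2.

def below(v1, v2):
    return v2[:len(v1)] == v1

def most_relevant(available, requested):
    """Maximum of the available versions that lie below the requested one;
    raises an error if there is no (unique) maximum."""
    relevant = [v for v in available if below(v, requested)]
    maxima = [v for v in relevant if all(below(w, v) for w in relevant)]
    if not maxima:
        raise ValueError("no most relevant version: the configuration does not exist")
    return maxima[0]

available = [("Keir",), ("Keir", "apple"), ("Keir", "fast"), ("apple", "fast"),
             ("Maria", "apple"), ("fast",), ()]
print(most_relevant(available, ("Keir", "apple", "fast")))   # ('Keir', 'apple')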
Finally, the principle also explains how + solves the problem of combining versions;
for example, combining Maria's orange and Keir's apple versions (see §3). The desired
version for the system would be Maria%orange+Keir%apple. According to our rule, for each
component we select the version most relevant to Maria%orange when no Keir version is
available, and the version most relevant to Keir%apple when no Maria version is available.
Thus if the versions of component C available are Keir%pear, Fred%apple, Keir, apple
and ffl, we take version Keir. On the other hand, if the versions available are Keir%apple,
Maria, and ffl, then there is no best choice and the system does not exist in the desired
version.
Notice that it is possible to construct version Maria%orange+Keir%apple even when no
component exists in that version. However, it also makes sense for an individual component
to exist in a version with a + in it. This allows otherwise incompatible projects to be merged.
Consider, for example, the situation just described; both Keir and Maria have seen fit to
alter component C, which exists in both Keir%apple and Maria versions. According to our
principle, we cannot form a Maria%orange+Keir%apple version of the system because there
is no appropriate version of the component in question. The solution is for Keir and Maria
to get together and produce a mutually acceptable compromise version which is compatible
with both the Keir%apple and Maria variants of the system. If they can do so, they label
this compromise component as the Maria+Keir%apple version of the component. Having
done this, our principle now says that there is a Maria%orange+Keir%apple version of the
whole system, because now the compromise version is the most relevant. (Recall that in
the version ordering, both Maria and Keir%apple lie below Maria+Keir%apple which in
turn lies below Maria%orange+Keir%apple.)
5 Lemur
To test our notion of versions, we added it to Sloth, an existing software engineering
environment for C programs developed by the authors. The resulting, 'evolved' program is
Lemur. For a more complete presentation of Sloth, as well as a comparison with related
work, see [20].
5.1 Sloth
Sloth is a set of tools designed to facilitate the reusability of C programs. A system of modules
was devised, more sophisticated than the method traditionally used for C programs.
Each module is a unix directory: there are two interface files (extern.i for externally
visible variables and define.i for manifest constants), two implementation files (var.i
for local variables and proc.i for local routines), as well as the body.i containing the
initialization code. The import file states which modules are needed for this module to run
correctly.
Sloth has three commands. The vm command is used to view files and the mm command to
modify them. The lkm command, original to Sloth, builds, for each module, a uselist
file containing a list of all the modules that it depends on by computing the transitive
closure of the import dependencies. It then builds a prog.c file from all of the component
files and compiles it; the resulting prog.o file is linked with the prog.o files of the other
modules to make a complete system.
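The dependency step that lkm performs is essentially a transitive closure over the import relation. The sketch below is ours, not Sloth's source, and the module names in the example are hypothetical.

# Illustrative sketch: compute a module's uselist as the transitive closure of its
# direct imports.  `imports` maps each module to the modules it imports directly.

def uselist(module, imports):
    seen = set()
    stack = [module]
    while stack:
        for dep in imports.get(stack.pop(), []):
            if dep not in seen:
                seen.add(dep)
                stack.append(dep)
    return seen

imports = {"main": ["parser", "symtab"], "parser": ["lexer", "symtab"], "lexer": []}
print(sorted(uselist("main", imports)))    # ['lexer', 'parser', 'symtab']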
Sloth has shown itself to be remarkably useful, and the intended goal of reusability
is being met. The PopShop, in which several compilers are written, consists of over 100
different modules, and builds more than 10 different applications. The reader is asked to
refer to [20] for more details.
5.2 Lemur: Sloth with versions
Lemur is an evolved form of Sloth. Lemur allows the user to create and label different
versions of the individual files which make up a module. A label can be any element of the
version space described above, represented in a simple linear syntax and used as an extension
of the file name. For example, if Keir needs a separate version of the procedure definition
file of a module, he would create (inside the module) a new file proc Keir%apple.i.
And if his apple version had its own fast version, and if this fast subversion required
further changes to the module's procedure definitions, he would create an additional file
proc Keir%apple%fast.i. Note the new files do not replace the old ones; the different
versions coexist. Note also that not every file exists in every version. For example, the fast
subversion of Keir%apple may require only a few changes. As a result, there will only be
a few files with the full Keir%apple%fast label.
With Lemur, only the basic component files have explicit, user-maintained versions.
The users do not directly create separate versions of whole modules, or of applications.
Instead, Lemur uses the principle of the previous section to create, automatically, any
desired version of an application.
Suppose, for example, that Maria would like to compile and run her Maria%orange
version of the project (call it comp). She invokes the Lemur configure command with
comp as its argument but with Maria%orange as the parameter of the -v option. Lemur
proceeds much as if the -v option were absent. It uses the import lists to form a 'uselist' of
all modules required; it checks that their .o files are up to date, recompiling if necessary;
and then it links together an executable (which would normally be called comp). The
difference, though, is that with each individual file it looks first for a Maria%orange version,
instead of the 'vanilla' one. And when the link is completed, the executable is named
comp Maria%orange.
If every file needed has a Maria%orange version, the procedure is straightforward. As
we said earlier, however, we do not require that every file exist in the version requested.
When the desired version is not available, Lemur follows the principle of §4 and selects
the most relevant version which is available. In this instance, it means that if there is no
Maria%orange version Lemur looks for a Maria version; and if even that is unavailable,
it settles for the vanilla version of the file in question. This form of inheritance, implicit
in the variant structure principle, allows source code sharing between a version and its
subversions.
Lemur also follows the principle of §4 when creating and labelling the .o files for
individual modules. Suppose again that the Maria%orange version has been requested
and that the relevant .o file of the fred module must be produced. Lemur does this
automatically, using in each case the most relevant versions of the internal fred files required
and of the declarations imported from other modules. When the compilation is complete,
Lemur does not automatically label the resulting .o file fred Maria%orange.o. It does
so only if one of the files involved actually had the full Maria%orange label. Otherwise
it labels the .o file as fred Maria.o, assuming at least one Maria version was involved.
And if all the files involved were in fact vanilla, the .o file produced is given the vanilla
name fred.o. In general, it labels the .o file with the least upper bound of the versions of
the files involved in producing it. The resulting label may be much more generic than the
version requested; and this means that the same .o file can be used to build other versions
of the system. The inheritance principle therefore allows us to share object code between
a version and its subversions.
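The labelling rule for object files can be illustrated with the same simplified prefix ordering used in the earlier sketch (our illustration, not Lemur's code): when a configuration is built, every file actually used carries a version that lies below the requested one, so these versions are mutually comparable and their least upper bound is simply the longest chain involved. The object-file names in the comments are schematic.

# Illustrative sketch: label a .o file with the least upper bound of the versions
# of the source files that went into it (versions are tuples of names, and all of
# them are prefixes of the requested version, hence pairwise comparable).

def object_label(file_versions):
    return max(file_versions, key=len)

print(object_label([(), ("Maria",), ()]))                   # ('Maria',)           e.g. fred_Maria.o
print(object_label([(), ("Maria",), ("Maria", "orange")]))  # ('Maria', 'orange')  e.g. fred_Maria%orange.o
print(object_label([(), (), ()]))                           # ()                   the vanilla fred.o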
The high granularity of modules in Sloth allows one to do all sorts of interesting things.
For example, one could write a test version of a module interface which would allow a
tester to look at the values of inner variables. The advantage is that the code itself (the
implementation) would not change at all, nor would the files even be touched.
As the import file is separate, one can have one version of a module depend on one set
of modules, and another version depend on another set of modules. In other words, the hierarchical
structure (the shape) of the system can itself change from one version to another.
In this sense Lemur actually goes beyond the principle of the previous section, which assumed
the structure of a system to be invariant. However, we can easily reformulate the
general principle by stipulating that every structured object has an explicit 'subcomponent
list' as one of its subcomponents. We then allow the subcomponent list to exist in various
versions. When we configure the object, we first select the most relevant version of the
component list; then we assemble the most relevant versions of the components appearing
on this list. This is how Lemur generalizes Sloth's import lists.
It is possible, using make and rcs, to have versions of modules where different versions
consist of completely different modules. If such is the case, different makefiles have to be
written for each version, and a lot of information has to be repeated. And a global makefile
has to be written to ensure that the right version of the makefile is used to create the
configuration. The whole process is quite complex.
With Lemur, no makefiles need be written. Everything is done automatically. An
extension to Lemur, Marmoset [21], allows different languages to be used, using a very
simple configuration file, much simpler than standard makefiles.
5.3 Bootstrapping of Lemur
To test our notion of version, Lemur was bootstrapped: we used Lemur to create versions
of itself. The original Sloth was written in a monolithic manner and did not handle
versions. It was rewritten, using the original version, into a modular form, much more
suitable for maintenance and extension. Once this version (basic Lemur, functionally the
same as Sloth) was working, it was used to create a system which allowed files to exist in
multiple versions (true Lemur). True Lemur was then used to create subversion Lemur,
which allowed Lemur to use not only basic versions, but also subversions, i.e., version
x.y, x.z, and x.y.a. A subversion of subversion Lemur was created to accept numeric
versions (numeric Lemur). A new subversion was created to accept join versions (join
Lemur). Additional variants have also been created to allow different options; these were
subsequently joined together.
5.4 Implementation
lkm builds modules one at a time, starting with those which do not depend on any other
modules. It makes the most general version possible of each module. It then goes on to the
more complex modules, still building the most general version possible; this version will,
of course, depend on the versions of the modules that it depends on. Finally, it builds the
most general possible version of the object file.
There can be situations where there is not a most general version. For example, one
could ask for the x+y version, and for one file there is an x version and a y version, but not
an x+y version. For the vm and lkm programs, this is an error condition. On the other
hand, mm asks the user if they wish to create a new file, and if so, what version of that file
should be taken as the initial copy of the new version of the file.
6 Discussion
The problem of variants of software systems is a difficult one. We claim that the language
proposed in this paper is a step in the right direction. No difference is made between
version, revision or variant. All subsystems are on the same level. If a variant evolves to
the point where it becomes a completely different product, that is just fine, and nothing
special has to be done.
Of course, this paper in no way addresses how variants and versions are to be managed,
in the sense of controlling how access to components of systems by programmers and users
is made. The ideas in this paper do not put into question the need for software databases
which restrict access to certain parts of a system so that it is not being modified in an
uncontrolled manner.
The language
Since we are using a lattice to describe our version space, one might ask if the meet of
two versions has meaning. In fact, it does: A&B would be a version which is refined by
each of A, B and A+B. For example, one could conceive of a common transliteration for
Russian and Bulgarian, where, for example, if one writes 'Dzhon' (John), the result would
be 'Джон', this scheme being defined in Russian&Bulgarian. Then if one asked for the
Russian version, then the Russian&Bulgarian file could be used if there were no Russian
one.
With the current system, it is possible to have horrendously long version names. With
the repeated use of + and %, it might be difficult to figure out what is happening!
One solution is to allow version variables; that is to introduce new versions defined in
terms of existing ones. For example, suppose that the francophone Belgian users want a
French language mouse graphics version with infinite precision arithmetic. This corresponds
to the version algebra expression French+graphics%mouse+infinite which is clear enough
but rather unwieldy. With version variables the user could introduce the definition
Belgian = French+graphics%mouse+infinite
and thereafter request the Belgian version or even use it in expressions like Belgian+fast.
In fact the same effect can be achieved more elegantly by allowing inequalities rather
than equalities. For example, the above definition can be given incrementally by the three
inequalities
Belgian ⊒ French
Belgian ⊒ graphics%mouse
Belgian ⊒ infinite
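One way to read such a definition, sketched below in Python purely for illustration (the interpretation is our assumption, not an implemented Lemur feature), is that a version variable stands for the join of its stated lower bounds.

# Illustrative sketch: a version variable defined by inequalities is expanded to the
# join of its lower bounds, kept as a canonical sorted tuple of subversion chains.

definitions = {
    "Belgian": [("French",), ("graphics", "mouse"), ("infinite",)],
}

def expand(variable):
    return tuple(sorted(set(definitions[variable])))

print(expand("Belgian"))
# (('French',), ('graphics', 'mouse'), ('infinite',))  i.e. French+graphics%mouse+infinite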
Sometimes this complexity would come about because many modules are being named,
and the versions within each module are different. In this case, the use of local version
names, defined through inequalities as above, would allow the hiding of how the software
was developed, one of the key goals of modules.
What we are proposing is a real version language, with constants and variables, with
different scopes, defined using inequalities. So the next step would obviously be to add
types. In fact, this is not surprising, since several configuration management systems allow
for typed version names. The language component of a version name, for example, could
be one of Lucid or LUSTRE.
Related work
As was said in the introduction, our approach to version control is original, so there is no
work directly related. However, there is no reason that the ideas that were developed in
this paper could not be applied to existing systems, and not just to software configuration
systems (see below). In this sense, we will consider two systems which appear to be flexible
enough for this change to be easily made: Odin and shape.
Odin [3] is a system that allows one to formalize the software configuration process.
For each object, be it an atomic object or a tool which will manipulate the objects, axioms
can be given declaring what the object does. One can then use pre- and post-conditions
to define the actions of tools. There is no reason that versioning cannot be added to the
entire system. Atomic objects could have versions, and the rules defining what programs
do would then pass the versions on. So, for example, the mm, vm and lkm operations could
then be defined in Odin.
shape [13, 14] is a system which attempts to integrate the better features of Adele, Make
and DSEE. There is an attributed file system interface, either to a standard file system or a
database, which makes access to versions transparent to the user. The version language is
in disjunctive normal form (ORs of ANDs). If this version language were changed to allow
+, and the variant substructure principle were applied, then shape would be generalised
significantly.
Some speculative remarks
There is a close connection between the notion of version discussed here and the logi-
cal/philosophical notion of a 'possible world' (see, for example, [25]). Possible worlds arise
in a branch of logic, called 'intensional logic', which deals with assertions and expressions
whose meaning varies according to some implicit context. Usually the context involves space
and time: the meaning of 'the previous president' varies according to where the statement
refers, e.g., the U.S. or France, and when it refers (1989 or 1889). Other statements, e.g.,
'my brother's former employer', require more extensive information.
Obviously the notion of possible world, in its most literal form, raises mind-boggling
philosophical questions. But taken more formally, as indicating some sort of context (time,
place, speaker, orientation), it has proved extremely useful in formalizing some hitherto
mysterious and paradoxical aspects of natural language semantics (Montague being a pioneer
in this area).
We can interpret an element of the version space as a possible world. In this possible
world, there is an 'instance' of the software in question; but this instance can differ from
the instances in other possible worlds. For example, in this world the error messages are in
English, whereas in a neighboring world they are in French. The principle described earlier
tells us how a compound object varies from one world to the next, provided we know how
its parts vary. And Lemur in a small way allows us to 'visit' one of these worlds and
construct the instance of the software, without worrying about what the software looks like
on the other worlds.
Lemur is the result of intensionalizing one tool, namely Sloth. We can surely imagine
doing the same for other tools and even, if we are ambitious, unix itself. In this
Montagunix we would specify, say with a command, which world we would like to visit,
say, the French+graphics%mouse world. Having done that, it would give us the illusion
that the appropriate instance is the only one that exists. In other words, when we examine
the source, we would find only one copy; and only one copy of the .o files, and test files
as well. These files could be scattered through a directory structure and we could move
around.
Of course behind the scenes, Montagunix would be monitoring our activity and automatically
choosing the most relevant version of every file we request. Other versions would
be hidden from us. When we create a file, it would attach the appropriate version tag to
it. Montagunix could give each developer the illusion of having their own private copy of
a project in the same way that time sharing gave users the impression of having their own
private computer. But the result is more sophisticated, because of the refinement relation
between versions. However, we do not know what the implications would be when different
users on the same network had conflicting software!
Acknowledgements
Many thanks to Gordon Brown who coded Lemur. The quality
of the code is exceptional, and we are pleased to announce that Lemur is available from
the first author.
--R
Experience with a database of programs.
Protection and cooperation in a software engineering environment.
The Odin System: An Object Manager for Extensible Software Environments.
The Representation of Families of Software Systems.
A configuration manager: the Adele data base of programs.
Structuring large versioned software products.
An editor for revision control.
Automatic deletion of obsolete information.
Integrating noninterfering versions of programs.
Organizing software in a distributed environment
An integrated toolset for engineering software configurations.
A software system modelling facility.
Efficient applicative data types.
The GANDALF project.
technique and string-to-string correction
Designing software for ease of extension and contraction.
A UNIX tool for managing reusable software components
Reducing the complexity of software configuration.
The source code control system.
Controlling Large Software Development in a Distributed Environment
Living with inconsistency in large systems.
Formal Philosophy: Selected Papers by Richard Montague.
Software development based on module interconnection.
Software Development Control based on System Structure Description.
The string-to-string correction problem with block moves
Smart recompilation.
Report on the First International Workshop on Software Version and Configuration Control.
--TR
RCS - a system for version control
Smart recompilation
The Odin system: an object manager for extensible software environments
An editor for revision control
Experience with a data base of programs
Jasmine: a software system modelling facility
technique and string-to-string correction
Workshop on software version and configuration control
An integrated toolset for engineering software configurations
Integrating noninterfering versions of programs
A Unix tool for managing reusable software components
The Adele configuration manager
The string-to-string correction problem with block moves
Protection and Cooperation in a Software Engineering Environment
Efficient applicative data types
Software development control based on module interconnection
Designing software for ease of extension and contraction
Organizing software in a distributed environment
Design, implementation, and evaluation of a Revision Control System
Computer-Aided Software Engineering in a distributed workstation environment
The representation of families of software systems.
Software development control based on system structure description
Controlling large software development in a distributed environment
--CTR
John Plaice , Blanca Mancilla, Collaborative intensional hypertext, Proceedings of the fifteenth ACM conference on Hypertext and hypermedia, August 09-13, 2004, Santa Cruz, CA, USA
Nikolaos S. Papaspyrou , Ioannis T. Kassios, GLU embedded in C++: a marriage between multidimensional and object-oriented programming, SoftwarePractice & Experience, v.34 n.7, p.609-630, June 2004 | numbered series;divergent variants;programming environment;configuration management;formal languages;version control;lattice least upper bound;world semantics;formal logic;subversions;hierarchically structured entities;modular C programs;LEMUR;intensional logic;join operation;algebraic version language;programming environments;systematic framework;variant structure principle |
631068 | An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing. | An experiment comparing the effectiveness of the all-uses and all-edges test data adequacy criteria is discussed. The experiment was designed to overcome some of the deficiencies of previous software testing experiments. A large number of test sets was randomly generated for each of nine subject programs with subtle errors. For each test set, the percentages of executable edges and definition-use associations covered were measured, and it was determined whether the test set exposed an error. Hypothesis testing was used to investigate whether all-uses adequate test sets are more likely to expose errors than are all-edges adequate test sets. Logistic regression analysis was used to investigate whether the probability that a test set exposes an error increases as the percentage of definition-use associations or edges covered by it increases. Error exposing ability was shown to be strongly positively correlated to percentage of covered definition-use associations in only four of the nine subjects. Error exposing ability was also shown to be positively correlated to the percentage of covered edges in four different subjects, but the relationship was weaker. | Introduction
Considerable effort in software testing research has focussed on the development of software
test data adequacy criteria, that is, criteria that are used to determine when software
has been tested "enough", and can be released. Numerous test data adequacy criteria
have been proposed, including those based on control flow analysis [25, 26], data flow
analysis [23, 29, 31, 34] and program mutation [9]. Tools based on several of these criteria
have been built [8, 14, 28] and many theoretical studies of their formal properties
and of certain aspects of their relations to one another have been done [6, 12, 15, 34].
But surprisingly, relatively little work has focussed on the crucial question: how good at
exposing errors are the test sets that are deemed adequate according to these criteria?
In this paper, we describe an experiment addressing this question. One factor that
makes it difficult to answer this question is that for a given program P and adequacy
criterion C, there is typically a very large number of adequate test sets. If P is incorrect,
then usually some of these test sets expose an error while others do not. Most previous
experiments have failed to sample this very large space of test sets in a statistically sound
way, and thus have given potentially misleading results. The goals of this research were
twofold: 1) to develop an experiment design that allows the error-detecting ability of
adequacy criteria to be compared in a meaningful way, and 2) to use that design to
measure and compare several adequacy criteria for a variety of subject programs.
In Section 2, below, we define a notion of the effectiveness of an adequacy criterion
that, for a given erroneous program and specification, measures the likelihood that an
adequate set will expose an error. The higher the effectiveness of criterion C, the more
confidence we can have that a program that has been tested on a C-adequate test set
without exposure of an error is indeed correct. Our experiment measured effectiveness
by sampling the space of C-adequate test sets. For each of nine subject programs, we
generated a large number of test sets, determined the extent to which each test set
satisfied certain adequacy criteria, and determined whether or not each test set exposed
an error. The data were used to measure and compare effectiveness of several adequacy
criteria and to address several related questions.
We limited our attention to three adequacy criteria. The all-edges criterion, also
known as branch testing, is a well-known technique which is more widely used in practice
than other, more sophisticated adequacy criteria. It requires the test data to cause the
execution of each edge in the subject program's flow graph. The all-uses data flow testing
criterion requires the test data to cause the execution of paths going from points at which
variables are assigned values to points at which those values are used. It has received
considerable attention in the research community and is considered promising by many
testing researchers. For comparison, we also consider the null criterion, which deems any
test set to be adequate.
The design of the experiment allowed us to address three types of related questions.
Given a subject program and a pair of criteria, C1 and C2, we investigated,
1. Overall comparison of criteria: Are those test sets that satisfy criterion C1
more likely to detect an error than those that satisfy C2? More generally, are those
test sets that satisfy X% of the requirements induced by C1 more likely to detect
an error than those that satisfy Y% of the requirements induced by C2?
2. Comparison of criteria for fixed test set size: For a given test set size, n, are
those test sets of size n that satisfy criterion C1 more likely to detect an error than
those that satisfy C2?
3. Relationship between coverage and effectiveness: How does the likelihood
that the test set detects an error depend on the extent to which a test set satisfies
a criterion and the size of the test set?
The overall comparisons of the criteria give insight into which criterion should be selected
when cost is not a factor. If C1 is more effective than C2, but C1 typically demands
larger test sets than C2, one may ask whether the increased effectiveness arises from
differences in test set sizes, or from other, more intrinsic characteristics of the criteria.
The comparison of criteria for fixed test set size addresses this issue by factoring out
differences in test set size. Lastly, investigation of the relationship between coverage
and effectiveness is useful because in practice it is not unusual to demand only partial
satisfaction of a criterion.
We believe that the results reported here should be of interest both to testing researchers
and to testing practitioners. While practitioners may be primarily interested
in the experiment's results and their implications for choosing an adequacy criterion,
researchers may also find the novel design of the experiment interesting.
This paper is organized as follows. Section 2 of this paper defines effectiveness and
reviews the definitions of the relevant adequacy criteria. Section 3 describes the design
of the experiment, Section 4 describes the statistical analysis techniques used, and Section
5 describes the subject programs on which the experiment was performed. The
experiment's results are presented in Section 6 and discussed further in Section 7. The
experiment design is compared to related work in Section 8, and conclusions are presented
in Section 9.
Background
2.1 Effectiveness of an adequacy criterion
The goal of testing is to detect errors in programs. We will say that a test case t exposes
an error in program P if, on input t, P's output is different from the specified output.
A test set T exposes an error, or is exposing, if at least one test case t in T exposes an
error.
Consider the following model of the testing process:
- A test set is generated using some test data generation technique.
- The program is executed on the test set, the outputs are checked, and the adequacy of the test set is checked.
- If at least one test case exposes an error, the program is debugged and regression tested; if no errors are exposed but the test set is inadequate, additional test cases are generated.
- This process continues until the program has been executed on an adequate test set that fails to expose an error.
At this point, the program is released. Although the program is not guaranteed to be
correct, the "better" the adequacy criterion, the more confidence one can have that it is
correct.
Note that we have explicitly distinguished between two aspects of testing: test generation
and application of a test data adequacy criterion. A test generation technique
is an algorithm which generates test cases, whereas an adequacy criterion is a predicate
which determines whether the testing process is finished. Test generation algorithms and
adequacy criteria that are based on the structure of the program being tested are called
program-based or white-box; those that are not based on the structure of the program are
called black-box. Black-box techniques are typically specification-based, although some,
such as random testing, are not. It is possible in principle to use white box techniques
as the basis for test generation. For example, one could examine the program text and
devise test cases which cause the execution of particular branches. However, it is usually
much more difficult to generate a test case that causes execution of a particular branch
than to simply check whether a branch has been executed by a given test case. Thus,
in practice, systematic approaches to test generation are usually black-box, while systematic
approaches to checking adequacy are often white-box. Black box test generation
techniques may involve devising test cases intended to exercise particular aspects of the
specification or may randomly sample the specification's domain. The testing techniques
investigated in this paper combine black box test generation techniques and white-box
adequacy criteria. In particular, we explore whether one white box adequacy criterion is
more likely than another to detect a bug when a particular (random) black box testing
strategy is used.
We now define a measure of the "goodness" of an adequacy criterion that captures
this intuition. Let P be an incorrect program whose specification is S, and let C be an
adequacy criterion. Consider all the test sets T that satisfy C for P and S. It may be
the case that some of these test sets expose an error, while others do not. If a large
percentage of the C-adequate test sets expose an error, then C is an effective criterion
for this program.
More formally, consider a given probability distribution on the space of all C-adequate
test sets for program P and specification S. We define the effectiveness of C to be the
probability that a test set selected randomly according to this distribution will expose
an error. In practice, test sets are generated using a particular test generation strategy
G, such as random testing with a given input distribution, some other form of black-box
testing, or a systematic white-box strategy. This induces a distribution on the space of
C-adequate test sets. We define the effectiveness of criterion C for P and S relative to
test generation strategy G to be the probability that a C-adequate test set generated by
G will expose an error in P. In this paper, we will be concerned with effectiveness of
criteria relative to various random test generation strategies.
To see that this notion of effectiveness captures the intuition of the "goodness" of an
adequacy criterion, let p_C(P) denote the effectiveness of criterion C for program P. The
probability that a randomly selected C-adequate test set T will not expose an error, i.e.,
that we will release P, treating it as if it were correct, is 1 − p_C(P). In particular, if P
is incorrect, this is the probability that after testing with a C-adequate test set we will
mistakenly believe that P is correct. Now suppose p_C1(P) ≥ p_C2(P) for every program P
in some class P. Then, since 1 − p_C1(P) ≤ 1 − p_C2(P), the probability of our mistakenly
believing P to be correct after using C2 as an adequacy criterion is at least as great as
if we had used C1 as the adequacy criterion. Thus after testing P without exposing an
error, we can have at least as much confidence that P is correct if we used criterion C1 as
if we used criterion C2. Weiss has defined a more general notion of the effectiveness of an
adequacy criterion and discussed its relation to confidence in program correctness [37].
Most previous comparisons of adequacy criteria have been based on investigating
whether one criterion subsumes another. Criterion C1 subsumes criterion C2 if, for every
program P and specification S, every test set that satisfies C1 also satisfies C2. It might
seem, at first glance, that if C1 subsumes C2, then C1 is guaranteed to be more effective
than C2 for every program. This is not the case. It may happen that for some program P,
specification S, and test generation strategy G, test sets that only satisfy C2 may
be better at exposing errors than those that satisfy C1. Hamlet has discussed related
issues [21]. Weyuker, Weiss, and Hamlet [40] and Frankl and Weyuker [17, 16] have
further examined the relationship between subsumption and error-detecting ability.
2.2 Definitions of the adequacy criteria
This study compares the effectiveness of three adequacy criteria: the all-edges criterion,
the all-uses criterion, and the null criterion. Two of these criteria, all-edges and all-
uses, are members of a family of criteria, sometimes called structured testing criteria,
that require the test data to cause execution of representatives of certain sets of paths
through the flow graph of the subject program. The null criterion considers any test
set to be adequate; thus application of the null criterion is the same as not using any
adequacy criterion at all. We have included the null criterion in this study in order to
allow comparison of all-edges and all-uses adequate sets to arbitrary sets.
The all-edges criterion, also known as branch testing, demands that every edge in the
program's flow graph be executed by at least one test case. All-edges is known to be a
relatively weak criterion, in the sense that it is often easy to devise a test set that covers
all of the edges in a buggy program without exposing the bug. A much more demanding,
but completely impractical, criterion is path testing, which requires the execution of every
path through the program's flow graph.
In an effort to bridge the gap between branch-testing and path-testing, Rapps and
Weyuker [34] defined a family of adequacy criteria based on data flow analysis similar
to that done by an optimizing compiler. The all-uses criterion belongs to this family. (The original criteria were defined for programs written in a simple language; the definitions were subsequently extended by Frankl and Weyuker [15] to programs written in Pascal, and we adopt their conventions and notation here.)
Other data flow testing criteria have also been defined [23, 29, 31]. Roughly speaking,
these criteria demand that the test data exercise paths from points at which variables are
defined to points at which their values are subsequently used. Occurrences of variables in
the subject program are classified as being either definitions, in which values are stored,
or uses, in which values are fetched. For example, a variable occurrence on the left-hand
side of an assignment statement is a definition; a variable occurrence on the right hand
side of an assignment statement or in a Boolean expression in a conditional statement is
typically a use. A definition-use association (dua) is a triple (d,u,v) such that d is a node
in the program's flow graph in which variable v is defined, u is a node or edge in which v
is used, and there is a definition-clear path with respect to v from d to u. A test case t
covers dua (d,u,v) if t causes the execution of a path that goes from d to u without passing
through any intermediate node in which v is redefined. The all-uses criterion demands
that the test data cover every dua in the subject program.
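As a concrete illustration of dua coverage (our sketch, not ASSET's implementation), the following Python function checks whether an executed path, represented as a list of flow-graph node ids, covers a dua whose use is at a node; uses on edges would be handled analogously.

# Illustrative sketch: the dua is (d, u, v); defs_of_v is the set of nodes that
# (re)define v.  The dua is covered if the path reaches d and then reaches u
# with no intervening redefinition of v.

def covers_dua(path, d, u, defs_of_v):
    for i, node in enumerate(path):
        if node != d:
            continue
        for later in path[i + 1:]:
            if later == u:
                return True            # the use was reached definition-clear
            if later in defs_of_v:
                break                  # v was redefined before reaching the use
    return False

path = [1, 2, 3, 5, 2, 4, 6]           # hypothetical execution trace
print(covers_dua(path, d=2, u=6, defs_of_v={2}))   # True, via the subpath 2, 4, 6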
We had previously designed and implemented a tool, ASSET [11, 12, 14], that checks
the extent to which a given test set for a given Pascal program satisfies all-uses and various
other data flow testing criteria. ASSET analyzes the subject program to determine all of
the definition-use associations in a particular program unit and builds a modified program
whose functionality is identical to the original subject program except that it also outputs
a trace of the path followed when a test case is executed. After executing the modified
program on the given test set, ASSET analyzes the traces to determine the extent to
which the adequacy criterion has been satisfied and outputs a list of those definition-use
associations that still need to be covered. For this experiment, we modified ASSET
so that it could also check whether a test set satisfies all-edges and replaced ASSET's
interactive user interface by a batch interface.
One problem with the all-edges and all-uses criteria, as originally defined, is that
for some programs, no adequate test set exists. This problem arises from infeasible
paths through the program, i.e., paths that can never be executed. The problem is
particularly serious for the all-uses criterion because for many commonplace programs,
no adequate test set exists. For example, the problem occurs with any program having
a for loop in which the lower and upper bounds are non-equal constants. Frankl and
Weyuker defined a new family of criteria, the feasible data flow testing criteria, which
circumvents this problem by eliminating unexecutable edges or definition-use associations
from consideration [12, 15], and showed that under reasonable restrictions on the subject
program, the feasible version of all-uses subsumes the feasible version of all-edges.
It is important to note that the original (infeasible) criteria are not really used in
practice; they do not apply to those programs that have infeasible edges or duas, and
they are the same as the feasible versions for other programs. Ideally, testers should
examine the program to eliminate infeasible edges and duas from consideration. In
reality, they often stop testing when some arbitrary percentage of the edges or duas has
been covered, without investigating whether the remaining edges/duas are infeasible, or
whether they indicate deficiencies in the test set. For this reason, we felt that it was also
important to examine the relationship between the percentage of the duas covered by a
test set and its likelihood of exposing an error. In the remainder of this paper, we will,
by abuse of notation, use the terms all-edges and all-uses to refer to the feasible versions
of these criteria.
3 Experiment Design
The goal of this experiment was to measure and compare the effectiveness of various
adequacy criteria relative to random test generation strategies for several different subject
programs. To measure the effectiveness of criterion C, we can
- generate a large number of C-adequate test sets,
- execute the subject program on each test set,
- check the outputs and consider a test set exposing if and only if the program gives the wrong output on at least one element of the test set, and
- calculate the proportion of C-adequate test sets that are exposing.
If the proportion of C1-adequate test sets that expose an error is significantly higher
than the proportion of C2-adequate test sets that expose an error, we can conclude that
criterion C1 is more effective than C2 for the given program and test generation strategy.
In the present experiment, we generated test sets randomly and compared the all-
edges, all-uses, and null criteria. The programs on which we experimented, and the exact
notion of "randomly generated test sets" used for each are described in Section 5 below.
We collected the data in such a way as to allow for comparison of a family of variants
of all-edges and all-uses. Rather than just checking whether or not each test set satisfied
all-edges (all-uses), we recorded the number of executable edges (duas) covered by the
test set. This allowed us to use the collected data to measure not only the effectiveness
of all-edges and all-uses, but also of such criteria as X% edge coverage and Y% dua
coverage. In addition it allowed us to investigate the correlation between percentage of
edges (duas) covered and error exposing ability.
For each subject program, we first identified the unexecutable edges and duas and
eliminated them from consideration. We then generated a large set of test cases called the
universe, executed each test case in the universe, checked its output, recorded whether it
was correct, and saved a trace of the path executed by that test case. (In two of the subjects, matinv1 and determinant, there were a few executable duas that were not covered by any element of the universe; we dealt with this by considering any such dua to be unexecutable.) We sampled the
space of adequate test sets as follows: we selected various test set sizes, n, then for each
n, generated many test sets by randomly selecting n elements from the universe, using a
uniform distribution. Note that we did not use a uniform distribution on the space of test
sets, but rather, used a distribution that arises from a practical test generation strategy.
We then determined whether or not each test set was exposing and used ASSET to check
how many executable edges and duas were not covered by any of the paths corresponding
to the test set.
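The sampling step can be summarised by the following Python sketch. It is ours, not the experiment's actual tooling: covered_by(t) and fails(t) are hypothetical helpers standing in for the saved traces and the output checks, and all_items is the set of executable edges or duas under consideration.

# Illustrative sketch of drawing random test sets from the universe and recording,
# for each one, its size, the fraction of executable items (edges or duas) it
# covers, and whether it exposes an error.
import random

def sample_test_sets(universe, size, trials, all_items, covered_by, fails):
    rows = []
    for _ in range(trials):
        test_set = random.sample(universe, size)
        covered = set()
        for t in test_set:
            covered |= covered_by(t)
        rows.append({
            "size": size,
            "coverage": len(covered) / len(all_items),
            "exposing": any(fails(t) for t in test_set),
        })
    return rows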
Some care was necessary in choosing appropriate test set sizes. If the generated test
sets are too small, then relatively few of them will cover all edges (all duas), so the
results will not be statistically significant. On the other hand, if the test sets are too
large, then almost all of them will expose errors, making it difficult to distinguish between
the effectiveness of the two criteria. To overcome this problem, we generated our test
sets in "batches", where each batch contained sets of a fixed size. After observing which
sizes were too large or too small, we generated additional batches in the appropriate size
range, if necessary. This "stratification" of test sets by size also allowed us to investigate
whether all-uses is more effective than all-edges for test sets of a given size.
The design of the experiment imposed several constraints on the subject programs:
- The input domain of the program had to have a structure for which there was some
reasonable way to generate the universe of test cases. For example, while there
are several reasonable ways to randomly generate matrices, it is less clear how to
randomly generate inputs to an operating system.
- Because of the large number of test cases on which each program was executed, it
was necessary to have some means of automatically checking the correctness of the
outputs.
- The failure rate of the program had to be low; i.e., errors had to be exposed by
relatively few inputs in the universe. Otherwise, almost any test set that was
big enough to satisfy all-edges would be very likely to expose an error. We were
surprised to discover that many of the programs we considered as candidates for
this experiment, including many that had been used in previous software quality
studies, had to be rejected because of their high failure rates.
The available tool support (ASSET) imposed an additional constraint - the subject
programs had to be either written in Pascal or short enough to translate manually. When
translation was necessary, program structure was changed as little as possible.
Note that our experiment considered the effectiveness of all-edges adequate sets in
general, not the effectiveness of those all-edges adequate sets that fail to satisfy all-uses.
This models the situation in which the tester releases the program when it has passed
an all-edges adequate test set without caring whether or not the test set also satisfies
all-uses. In an alternative model, the tester would classify a test set as all-edges adequate
only if it satisfied all-edges and did not satisfy all-uses. If all-uses is shown to be more
effective than all-edges using our model, then the difference would be even greater if we
were to use the alternative model.
Also note that our design introduces a bias in favor of all-edges. We used test sets
that were big enough to insure the selection of a statistically significant number of all-uses
adequate sets, not just a significant number of all-edges adequate sets. This resulted in
the selection of many all-edges adequate sets that were bigger, thus more likely to expose
an error, than the all-edges adequate sets that would be selected by a practitioner using
our model of the testing process.
4 Data Analysis Techniques
Recall that we are interested in comparing the effectiveness of all-uses to all-edges and
to the null criterion for each of a variety of subject programs. We treat each subject
program's data as that of a separate experiment. Throughout this section, when we
refer to the effectiveness of a criterion we mean its effectiveness for a particular program
and test generation strategy. For clarity, we describe the techniques used to compare
all-uses with all-edges. The techniques for comparing all-uses and all-edges to the null
criterion and for comparing coverage of X% of the duas to coverage of Y% of the edges
are identical.
If we have randomly chosen N C-adequate test sets, and X is the number of these that
exposed at least one error, then p̂_C = X/N, the sample proportion, is a good estimator
of p_C, the effectiveness of C. In fact, if the probability that a C-adequate test set exposes
an error is governed by a binomial distribution, then p̂_C is a minimum variance unbiased
estimator of the effectiveness of C [3], i.e., it is a good statistic for estimating effectiveness.
4.1 Overall comparison of criteria
The first question posed was whether or not all-uses adequate test sets are significantly
more effective than all-edges adequate test sets. Let p̂_u be the proportion of all-uses
adequate sets that exposed an error and let p̂_e be the proportion of all-edges adequate
sets that exposed an error. If p̂_u is significantly higher than p̂_e, then there is strong
statistical evidence that all-uses is more effective than all-edges. If not, the data do not
support this hypothesis.
This observation suggests that hypothesis testing techniques are suitable for answering
this question. In hypothesis testing, a research, or alternative, hypothesis is pitted against
a null hypothesis, and the data are used to determine whether one hypothesis is more
likely to be true than the other. Our research hypothesis, that all-uses is more effective
than all-edges, is expressed by the assertion p_e < p_u. The null hypothesis is that the
two criteria are equally effective, expressed by p_e = p_u. Note that we chose a one-sided
test because we wanted to know whether all-uses is more effective than all-edges, not
just whether they are different. It is important to realize that the goal in hypothesis
testing is quite conservative; we uphold the null hypothesis as true unless the data are
strong testimony against it, in which case we reject the null hypothesis in favor of the
alternative.
Since we are using the sample proportions as estimators of the effectiveness of the
criteria, our decision to accept or reject the null hypothesis reduces to a decision as to
whether or not the difference between the sample proportions is significantly large. In
particular, we should reject the null hypothesis if p̂_u − p̂_e is greater than some prespecified
critical value.
Using a standard statistical technique for establishing the critical values [5], we call
a sample sufficiently large if there are at least five exposing and five unexposing test
sets in the sample. For sufficiently large samples, the difference p̂_u − p̂_e is approximately
normally distributed, with mean p_u − p_e and variance σ² = σ²_u + σ²_e. If we assume that the
parent populations are binomial, then σ²_i = p_i(1 − p_i)/N_i, where N_i is the population
size. This enables us to calculate critical values, significance probabilities and confidence
intervals for p_u − p_e. The significance probabilities indicate the strength of the evidence
for rejection of hypotheses, and the confidence intervals give an indication of how much
better one criterion is than the other, if at all. To be conservative in our interpretation of
the data, we chose a significance level of 0.01, meaning that if the null hypothesis is
rejected, the probability that all-edges is actually as effective as all-uses is at most 1/100.
In several of our subjects, every all-uses adequate test set exposed an error, so that
the normal approximation could not be used. In these cases, we calculated confidence
intervals separately for p_u and p_e. Inspection of the data showed that all-uses was clearly
more effective than all-edges for these subjects, making further analysis unnecessary.
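For concreteness, a standard one-sided two-proportion test based on the normal approximation just described can be written in a few lines of Python; this is our sketch rather than the authors' analysis code, and the counts in the example call are hypothetical.

# Illustrative sketch: X_u of N_u all-uses adequate sets and X_e of N_e all-edges
# adequate sets exposed an error.  Test H0: p_u = p_e against H1: p_u > p_e.
from math import sqrt, erf

def one_sided_z_test(x_u, n_u, x_e, n_e):
    p_u, p_e = x_u / n_u, x_e / n_e
    se = sqrt(p_u * (1 - p_u) / n_u + p_e * (1 - p_e) / n_e)
    z = (p_u - p_e) / se
    p_value = 1 - 0.5 * (1 + erf(z / sqrt(2)))   # P(Z >= z) under the standard normal
    return z, p_value

z, p = one_sided_z_test(x_u=180, n_u=200, x_e=150, n_e=200)
print(round(z, 2), format(p, ".2g"))             # reject H0 at the 0.01 level if p < 0.01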
4.2 Comparison of criteria for fixed size test sets
The second question we asked dealt with the effect of test set size on the previous results.
The all-uses adequacy criterion in general requires larger test sets than does the all-edges
criterion. Since the probability that a test set exposes an error increases as its size
increases, for some subjects all-uses may be more effective than all-edges simply because
it demands larger test sets. On the other hand, the increased effectiveness of all-uses
may result from the way the criterion subdivides the input domain [39].
To determine whether differences in the effectiveness of the criteria were primarily
due to differences in the sizes of adequate test sets, we analyzed the data on a "by-size"
basis. In Tables 6, 7, 8, we display the sample data for each of the subject programs
by size, arranging close sizes into groups. The intent of this table is to give descriptive
evidence of the relationship between all-uses and all-edges for fixed size test sets. Where
there was enough data, we also did hypothesis testing on the individual size groups and
reported the results in the right hand columns.
4.3 Relationship between coverage and effectiveness
The third question to be answered is whether there is a relationship between the extent
to which a test set satisfies the all-uses (or all-edges) criterion and the probability that
the test set will expose an error. This is the most difficult of the questions, and the
technique we employed to answer it is logistic regression.
A regression model gives the mean of a response variable in a particular group of
variables as a function of the numerical characteristics of the group. If Y is the response
variable and x_1, ..., x_k are the predictors, we denote the mean of Y, given fixed
values x̄ = (x_1, ..., x_k), by μ_{Y|x̄}. Ordinary linear (or higher order) regression models are
not suitable for data in which the response variable takes on yes-no type values such as
"exposing" or "not exposing", in part because regression equations such as
μ_{Y|x̄} = β_0 + β_1 x_1 + ... + β_k x_k        (1)
put no constraints on the value of μ_{Y|x̄}. The right hand side can take on any real
value whereas the left hand side must lie between 0 and 1. The right hand side is also
assumed to follow a normal distribution whereas the left hand side generally does not.
There are other serious problems that make linear regression a poor choice for modeling
proportions [1].
Logistic regression overcomes these problems and provides many important advantages
as well. In logistic regression the left hand side of Equation 1 is replaced by the
logit of the response variable,

    logit(μ_{Y|x̄}) = log [ μ_{Y|x̄} / (1 − μ_{Y|x̄}) ],

and the right hand side can be any real-valued function of the predictors. The expression
μ_{Y|x̄} / (1 − μ_{Y|x̄}) is frequently called the odds ratio of the response variable. Because μ_{Y|x̄} lies between 0
and 1, the odds ratio can assume any positive real value, and so its logarithm, the logit,
can assume any real value. Thus, in logistic regression, Equation 1 becomes one in which
neither left nor right hand side is constrained. Algebraic manipulation shows that if

    log [ μ_{Y|x̄} / (1 − μ_{Y|x̄}) ] = f(x̄)

then

    μ_{Y|x̄} = exp f(x̄) / (1 + exp f(x̄)).

This equation is the regression equation, and the goal is to find the "simplest" function
f(x̄) that explains the data well.
In our analysis, we treated test set size and fraction of coverage of definition-use
associations (or edges) as the predictor variables, and used logistic regression to determine
the extent, if any, to which the probability of exposing an error was dependent upon these
variables. We used the CATMOD module of SAS to assist with the regression analysis,
using maximum likelihood estimation, and measuring goodness of fit with χ² tests of the
model parameter estimates and of the likelihood ratio.
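To make the model form concrete (an illustration only, not the SAS/CATMOD analysis used in the study), the sketch below fits the logistic model by Newton-Raphson maximum likelihood on invented data, with predictors s (test set size) and c (coverage); all names and numbers are hypothetical.

    # Hypothetical sketch: maximum-likelihood logistic regression of the
    # exposing / not-exposing outcome on test set size s and coverage c.
    import numpy as np

    def fit_logistic(X, y, iters=25):
        """Newton-Raphson ML fit of P(Y=1|x) = 1 / (1 + exp(-x.beta))."""
        X = np.column_stack([np.ones(len(X)), X])   # intercept term
        beta = np.zeros(X.shape[1])
        for _ in range(iters):
            p = 1.0 / (1.0 + np.exp(-X @ beta))     # predicted probabilities
            W = p * (1 - p)                          # IRLS weights
            grad = X.T @ (y - p)
            hess = X.T @ (X * W[:, None])
            beta += np.linalg.solve(hess, grad)      # Newton step
        return beta

    # Toy data: probability of exposing rises with coverage c and size s.
    rng = np.random.default_rng(0)
    s = rng.integers(1, 25, size=500)
    c = rng.uniform(0.3, 1.0, size=500)
    p_true = 1.0 / (1.0 + np.exp(-(-6.0 + 0.1 * s + 6.0 * c)))
    y = rng.binomial(1, p_true)
    print(fit_logistic(np.column_stack([s, c]), y))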
5 Subject Programs
Our nine subjects were obtained from seven programs, all of which had naturally occurring
errors. Three of our programs were drawn from the Duran and Ntafos study [10];
high failure rates made the rest of the Duran-Ntafos subjects unsuitable for our experi-
ment. The programs buggyfind, textfmt, and transpose, described below, were used
by Duran and Ntafos; the remainder came from other sources. We obtained two subjects
from buggyfind by using two different input distributions. Recall that ASSET monitors
coverage of edges or duas in a single program unit. We obtained two subjects from a
matrix inversion program by instrumenting two different procedures.
In this section, we describe the programs, the procedures for selecting random test
data for them, and the method used to check outputs. Table 1 gives the numbers of lines
of code, edges, duas, executable edges, executable duas in the instrumented procedures,
and proportions of failure causing test cases in each universe.
subject      LOC   edges   duas   exec edges   exec duas   failure rate
detm          28
strmtch2
textfmt       26      21     50
transpose     78      44     97           42          88          0.023
Table 1: Subject Programs
5.1 Buggyfind
Hoare's find program [24] takes as input an array a and an index f and permutes the
elements of a so that all elements to the left of position f are less than or equal to
a[f] and all elements to the right of position f are greater than or equal to a[f]. Boyer,
Elspas, and Levitt [4] analyzed an erroneous variant of this program, dubbed Buggyfind,
which represented a failed attempt to translate Hoare's program into LISP.
For our experiment, we translated the LISP version into Pascal, and tested it using
two different distributions of random inputs. For find1, the test universe consisted of
1000 randomly generated arrays, where the array sizes were randomly selected between
zero and 10, the elements were randomly selected in the range 1 to 100, and the values of
f were randomly selected between 1 and the array size. For find2 the universe contained
one test case for each array with elements selected from {0, 1, 2, 3, 4} and size n,
for each n from 0 to 5. This distribution more closely approximates uniform random
selection from all test cases with array sizes from 0 to 5.
In both find1 and find2 we checked the output by checking whether all elements
to the left of position f were less than or equal to a[f] and all elements to the right of
position f were greater than or equal to a[f].
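A sketch of the kind of output check described above is shown below; it assumes 1-based indexing of f, as in the Pascal subjects, and treats the empty array as trivially correct, which is our own assumption.

    # Hypothetical sketch of the find1/find2 output check: verify the
    # partition property around position f (1-based).
    def find_output_ok(a, f):
        if not a:                      # empty array: nothing to check
            return True
        pivot = a[f - 1]
        return (all(x <= pivot for x in a[:f - 1]) and
                all(x >= pivot for x in a[f:]))

    print(find_output_ok([1, 2, 3, 5, 4], 3))   # True: 1,2 <= 3 and 5,4 >= 3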
5.2 TextFormat
Goodenough and Gerhart [19] analyzed an erroneous text formatting program. They
identified seven problems with the program and traced them to five faults. Four of these
faults either were too blatant to be useful for this experiment, or could not be replicated in
Pascal versions of the program. Our textfmt program is a Pascal version of Goodenough
and Gerhart's corrected textfmt program, in which we have re-inserted the remaining
fault. This fault, which corresponds to Goodenough and Gerhart's problems N5 and N6,
causes leading blanks and adjacent blanks/line-breaks to be handled improperly. We
would have liked to re-insert the other faults to produce additional subject programs,
but either they could not be replicated in Pascal, or they led to failure rates that were
too high.
Each test case was a piece of text, 15 characters long, generated by repeated uniform
random selection of a character from the set consisting of uppercase letters, lowercase
letters, and blank and newline characters. Outputs were checked by comparing the output
text to a correct version, using the UNIX diff command.
5.3 Transpose
The next subject program was a transpose routine from a sparse matrix package, Algorithm
408 of the Collected Algorithms of the ACM [30], in which two faults had subsequently
been identified [20]. We translated the corrected FORTRAN program into
Pascal, and re-introduced one of the faults. Our universe could not expose the other
alleged fault. The failure occurs when the first row of the matrix consists entirely of
zeros.
Since the sparse matrix package was designed to reduce memory storage for matrices
whose densities do not exceed 66%, we chose the test cases randomly from among the
set of all R by C matrices with densities between 0 and 66%.
The matrix transpose package required that C be at most R. Positions of the zeros in the
matrices were chosen randomly, and the non-zero entries were filled with their row-major
ordinal values. To check the outputs we simply compared elements M[i,j] and M[j,i] for
all i and j.
5.4 String-matching programs
Two of our subject programs were brute-force string-matching programs. They input
some text and a pattern and are supposed to output the location of the first occurrence
of the pattern in the text, if it occurs, and zero if the pattern never occurs. The first
subject, strmtch1, resulted from a flawed attempt to modify a string-matching program
from a Pascal textbook [7] so that it could handle variable length texts and patterns.
The error occurs when a pattern of length zero is entered; in this case the program
returns the value two. Note that there are several reasonable specifications of what the
program should do in this case, but returning the value two is not among them. We had
previously observed that the all-uses criterion is guaranteed to expose this error, because
one of the duas can only be executed when the pattern length is zero. We did not know
the effectiveness of all-edges or the null criterion for this program.
Our second erroneous string match program, strmtch2, reflects a different error that
also occurred naturally. In the implementation, the maximum length of a pattern is
shorter than it should be, so the program is sometimes working with a truncated version
of the pattern, hence sometimes erroneously reports that it has found a match.
For each of the string-matching programs, the universe consisted of all (text, pattern)
pairs on a two letter alphabet with text length and pattern length ranging from zero to
four. Outputs were checked by comparing them to those produced by a correct program.
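For concreteness, the universe just described can be enumerated as sketched below (illustrative code, not the harness used in the experiment); with a two-letter alphabet and lengths 0 through 4 there are 31 strings of each kind, hence 961 (text, pattern) pairs.

    # Hypothetical sketch: enumerate all (text, pattern) pairs over a
    # two-letter alphabet with lengths 0 through 4.
    from itertools import product

    def words(alphabet, max_len):
        for n in range(max_len + 1):
            for w in product(alphabet, repeat=n):
                yield "".join(w)

    universe = [(t, p) for t in words("ab", 4) for p in words("ab", 4)]
    # 1 + 2 + 4 + 8 + 16 = 31 strings of each kind, so 31 * 31 = 961 pairs.
    print(len(universe))   # 961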
5.5 Matrix Manipulation
Three of the subject programs were derived from a mathematical software package written
as a group project in a graduate software engineering course. One of the programs in
this package was a matrix inversion program, which used LU decomposition [33]. The
error in this program was not just an implementation error, but rather was a case of
choosing a known algorithm that did not quite meet the specification. The problem arose
because the LU decomposition algorithm detects some, but not all, singular matrices.
Thus, given some singular matrices, the program returns an error message, while given
others, it returns a matrix that is purported to be the inverse of the input matrix. It
is interesting to note that several well-known numerical methods textbooks [33] describe
the LU decomposition algorithm with at most a brief mention of the singularity problem;
it is thus very easy to imagine a professional programmer misusing the algorithm.
The algorithm has two steps, called decomposition and backsolving. The decomposition
step, implemented in procedure ludcmp, returns the LU decomposition of a row-wise
permutation of the input matrix. In the backsolving step, achieved by repeated calls to
the procedure lubksb, the triangular matrices are used to compute the inverse.
In subject program matinv1, procedure ludcmp was instrumented, while in matinv2,
lubksb was instrumented. In both cases, test sets were drawn from the same universe,
which consisted of square matrices with sizes uniformly selected between 0 and 5 and with
integer entries selected uniformly between 0 and 24. Outputs were checked by multiplying
the output matrix by the input matrix, and comparing to the identity matrix.
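The output check can be sketched as follows; the numerical tolerance is our own assumption, since the comparison rule used by the original Pascal checker is not specified here.

    # Hypothetical sketch of the matinv output check: multiply the returned
    # inverse by the input and compare with the identity, within a tolerance.
    import numpy as np

    def inverse_output_ok(a, a_inv, tol=1e-9):
        n = a.shape[0]
        return np.allclose(a_inv @ a, np.eye(n), atol=tol)

    a = np.array([[2.0, 1.0], [1.0, 3.0]])
    print(inverse_output_ok(a, np.linalg.inv(a)))   # True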
Subject program determinant used the LU decomposition to compute the determinant
of a square matrix. The program operated by calling the ludcmp procedure,
then multiplying the diagonal elements of the resulting lower-triangular matrix. Like
the matrix inversion program, determinant produces an error message on some singular
matrices, but computes the determinant incorrectly on others. While the errors in these
programs are related to one another, it is worth noting that the sets of inputs on which
the two programs fail are not identical.
We instrumented the ludcmp procedure, and generated another universe in the same
way as the universe used for the matrix inversion subjects. To check the outputs, we
compared them to results obtained using an inefficient but correct program based on
calculating minors and cofactors.
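The "inefficient but correct" reference computation can be sketched by cofactor expansion, as below; treating the 0-by-0 matrix as having determinant 1 is a convention we assume for the size-0 inputs in the universe.

    # Hypothetical sketch of a determinant computed by cofactor expansion
    # along the first row, as used to check the detm outputs.
    def det(m):
        if len(m) == 0:
            return 1                      # convention for the empty matrix
        if len(m) == 1:
            return m[0][0]
        total = 0
        for j in range(len(m)):
            minor = [row[:j] + row[j + 1:] for row in m[1:]]
            total += (-1) ** j * m[0][j] * det(minor)
        return total

    print(det([[1, 2], [3, 4]]))          # -2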
6 Results
The results of the experiment are presented below, organized into three subsections corresponding
to each of the three types of questions we asked initially:
1. Are those test sets that satisfy criterion C1 more likely to detect an error than
those that satisfy C2?
2. For a given test set size, are those test sets that satisfy criterion C1 more likely to
detect an error than those that satisfy C2?
3. How does the likelihood that the test set detects an error depend on the extent to
which a test set satisfies the criterion?
In each of these subsections we present and describe tables that summarize the data and
its statistical analysis, and we interpret this analysis.
6.1 Overall comparison of criteria
Tables
2, 3, and 4 summarize the results of the comparisons of effectiveness of all-uses
to all-edges, all-uses to null, and all-edges to null. The columns labeled N_e, N_u, and
N_0 give the total numbers of adequate test sets for criteria all-edges, all-uses, and null,
respectively, and the columns labeled p̂_e, p̂_u, and p̂_0 give the proportions of these that
expose errors. The sixth column of each table gives the significance probability, where
applicable; an entry of * indicates that hypothesis testing could not be applied. A
"yes" in the column labeled "p e ! p u ?" indicates that all-uses is significantly more effective
than all-edges. The columns labeled "p analogous
questions. Where the answer to this question is "yes", confidence intervals are shown
in the last column. In those cases where the normality assumption held, a confidence
interval for the difference in effectiveness between the two criteria (e.g., p_u − p_e) is shown,
while in the other cases, confidence intervals around the effectiveness of each criterion
are shown. For example, the first row of Table 2 indicates that for determinant we are
99% confident that the effectiveness of all-edges lies between 0 and 0.08, whereas that of
all-uses lies between 0.52 and 1.0. The second row indicates that for find1 we are 99%
confident that the effectiveness of all-uses is between 0.06 and 0.16 greater than that of
all-edges.
Examination of these tables shows that for five of the nine subjects, all-uses was more
effective than all-edges at 99% confidence; for six subjects, all-uses was more effective
than the null criterion; and for five subjects all-edges was more effective than null. Note
that for strmtch2 all-uses would be considered more effective than all-edges at 98%
confidence. Further interpretation of these results is given in Section 7.
Subj.      N_e     p̂_e     N_u     p̂_u     Sig.   p_e < p_u?   Confidence
detm 169 0.041 7 1.000 * yes [0.00,0.08] vs. [0.52,1.00]
find1 1678 0.557 775 0.667 0.000 yes [0.06,0.16]
find2 3182 0.252 43 0.256 0.476 no
matinv1 3410 0.023 76 1.000 * yes [0.02,0.03] vs. [0.94,1.00]
strmtch1 1584 0.361 238 1.000 * yes [0.33,0.39] vs. [0.98,1.00]
strmtch2 1669 0.535 169 0.615 0.015 no
textfmt 1125 0.520 12 1.000 * yes [0.48,0.56] vs. [0.68,1.00]
transpose 1294 0.447 13 0.462 0.456 no
Table
2: All-edges vs. All-uses
Subj.      N_0     p̂_0     N_u     p̂_u     Sig.   p_0 < p_u?   Confidence
detm 6400 0.032 7 1.000 * yes [0.03,0.04] vs. [0.52,1.00]
find1 2000 0.484 775 0.667 0.000 yes [0.13,0.24]
find2 3500 0.234 43 0.256 0.366 no
matinv1 4000 0.020 76 1.000 * yes [0.01,0.03] vs. [0.94,1.00]
matinv2 5000 0.001 4406 0.001 0.500 no
strmtch1 2000 0.288 238 1.000 * yes [0.26,0.31] vs. [0.98,1.00]
strmtch2 2000 0.456 169 0.615 0.000 yes [0.06,0.26]
textfmt 2000 0.391 12 1.000 * yes [0.36,0.42] vs. [0.68,1.00]
transpose 3000 0.407 13 0.462 0.336 no
Table
3: Null Criterion vs. All-uses
Subj.      N_0     p̂_0     N_e     p̂_e     Sig.   p_0 < p_e?   Confidence
detm 6400 0.032 169 0.041 0.255 no
find1 2000 0.484 1678 0.557 0.000 yes [0.03,0.12]
strmtch1 2000 0.288 1584 0.361 0.000 yes [0.03,0.11]
strmtch2 2000 0.456 1669 0.535 0.000 yes [0.04,0.12]
textfmt 2000 0.391 1125 0.520 0.000 yes [0.08,0.18]
transpose 3000 0.407 1294 0.447 0.006 yes [0.00,0.08]
Table
4: Null Criterion vs. All-edges
Subj.      N_e'    p̂_e'    N_u'    p̂_u'    Sig.   p_e' < p_u'?   Confidence
detm 6304 0.033 168 0.042 0.259 no
strmtch1 1960 0.293 1424 0.384 0.000 yes [0.05,0.13]
strmtch2 1950 0.465 795 0.564 0.000 yes [0.05,0.15]
textfmt 1772 0.429 123 1.000 * yes [0.40,0.46] vs. [0.96,1.00]
transpose 2913 0.411 100 0.420 0.428 no
Table
5: All-but-two duas vs. All-but-two edges
We next compare test sets that cover X% of the duas to test sets that cover Y % of the
edges. In particular, Table 5 compares test sets that cover all but two duas to those that
cover all but two edges. For example, since determinant has 103 executable duas and
executable edges, the table compares 98% dua coverage to 97% edge coverage for that
program. Note that although there was only one program, strmtch2, for which the result
of hypothesis testing changed from "yes" to "no" in going from 100% coverage to "all-
but-two" coverage, the effectiveness of all-uses fell dramatically for several subjects. On
the other hand, in one subject, find2, "all-but-two" dua coverage was actually slightly
more effective than 100% dua coverage. Additional comparisons of X% dua coverage to
Y % edge coverage can be made by examining the raw data [38].
6.2 Comparison of criteria for fixed size test sets
In
Tables
6, 7, and 8, the test sets are grouped according to their sizes. In four of the
nine subjects, all-uses adequate test sets are more effective than all-edges adequate sets
and null-adequate sets of similar size. Thus it appears that in four of the five subjects for
which all-uses was more effective than all-edges and in four of the six for which all-uses
was more effective than the null criterion, the improved effectiveness can be attributed
to the inherent properties of all-uses, not just to the fact that all-uses adequate test sets
are larger on the average than all-edges and null adequate test sets. In contrast, all-edges
adequate sets were not more effective than null-adequate sets of the same size for any of
the subjects. This indicates that, in those cases where all-edges was more effective than
null, the increased effectiveness was primarily due to the fact that all-edges demanded
larger test sets than the null criterion.
Subj.      size    N_e     p̂_e     N_u     p̂_u     Sig.   p_e < p_u?
19-24
19-24 579 0.021 12 1.000 * yes
strmtch2 1-5 238 0.366 1 1.000 *
transpose 10-20 359 0.306 1 0.000 *
Table 6: All-edges vs. All-uses By Size
detm 1-6 6100
19-24 600 0.020 12 1.000 * yes
strmtch2 1-5 1600 0.473 1 1.000 *
transpose 10-20 1200
Table
7: Null vs. All-uses By Size
detm 1-6 6100 0.034 5 0.000 *
find1 1-5 1600 0.500 211 0.299 1.000 no
find2 1-5 3100 0.243 205 0.088 1.000 no
16-20 2000 0.295 1999 0.296 0.472 no
7-12 1500 0.022 545 0.013 0.889 no
19-24 600 0.020 579 0.021 0.451 no
strmtch1 1-5 1600 0.299 194 0.155 1.000 no
strmtch2 1-5 1600 0.473 238 0.366 0.998 no
transpose 10-20 1200 0.298 359 0.306 0.385 no
Table
8: Null vs. All-edges By Size
Subject f(s; c)
detm *
strmtch2 *
transpose
Table
9: Logistic Regression Results for dua Coverage
Subject f(s; c)
Table
10: Logistic Regression Results for edge Coverage
6.3 Relationship between coverage and effectiveness
The results of logistic regression are shown in Tables 9 and 10. As was discussed in
Section 4.3, each regression equation is of the form

    Prob(exposing) = exp f(s; c) / (1 + exp f(s; c)),

where f(s; c) is a function of the predictor variables, s, the test set size, and c, the fraction
of duas or edges covered by the test set. The table gives the functions f(s; c) for each
subject program for which we were able to find a good-fitting model. The asterisks in
some table entries indicate that the data are so scattered that any function that gives a
good fit is too complex to offer much insight into the relationship, if any exists, between
coverage and effectiveness. The regression equations give us information about the way
in which the effectiveness of a test set depends, if at all, upon coverage of the criterion and
test set size. Because the functions f(s; c) have several terms, it is difficult to understand
very much about the dependence relationship by inspection of the table alone. However,
for some of the subjects, f(s; c) is simple enough that one can restrict one's attention
to the coefficient of the term containing the highest power of c. If this coefficient is
positive and has a large magnitude, then effectiveness depends strongly and positively
on coverage. This is the case, for example, for the find1 subject for both dua and edge
coverage. For find1, all terms involving c are positive for both types of coverage, so we
can safely conclude that effectiveness is strongly correlated to dua and edge coverage.
For some of the subject programs, we have included graphs of Prob(exposing) versus
proportion of duas (edges) covered at selected test set sizes so that the reader can see
the relationship more clearly. In the figures, the all-uses and all-edges graphs are super-
imposed; to distinguish them, we use dashed lines for the all-edges graph and dot-dashed
lines for the all-uses graphs. In Figure 1, one can see that for find1 the probability of
exposing an error increases monotonically as coverage increases, for test sets with 15 test
inputs. In fact, for any test set size, this would be true; we picked size 15 for illustrative
purposes only.
For some of the other subjects, the relationship is harder to determine. Careful analysis
of Table 9, however, shows that for find1, matinv1, and strmtch1, effectiveness
depends strongly and positively on coverage of duas. Similarly, analysis of Table 10 shows
that for find1, matinv1, strmtch1 and strmtch2, effectiveness depends positively on
coverage of edges. The graphs in most of these cases tend to look very much like the
graph for strmtch1 illustrated in Figure 2. In these cases, the probability of exposing an
error is negligible unless a sufficiently large percentage of duas or edges is covered. Then
as more duas or edges are covered, the probability rises sharply to a plateau on or near
1.0. We offer a possible explanation for this in the next section.
In summary, there is a clear dependence of effectiveness on extent of coverage of the
all-uses criterion in only three of the nine subjects; in a fourth subject, transpose, the
probability of exposing an error increases as the percentage of duas covered increases,
except that it drops slightly when the percentage gets close to 100%. In four of the
subjects such a dependence exists for the all-edges criterion.
7 Discussion
Close examination of the data led us to several interesting observations. While all-uses
was not always more effective than all-edges and the null criterion, in most of those cases
where it was more effective, it was much more effective. In contrast, in those cases in
which all-edges was more effective than the null criterion, it was usually only a little bit
more effective.
For buggyfind, all-uses performed significantly better than all-edges when the find1
universe was used, apparently because all-uses required larger test sets. However, when
the find2 universe was used there was little difference between the criteria. Also, the
Figure
1: Coverage vs. Prob(exposing) for find1
Figure
2: Coverage vs. Prob(exposing) for strmtch1
Figure
3: Coverage vs. Prob(exposing) for find2
Figure
4: Coverage vs. Prob(exposing) for textfmt
effectiveness of each criterion is dramatically better for find1 than for find2. This shows
that even relatively minor changes in the test generation strategy can profoundly influence
the effectiveness of an adequacy criterion and that blanket statements about "random
testing" without reference to the particular input distribution used can be misleading.
For matinv1, all-uses appears to be guaranteed to detect the error, while for matinv2,
in which a different procedure is instrumented, all-uses performs poorly. This is in part
due to the fact that it is very easy to satisfy all-uses in matinv2, and partly due to the
possibility that the procedure instrumented in matinv2 has nothing to do with the bug.
Both matinv1 and detm involve instrumenting the ludcmp procedure. While 100%
dua coverage appears to guarantee detection of the error in both of these cases, 98%
dua coverage is much more effective for detm than for matinv1 (0.556 vs 0.026). This is
interesting because it shows that an adequacy criterion can be more or less effective for
a given procedure depending upon the program in which that procedure is used.
In four of the subjects, determinant, matinv1, textfmt, and strmtch1, coverage of
all duas appears to guarantee detection of the error. We knew this prior to the experiment
for strmtch1, but were surprised to see it for the other three programs.
Figure
3 contains the graphs for find2. In the figure, two graphs each for all-uses and
all-edges, for test set sizes of 10 and 20, are superimposed. For both test set sizes, the
all-edges graph shows that the probability of exposing an error monotonically increases
as the number of covered edges increases. The all-uses graphs have upward slope until
roughly 70% of the duas have been covered, after which they take a downturn. This
phenomenon might be the result of insufficient data above 70% coverage combined with
a good fit of the regression curve to the data.
The graphs shown in Figures 2 and 4, and the data from determinant and matinv1
found elsewhere [38], exhibit an interesting phenomenon. At high values of coverage, the
probability P of error detection is 1.0; then, as coverage decreases, a point is reached at
which P decreases rapidly. The raw data [38] corroborate this:
• For matinv1, the effectiveness goes from 1.0 at 100% coverage to 0.03 at 98%
coverage, i.e., for test sets that covered at least 98% of the executable duas.
• For strmtch1, the effectiveness is 1.0 at 100% coverage, 1.0 at 98% coverage (all-
but-one dua) and about 0.35 at 95% coverage (all but 2 duas).
• For textfmt, the effectiveness of dua coverage from 100% down to 83% is 1.0, then
the effectiveness of 80% dua coverage falls to 0.58. But strangely enough, for values
of c between about 0.4 and 0.6, it is 1.0 again. This arises from the fact that all
test sets with coverage of exactly 0.524 were exposing, and the curve is fitted closely to
the data. It is likely that the only way to achieve coverage of exactly 0.524 is for
a particular set of paths to have been executed by the test set and that executing
this set of paths guarantees exposing the error. At the same time, test sets that
execute a different set of paths that covers more duas are not guaranteed to expose
the error.
• For determinant, the effectiveness goes from 1.0 at 100% coverage to 0.556 at 98%
coverage.
Thus it appears that in each of these cases, not only was there one or more duas whose
coverage guaranteed error detection, but that the remaining duas were largely irrelevant.
This phenomenon has profound consequences for testing practitioners, who might
deal with the unexecutable dua problem by testing until some arbitrary predetermined
percentage of the duas have been covered. If it so happens that the test set fails to cover
any of the "crucial" duas, the test set may be much less likely to detect the error than if
it had covered 100% of the executable duas.
For example, in matinv1, there are a total of 298 duas, only 206 (69%) of which are
executable. Suppose it has been decided that test sets that cover 200 of the duas will be
deemed adequate. Then it will be quite possible to test without hitting any of the crucial
duas, so the chance of exposing the error will be much less than it would have been with
coverage of 100% of the executable duas.
Consequently, we recommend that practitioners using data flow testing put in the
effort required to weed out unexecutable def-use associations, and only accept test sets
that achieve 100% coverage of the executable duas. A heuristic for doing this is presented
by Frankl [13] and the issue of how this affects the cost of using a criterion is discussed
by Weiss [37].
We more closely examined the programs in which all-uses was guaranteed to expose
the error, to gain insight into situations in which all-uses seems to perform well. In each
of these cases, the fault could be classified as a "missing path error", i.e., a situation in
which the programmer forgot to take special action under certain circumstances. 3 This
is particularly interesting, because structured testing criteria are usually considered to
be poor at exposing missing path errors, since test cases that execute the "missing path"
are not explicitly demanded. However, in these cases, it turns out that all of the test
cases that cover some particular dua happen to be test cases that would have taken the
missing path, had it been present. Consequently, all-uses guaranteed that these errors
were detected.
3 In strmtch1 the missing path would return 0 or write an error message if the null pattern was
input; in textfmt the missing path would skip certain statements if bufpos = 0, and in determinant
and matinv1 the missing path would return an error message when the input was singular. In the matrix
manipulation program, an explicit check for singularity was presumably omitted for efficiency reasons.
8 Related Work
Most previous work comparing testing techniques falls into two categories: theoretical
comparisons of adequacy criteria and empirical studies of test generation techniques.
There have been many theoretical comparisons of adequacy criteria, but only a few of
these have addressed error detecting ability. In this section, we summarize simulations,
experiments, and analytical studies that have addressed the fault detecting ability of
various testing techniques.
Several papers have investigated the fault detecting ability of a generalization of path
testing called "partition testing". Duran and Ntafos [10] performed simulations comparing
random testing to partition testing, in which, using hypothetical input domains,
partitions, and distributions of errors, they compared the probabilities that each technique
would detect an error. Their conclusion was that, although random testing did
not always compare favorably to path testing, it did well enough to be a cost effective
alternative. Considering those results counterintuitive, Hamlet and Taylor [22] did more
extensive simulations, and arrived at more precise statements about the relationship
between partition probabilities, failure rates, and effectiveness, which corroborated the
Duran-Ntafos results. Jeng and Weyuker [39] attacked the same problem analytically
and showed that the effectiveness of partition testing depends greatly on how failure
causing inputs are distributed among the "subdomains" of the partition. Frankl and
Weyuker [17, 16] investigated the conditions under which one criterion is guaranteed to
be more likely to detect a fault than another, and found that the fact that C1 subsumes
C2 does not guarantee that C1 is better at detecting faults than C2. Stronger conditions
with more bearing on fault-detecting ability were also described. It should be noted that
these studies used a different model of test set selection than we used. For a program
with m subdomains, they considered test sets of size mk arising from an idealized test
generation scheme, namely, independent random selection of k elements from each sub-
domain, using a uniform distribution on each subdomain. In contrast, for various values
of n, we generated test sets by independent random selection of n elements from the
universe, then considered only those sets that were adequate.
Several empirical studies have counted the number of errors detected by a particular
technique on programs with either natural or seeded errors. Duran and Ntafos [10]
executed roughly 50 randomly generated test cases for each of several programs and calculated
the percentages of these that exposed errors. Girgis and Woodward [18] seeded
five textbook FORTRAN programs with errors and subjected them to various test meth-
ods. As Hamlet pointed out [21], experiments of this nature may give misleading results
because they typically rely on a small number of test sets to represent each testing tech-
nique. Furthermore, some of these studies [18] employ a very liberal notion of when a
test case detects an error.
An experiment by Basili and Selby [2], comparing statement testing, partitioning with
boundary value analysis and code-reading by stepwise abstraction, differed somewhat
from the others in this category, in that it used human subjects to generate tests and
evaluated the extent to which their expertise influenced the results. The use of human
subjects in this type of experiment is a laudable goal; after all, as long as testing is under
human control, human factors will influence results. A problem with this approach is
that one cannot necessarily extrapolate to the population one is trying to model, since
the human sample may not be representative of that population.
A third category of studies involved measuring the extent to which test sets generated
using some particular technique satisfied various adequacy criteria, or the extent to which
test sets that satisfy one adequacy criterion also satisfy another. For example, Duran
and Ntafos [10] measured the extent to which randomly generated test sets with roughly
20 to 120 test cases satisfied the LCSAJ, all-edges, required pairs, and TER n criteria.
Several studies of this nature have been performed on mutation testing; for example,
DeMillo, Lipton, and Sayward measured the extent to which randomly generated test
sets satisfied mutation testing on the buggyfind program [9], and Offutt measured the
extent to which test sets that kill first-order mutants also kill second-order mutants [32].
Note that this type of study does not address the question of error-detecting ability.
While the above cited studies each contributed in some way toward better understanding
of software testing, there are several noteworthy differences between each of
them and the experiment described in this paper: Our experiment
ffl compared the error detecting ability of adequacy criteria, as opposed to error detecting
ability of test generation techniques, and as opposed to other characteristics
of adequacy criteria;
ffl was designed to allow the application of rigorous statistical techniques;
ffl investigated real adequacy criteria (as opposed to hypothetical partitions of the
input domain) on real programs with naturally occurring bugs.
None of the above mentioned papers has all three of these attributes.
Finally, we note that there have also been many experimental studies that did use
rigorous statistical techniques to investigate other software quality issues [27, 35, 36].
However since none of these were aimed at evaluating the effectiveness of adequacy cri-
teria, they are not directly relevant here.
9 Conclusions
We have described an experiment comparing the effectiveness of randomly generated test
sets that are adequate according to the all-edges, all-uses, and null test data adequacy
criteria. Unlike previous experiments, this experiment was designed to allow for statistically
meaningful comparison of the error-detecting ability of adequacy criteria. It involved
generating large numbers of test sets for each subject program, determining which test
sets were adequate according to each criterion, and determining the proportion of adequate
test sets that exposed at least one error. The data was analyzed rigorously, using
well established statistical techniques.
The first group of questions we posed was whether C1 is more effective than C2 for
each subject and for each pair of criteria. For five of the nine subjects, all-uses was
significantly more effective than all-edges; for six subjects, all-uses was significantly more
effective than the null criterion; for five subjects all-edges was more effective than null.
Closer examination of the data showed that in several of the cases in which all-uses did
well, it actually did very well, appearing to guarantee error detection. We also compared
test sets that partially satisfied all-uses to those that partially satisfied all-edges. In six
subjects, test sets that covered all but two definition-use associations were more effective
than test sets that covered all but two edges. Thus, test sets that cover all (or almost
all) duas are sometimes, but not always more likely to expose an error than those that
cover all (almost all) edges.
The second group of questions limited attention to test sets of the same or similar
size. All-uses adequate test sets appeared to be more effective than all-edges adequate
sets of similar size in four of the nine subjects. For the same four subjects, all-uses
adequate test sets appeared to be more effective than null adequate sets of similar size.
In contrast, all-edges adequate sets were not more effective than null adequate sets of
similar size for any of the subjects. This indicates that in those cases where all-edges was
more effective than the null criterion, the increased effectiveness was due primarily to the
fact that all-edges test sets were typically larger than null-adequate test sets. In most of
the cases where all-uses was more effective than all-edges or than the null criterion, the
increased effectiveness appears to be due to other factors, such as the way the criterion
concentrates failure-causing-inputs into subdomains.
The third group of questions investigated whether the probability that a test set
exposes an error depends on the size of the test set and the proportion of definition-uses
associations or edges covered. This is an important question because it is not uncommon
for testers and testing researchers to assume implicitly that confidence in the correctness
of a program should be proportional to the extent to which an adequacy criterion is
satisfied. Logistic regression showed that in four of the nine subject programs the error-
exposing ability of test sets tended to increase as these test sets covered more definition-use
associations. It also showed that in a different set of four subject programs, there
was a weaker, but still positive correlation between the error-exposing ability of test sets
and the percentage of edges covered by these sets. However, even in those subjects where
the probability that a test set exposes an error depended on the proportion of definition-use
associations or edges covered, that dependence was usually highly non-linear. This
indicates that one's confidence in the correctness of a program should not in general be
proportional to the percentage of edges or duas covered.
In summary, our results show that all-uses can be extremely effective, appearing to
guarantee error detection in several of the subjects. It did not always perform significantly
better than all-edges or the null criterion, but in most of our subjects it did. On the
other hand, all-edges was not very effective for most of our subjects; in fact, in none
of the subjects did all-edges or almost-all-edges adequate test sets perform significantly
better than randomly selected null-adequate test sets of the same size.
We make no claim that our collection of subject programs is representative of all
software, and therefore we do not believe it is sensible to extrapolate from our results
to software in general. The primary contribution of this research is the methodology we
used for the experiment; we believe that our results are both sound and interesting and
should motivate further research. In addition, even this relatively small scale experiment
allowed us to observe the existence of several interesting phenomena, noted in Section 7.
The foremost direction for future research is to perform similar experiments on a much
larger collection of subjects, including large programs. Our design could also be used
to compare other adequacy criteria. Experiments comparing the effectiveness of various
adequacy criteria when non-random test generation strategies are used would also be
useful. We hope that other researchers will join us in performing such experiments in the
future.
Acknowledgments: The authors would like to thank Prof. Al Baranchik of the Hunter
College Mathematics Department for advice on statistical methods, Mohammed Ghriga
and Roong-Ko Doong for helping prepare the subject programs, Zhang Ming for helping
with the data analysis, Tarak Goradia for useful comments on an earlier version of the
paper, and the Hunter College Geology and Geography Department for use of their
statistical analysis software. An anonymous referee made several useful suggestions on
the presentation of the material.
--R
Analysis of Ordinal Categorical Data.
Comparing the effectiveness of software testing strate- gies
Statistical Concepts and Methods.
Elements of Statistics.
A formal evaluation of data flow path selection criteria.
An extended overview of the Mothra software testing environment.
Hints on test data selection: Help for the practicing programmer.
An evaluation of random testing.
ASSET user's manual.
The Use of Data Flow Information for the Selection and Evaluation of Software Test
Partial symbolic evaluation of path expressions (version 2).
An applicable family of data flow testing criteria.
Assessing the fault-detecting ability of testing methods
A formal analysis of the fault detecting ability of testing methods.
An experimental comparison of the error exposing ability of program testing criteria.
Toward a theory of test data selection.
Remark on algorithm 408.
Theoretical comparison of testing methods.
Partition testing does not inspire confidence.
A data flow analysis approach to program testing.
Proof of a program: Find.
A survey of dynamic analysis methods.
An approach to program testing.
An experimental evaluation of the assumption of independence in multiversion programming.
A data flow oriented program testing strategy.
Algorithm 408: A sparse matrix package (part I)
On required element testing.
Investigations of the software testing coupling effect.
Numerical Recipes: The Art of Scientific Computing.
Selecting software test data using data flow information.
Experimental comparison of three system test strate- gies: preliminary report
An experimental evaluation of the effectiveness of random testing of fault-tolerant software
Methods of comparing test data adequacy criteria.
Comparison of all-uses and all-edges: Design
Analyzing partition testing strategies.
Comparison of program testing strate- gies
--TR
Selecting software test data using data flow information
Numerical recipes: the art of scientific computing
An experimental evaluation of the assumption of independence in multiversion programming
Comparing the effectiveness of software testing strategies
An Applicable Family of Data Flow Testing Criteria
Theoretical comparison of testing methods
Experimental comparison of three system test strategies preliminary report
A Formal Evaluation of Data Flow Path Selection Criteria
Partition Testing Does Not Inspire Confidence (Program Testing)
Analyzing Partition Testing Strategies
Comparison of program testing strategies
Assessing the fault-detecting ability of testing methods
Investigations of the software testing coupling effect
Remark on algorithm 408
An Approach to Program Testing
Proof of a program
Algorithm 408: a sparse matrix package (part I) [F4]
Oh! Pascal!
A Formal Analysis of the Fault-Detecting Ability of Testing Methods
SELECTMYAMPERSANDmdash;a formal system for testing and debugging programs by symbolic execution
The use of data flow information for the selection and evaluation of software test data
--CTR
P. G. Frankl , S. N. Weiss, Correction to "An Experimental Comparison of the Effectiveness of Branch Testing and Data Flow Testing", IEEE Transactions on Software Engineering, v.19 n.12, p.1180, December 1993
Phyllis G. Frankl , Oleg Iakounenko, Further empirical studies of test effectiveness, ACM SIGSOFT Software Engineering Notes, v.23 n.6, p.153-162, Nov. 1998
Tohru Matsuodani , Kazuhiko Tsuda, Evaluation of debug-testing efficiency by duplication of the detected fault and delay time of repair, Information SciencesInformatics and Computer Science: An International Journal, v.166 n.1-4, p.83-103, 29 October 2004
Dick Hamlet, What can we learn by testing a program?, ACM SIGSOFT Software Engineering Notes, v.23 n.2, p.50-52, March 1998
Kalpesh Kapoor , Jonathan P. Bowen, Test conditions for fault classes in Boolean specifications, ACM Transactions on Software Engineering and Methodology (TOSEM), v.16 n.3, p.10-es, July 2007
Jennifer Black , Emanuel Melachrinoudis , David Kaeli, Bi-Criteria Models for All-Uses Test Suite Reduction, Proceedings of the 26th International Conference on Software Engineering, p.106-115, May 23-28, 2004
Mary Jean Harrold , Gregg Rothermel, Performing data flow testing on classes, ACM SIGSOFT Software Engineering Notes, v.19 n.5, p.154-163, Dec. 1994
N. Juristo , A. M. Moreno , S. Vegas, Towards building a solid empirical body of knowledge in testing techniques, ACM SIGSOFT Software Engineering Notes, v.29 n.5, September 2004
W. Eric Wong , Joseph R. Horgan , Saul London , Aditya P. Mathur, Effect of test set minimization on fault detection effectiveness, Proceedings of the 17th international conference on Software engineering, p.41-50, April 24-28, 1995, Seattle, Washington, United States
Dick Hamlet, On subdomains: Testing, profiles, and components, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.71-76, Sept. 2000
L. C. Briand , Y. Labiche , Y. Wang, Using Simulation to Empirically Investigate Test Coverage Criteria Based on Statechart, Proceedings of the 26th International Conference on Software Engineering, p.86-95, May 23-28, 2004
Martina Marr , Antonia Bertolino, Unconstrained duals and their use in achieving all-uses coverage, ACM SIGSOFT Software Engineering Notes, v.21 n.3, p.147-157, May 1996
W. Eric Wong , Yu Qi , Kendra Cooper, Source code-based software risk assessing, Proceedings of the 2005 ACM symposium on Applied computing, March 13-17, 2005, Santa Fe, New Mexico
Hong Zhu, A Formal Analysis of the Subsume Relation Between Software Test Adequacy Criteria, IEEE Transactions on Software Engineering, v.22 n.4, p.248-255, April 1996
Sira Vegas , Victor Basili, A Characterisation Schema for Software Testing Techniques, Empirical Software Engineering, v.10 n.4, p.437-466, October 2005
Phyllis G. Frankl , Yuetang Deng, Comparison of delivered reliability of branch, data flow and operational testing: A case study, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.124-134, Sept. 2000
Bev Littlewood , Peter T. Popov , Lorenzo Strigini , Nick Shryane, Modeling the Effects of Combining Diverse Software Fault Detection Techniques, IEEE Transactions on Software Engineering, v.26 n.12, p.1157-1167, December 2000
Elaine J. Weyuker, Using operational distributions to judge testing progress, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Gregg Rothermel , Lixin Li , Christopher DuPuis , Margaret Burnett, What you see is what you test: a methodology for testing form-based visual programs, Proceedings of the 20th international conference on Software engineering, p.198-207, April 19-25, 1998, Kyoto, Japan
Chi Keen Low , T. Y. Chen , Ralph Rnnquist, Automated Test Case Generation for BDI Agents, Autonomous Agents and Multi-Agent Systems, v.2 n.4, p.311-332, November 1999
Karen J. Rothermel , Curtis R. Cook , Margaret M. Burnett , Justin Schonfeld , T. R. G. Green , Gregg Rothermel, WYSIWYT testing in the spreadsheet paradigm: an empirical evaluation, Proceedings of the 22nd international conference on Software engineering, p.230-239, June 04-11, 2000, Limerick, Ireland
A. Pretschner , W. Prenninger , S. Wagner , C. Khnel , M. Baumgartner , B. Sostawa , R. Zlch , T. Stauner, One evaluation of model-based testing and its automation, Proceedings of the 27th international conference on Software engineering, May 15-21, 2005, St. Louis, MO, USA
Lutz Prechelt , Walter F. Tichy, A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking, IEEE Transactions on Software Engineering, v.24 n.4, p.302-312, April 1998
Phyllis G. Frankl , Richard G. Hamlet , Bev Littlewood , Lorenzo Strigini, Evaluating Testing Methods by Delivered Reliability, IEEE Transactions on Software Engineering, v.24 n.8, p.586-601, August 1998
Phyllis Frankl , Dick Hamlet , Bev Littlewood , Lorenzo Strigini, Choosing a testing method to deliver reliability, Proceedings of the 19th international conference on Software engineering, p.68-78, May 17-23, 1997, Boston, Massachusetts, United States
Prem Devanbu , Stuart G. Stubblebine, Cryptographic verification of test coverage claims, ACM SIGSOFT Software Engineering Notes, v.22 n.6, p.395-413, Nov. 1997
N. Juristo , A. M. Moreno , S. Vegas, Limitations of empirical testing technique knowledge, Lecture notes on empirical software engineering, World Scientific Publishing Co., Inc., River Edge, NJ,
Richard A. DeMillo , Aditya P. Mathur , W. Eric Wong, Some Critical Remarks on a Hierarchy of Fault-Detecting Abilities of Test Methods, IEEE Transactions on Software Engineering, v.21 n.10, p.858-861, October 1995
Heng Lu , W. K. Chan , T. H. Tse, Testing context-aware middleware-centric programs: a data flow approach and an RFID-based experimentation, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
Matthew J. Rutherford , Antonio Carzaniga , Alexander L. Wolf, Simulation-based test adequacy criteria for distributed systems, Proceedings of the 14th ACM SIGSOFT international symposium on Foundations of software engineering, November 05-11, 2006, Portland, Oregon, USA
Natalia Juristo , Ana M. Moreno , Sira Vegas, Reviewing 25 Years of Testing Technique Experiments, Empirical Software Engineering, v.9 n.1-2, p.7-44, March 2004
Alessandro Orso , Saurabh Sinha , Mary Jean Harrold, Classifying data dependences in the presence of pointers for program comprehension, testing, and debugging, ACM Transactions on Software Engineering and Methodology (TOSEM), v.13 n.2, p.199-239, April 2004
James H. Andrews , Susmita Haldar , Yong Lei , Felix Chun Hang Li, Tool support for randomized unit testing, Proceedings of the 1st international workshop on Random testing, July 20-20, 2006, Portland, Maine
Mary Jean Harrold, Analysis and Testing of Programs with Exception Handling Constructs, IEEE Transactions on Software Engineering, v.26 n.9, p.849-871, September 2000
Premkumar Thomas Devanbu , Stuart G. Stubblebine, Cryptographic Verification of Test Coverage Claims, IEEE Transactions on Software Engineering, v.26 n.2, p.178-192, February 2000
Gregg Rothermel , Margaret Burnett , Lixin Li , Christopher Dupuis , Andrei Sheretov, A methodology for testing spreadsheets, ACM Transactions on Software Engineering and Methodology (TOSEM), v.10 n.1, p.110-147, Jan. 2001
Hyunsook Do , Sebastian Elbaum , Gregg Rothermel, Supporting Controlled Experimentation with Testing Techniques: An Infrastructure and its Potential Impact, Empirical Software Engineering, v.10 n.4, p.405-435, October 2005
Lionel C. Briand , Massimiliano Di Penta , Yvan Labiche, Assessing and Improving State-Based Class Testing: A Series of Experiments, IEEE Transactions on Software Engineering, v.30 n.11, p.770-793, November 2004
Peifeng Hu , Zhenyu Zhang , W. K. Chan , T. H. Tse, An empirical comparison between direct and indirect test result checking approaches, Proceedings of the 3rd international workshop on Software quality assurance, November 06-06, 2006, Portland, Oregon
Ramkrishna Chatterjee , Barbara G. Ryder , William A. Landi, Complexity of Points-To Analysis of Java in the Presence of Exceptions, IEEE Transactions on Software Engineering, v.27 n.6, p.481-512, June 2001
Gregor V. Bochmann , Alexandre Petrenko, Protocol testing: review of methods and relevance for software testing, Proceedings of the 1994 ACM SIGSOFT international symposium on Software testing and analysis, p.109-124, August 17-19, 1994, Seattle, Washington, United States
Barbara G. Ryder , William A. Landi , Philip A. Stocks , Sean Zhang , Rita Altucher, A schema for interprocedural modification side-effect analysis with pointer aliasing, ACM Transactions on Programming Languages and Systems (TOPLAS), v.23 n.2, p.105-186, March 2001
Hong Zhu , Patrick A. V. Hall , John H. R. May, Software unit test coverage and adequacy, ACM Computing Surveys (CSUR), v.29 n.4, p.366-427, Dec. 1997 | regression analysis;program testing;branch testing;error exposing ability;all-edges test data adequacy criteria;data flow testing;software testing experiments;definition-use associations;executable edges;all-uses adequate test sets;errors |
631072 | Exception Handlers in Functional Programming Languages. | Constructs for expressing exception handling can greatly help to avoid clutter in code by allowing the programmer to separate the code to handle unusual situations from the code for the normal case. The author proposes a new approach to embed exception handlers in functional languages. The proposed approach discards the conventional view of treating exceptions, as a means of effecting a control transfer; instead, exceptions are used to change the state of an object. The two types of exceptions, terminate and resume, are treated differently. A terminate exception, when raised, is viewed as shielding the input object. On the other hand, a resume exception designates the input object as curable and requires the immediate application of a handler function. This approach enables the clean semantics of functions raising exceptions without associating any implementation restriction and without loss of the referential transparency and the commutativity properties of functions. | Introduction
Functional Programming languages (FP) [1] describe algorithms
in a clear, concise, and natural way. For any programming
language this is a highly desirable feature. In FP
programs are constructed using primitive and user-defined
functions as building blocks, and functionals or program-
forming operations as composing functions. Further, programs
exhibit a clear hierarchical structure in which high
level programs can be combined to form higher level pro-
grams. Besides, functional languages are free from side-
effects, and express parallelism in a natural way. These
properties make FP attractive not only from the theoretical
perspective, but also from the program construction
point of view.
A key issue in program construction is robustness [8].
Software reliability can be achieved by the judicious use of
fault-tolerant tools. Exception handling [4] is one of the
two techniques used for developing reliable software. Only
a few functional languages, namely Standard ML [7], Parallel
Standard ML (PSML) [6], ALEX [3], Gerald [9], and
Functional Languages (FL) [2] support constructs for software
fault-tolerance. (The author is with the Department of Electrical Engineering, McGill
University, 3480 University Street, Montreal, H3A 2A7, Canada. This research work was
supported by grants received from MICRONET - Network Centres of Excellence, Canada.
A preliminary version of this work has appeared in the Proceedings of the 16th International
Computer Software and Applications Conference, Chicago, IL, 1992.) One reason that could be attributed
to this is that the concise semantics and strong mathematical
properties of functional languages make proving correctness
of programs easy. Hence program verification tech-
niques, rather than fault-tolerant tools, are more popular
in functional languages. Despite this fact, we argue
that notations for exception handling are still necessary
for the following reason. The definition of exceptions is not
necessarily restricted to failures. Following Goodenough's
definition [5], we consider exceptional conditions as those
brought to the attention of the operation's invoker. That
is, we do not treat exceptions as failures. With this broadened
outlook, exception handling becomes a useful tool in
functional languages, just as in imperative languages.
Exceptions are classified into two types, namely Terminate
and Resume exceptions. In this paper, we define and
develop the required notations for programming Terminate
and Resume exceptions. Though the proposed notations
can, in principle, be used for any functional or applicative
language, we choose Backus' FP [1] for expository pur-
poses. We discuss certain preliminaries on exception handling
in the next section. The same section also presents
our approach to embedding this fault-tolerant tool in FP.
The subsequent section introduces the new constructs for
programming Terminate exceptions. We present a few examples
to explain the meaning and intentions of the proposed
constructs. In Section IV, we demonstrate that the
introduction of new constructs does not destroy the algebraic
properties of FP. Section V deals with Resume ex-
ceptions. Finally, we compare our work with other related
work in Section VI.
II. Background
In this section, the concepts of exception handling are
elucidated in an imperative framework for reasons of simplicity
and ease of understanding.
A. Preliminaries
The specified services provided by a given software module
can be classified into normal (expected and desired), abnormal
(expected but undesired), and unanticipated (un-
expected and undesired) services [4, 5]. In the first case,
the execution of a module terminates normally. The second
case leads to an exceptional result. If this exception is
not handled, then the module certainly fails to provide the
specified service. To handle detected exceptions, the mod-
ule, therefore, contains exception handlers. If, despite the
occurrence of a lower level exception, the module provides
a normal service, we say that the lower level exception is
masked by the handler. On the other hand, if the module
is unable to mask a lower level exception and provides
an exceptional result, then the exception propagates to a
higher level. The last of the three cases corresponds to
an unexpected behavior of a software module. This unexpected
behavior is attributed to the existence of one or
more design faults. Either the same module or any other
lower level module can have these design faults. Unanticipated
exceptions can be handled with the help of default
exception handlers.
The notation
   Procedure P(. . .) signals E
has been used to indicate that a procedure P, in addition
to its normal return, also provides an exceptional return E.
(The brackets with the three dots denote the parameters of
the procedure whose details are not germane to the discus-
sion.) In the body of P, the designer can insert a construct
responsible for raising the exception as:
When condition B is true, the exception E is raised. Some
cleanup operations may be performed before signaling E.
This construct represents the case where an exception is
detected by a runtime test B. Alternatively, an exception
could be detected by the system, at run time, to signal
exceptional conditions such as arithmetic overflow, under-
flow, and illegal array index. No explicit conditions need
be programmed for such system-defined exceptions. The
point where an exception is raised (either detected by the
system or by a runtime test) is called the activation point.
A handler H for E can be associated at the place where
P is invoked. In fact, the handler can be associated with the invoker of
P or with any ancestor of P, if the programmer so desires.
The place where the handler is associated is termed the
association point for the exception.
Depending on the type of service required, an exception
can be of one of the two types, namely a Terminate exception
or a Resume exception [4, 5]. Consider the procedure P
defined as above. The exception E is raised if the runtime test B is satisfied.
Following this the control transfers to the handler H. If E
is a Terminate exception, then, on completion of H, the execution
control continues from the association point. For
Resume exceptions, the statements following the activation point
are executed on completion of H. Thus, for Terminate
exceptions, control switches from the activation
point to the handler and continues execution from the association
point; whereas, for Resume exceptions, the control
temporarily jumps to the handler and then returns to the
activation point (on completion of the handler). This fact
will be made use of in defining the notations for exception
handling in FP.
B. Related Issues
Bretz [3] identifies two problems in supporting exception
handling constructs in functional programming. An
exception, in imperative languages, is treated as a means
of effecting a control transfer. Hence, there is a fundamental
conflict between the functional approach followed
in functional languages and the control flow-oriented view
of exceptions. Due to this, exceptions in functional languages
can result in non-deterministic behavior of the FP
program. For example, if there are two exception points
inside a given function, the result of parallel evaluation of
the function could be different, depending on which exception
is raised first. Consider the evaluation of the function
ADD-SUBEXP, defined using the ALEX syntax [3], in which
two subexpressions are added and each may signal the exception I.
An application with a handler such as

    handle I := λx.x terminate

could yield 3 or 5 depending on the order of evaluation of
the subexpressions.
This problem has been considered as intrinsic to incorporating
exceptions in functional languages. Languages
ML [7] and ALEX [3] circumvent this problem by prescribing
sequential evaluation of expressions. Such a restriction is severe
and is essentially required because the control flow view
of exceptions has been carried through to functional lan-
guages. We discard this view and define the semantics of
FP functions operating on exception objects without imposing
any restriction on the execution. Though Gerald [9]
and PSML [6] allow parallel execution of subexpressions,
they retain the deterministic behavior by assigning priorities
to exceptions.
Secondly, exception handling might cause side effects in
expressions and hence might violate the property of referential
transparency. Proposed solutions [3, 7] suggest the
association of an environment with the functions. But in
our case, discarding the conventional control flow view of
exceptions solves this problem naturally. It is established
in Section IV that the introduction of the new constructs
preserves the algebraic properties of functional
languages. Our approach, like PSML [6], uses error values
for handling exceptions. However, there are some important
differences between the two. Section VI brings out
these differences.
Reeves et al. [9] point out that embedding exception handling
constructs in lazy functional languages can transform
non-strict functions into hyper-strict functions. This is illustrated
with the help of an expression of the form

    handle bad by λx.0 terminate in E1 Φ E2

where Φ is any binary operator that is non-strict in
both arguments and the subexpressions E1 and E2 may signal bad. The signal bad propagates up through the
operators, including Φ, to be handled by the corresponding handler
function. Even though Φ is defined to be non-strict in
both arguments, the up-propagation of the signal through Φ
makes it strict. This is because, both subexpressions need
to be evaluated to determine whether or not they raise
any exception. Reeves et al. claim the transformation
of non-strict actors into hyper-strict actors is due to the
up-propagation of signals through non-strict operators. To
overcome this problem, the notion of down-propagation and
firewalls has been defined in [9]. In this paper, however, we
argue that it is not the up-propagation, but the persistent
nature of exception values that causes the above problem.
By this we mean that if an exception value e is a component of
an object, as in ⟨X1, . . . , e, . . . , Xn⟩, then the error signal
e, because of its special status, persists and the above
object is indistinguishable from e. This is also referred to
as following strict semantics on exception values. In Section
III-C, we illustrate this with the help of an example
and show how the above problem could be overcome by
following lazy semantics for error values.
In the following subsection we describe our approach to
incorporating exception handlers in functional languages.
C. Our Approach to Embedding Exception Handlers in FP
Terminate Exceptions
In incorporating exception handlers in FP, we discard
the conventional control flow-oriented approach. Instead
a Terminate exception is considered to shield the input
object. That is, when a Terminate exception e is raised
while applying a function F to an object X, the input
object is considered to be shielded by e. Such an object
is represented as X^e. The exception object X^e can
be a component of a composite object X1. The objects
X^e and X1 are respectively known as fully and partially
shielded objects. A shielded object X1 or X1^e can have
more than one fully shielded object, possibly shielded by
different exception names, as constituent elements. Thus,
for example, ⟨X1^e, X2, X3^{e'}⟩ is a partially shielded
object. A fully shielded object cannot be shielded either
by the same exception or by another exception. That is,
(X^e)^e and (X^e)^{e'}
are not valid objects. However, the object
⟨X1, . . . , X^e, . . . , Xn⟩^{e'} is valid, and interestingly e' can
be the same as e. In this object we observe a hierarchy in
shielding.
Any function, except for the respective handler, operating
on a fully-shielded object is inhibited. In other words,
when an object is fully shielded, the function applied to it
behaves like the identity function. The handler H for an
exception e, when applied to X e , removes the shield of the
object and results in H : X. It must be noted that a handler
can remove only the corresponding shield. Thus, if an
object is shielded by a number of exceptions, shields can
be removed only if the respective handlers are applied in
the appropriate order. A function operating on a partially
shielded object either behaves like the identity function or
results in the expected value depending on the semantics
of the function. This is because functions are defined to be
non-strict with respect to exception (or shielded) objects. 1
1 Even though all functions are strict (over diverging computations) in
the original definition of FP, we relax this and allow functions such as
select, tail, and null, and functionals such as construction and constant,
to be non-strict in the appropriate components of the input object. We
chose non-strict semantics as we want to show that our approach to embedding
exception handling constructs does not introduce hyper-strictness.
It is straightforward to extend the definition of the functions and the
functional forms to support lazy semantics. It is also possible to prove
the algebraic laws of FP with non-strict functions along lines similar to those
given in [1]. We do not present them here as that is beyond the scope of
this paper.
The semantics of FP functions and functional forms operating
on shielded objects and partially shielded objects
are described in the next section. The reason for choosing
non-strict semantics (henceforth non-strictness in this
paper refers to being non-strict with respect to shielded
objects) is to allow shielded objects to exist, to be carried over
strict FP functions and functional forms, if any, and ultimately
to be handled by the appropriate handler.
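To make this view concrete, here is a minimal Haskell sketch (our own illustrative encoding, not the paper's FP notation): Esc shields the input object, ordinary functions are inhibited on a shielded object, and only the matching handler removes the shield.

    -- A value, possibly shielded by an exception name.
    data Obj a = Normal a | Shielded String a
      deriving Show

    -- esc e plays the role of Esc e: it shields the input object.
    esc :: String -> Obj a -> Obj a
    esc e (Normal x) = Shielded e x
    esc _ shielded   = shielded          -- a fully shielded object cannot be shielded again

    -- lift f applies an ordinary function; it is inhibited on a shielded object.
    lift :: (a -> a) -> Obj a -> Obj a
    lift f (Normal x) = Normal (f x)
    lift _ shielded   = shielded         -- behaves like the identity function

    -- onDo e h models the handler (e do H): it removes only the matching shield.
    onDo :: String -> (a -> a) -> Obj a -> Obj a
    onDo e h (Shielded e' x) | e == e' = Normal (h x)
    onDo _ _ obj                       = obj

    main :: IO ()
    main = print (onDo "e" (+ 1) (lift (* 2) (esc "e" (Normal 3))))
    -- The shield survives the (* 2) step and is removed by its handler: Normal 4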
Resume Exceptions
Resume exceptions are handled as locally as possible and
therefore embedding them in functional languages does not
cause any fundamental conflict. Since Resume exceptions
are handled locally, the concept of shielding an object does
not help. However, such sophisticated handling is not required
for Resume exceptions if we view the situation in
the following manner. When a Resume exception is raised,
the input object is considered to have some abnormality
which needs to be cured immediately. Instead of passing
the object (possibly with a shield) to the handler, the handler
is invoked at the activation point as an immediate cure
function. Such a view is simple and serves the purpose.
In the following sections, we introduce notations and
constructs for Terminate and Resume exceptions. As the
new constructs are defined, the domain and semantics of
the FP functions will be redefined as required.
III. Terminate Exceptions
The objects, functions, and functional forms of FP are
extended in the following way to embed Terminate exceptions.
A. The Extended FP System
Objects
Formally, an object can be undefined (denoted by ⊥),
an atom X, or a sequence of objects ⟨X1, . . . , Xn⟩,
each Xi of which could be (i) a normal object, (ii) partially
shielded, or (iii) completely shielded by a single exception.
That is, the domain O of objects can be divided into three
disjoint sets, namely (i) the set C of completely shielded objects,
which are of the form X^e, (ii) the set P of partially shielded objects,
which are of the form ⟨X1, . . . , Xn⟩
such that at least one Xi belongs to C or P, and (iii) the set N of normal (not
shielded) objects, which contain no component belonging to C or
P. The sets N, P, and C are such that
O = N ∪ P ∪ C.
Primitive Functions
The semantics of the primitive functions operating on
shielded objects is defined below. The meaning of these
functions when applied to normal objects is as in [1].
Select
Tail
Atom
Equal
Equal is a strict function, strict on both components of
the argument. So, if the input object X is partially or fully
shielded, then Equal behaves like the identity function Id.
That is, Equal : X = X for any partially or fully shielded X.
Null
Reverse
Reverse
Distribute from Left
The function Distribute from right can be defined in a
similar way.
Add, Subtract, Multiply, Divide, And, Or, Not
These functions expect the input object to be of the form
⟨X1, X2⟩ (except the Not function, for which the input is of
the form X1) and are strict in both components.
Transpose
Append Left
Append left is strict on the second component of the
input. Similarly, the Append Right function is strict on the first component
of the input.
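The intended strictness behaviour can be sketched in Haskell as follows (an illustrative encoding, not the paper's definitions): a selector remains non-strict and returns the requested component of a partially shielded sequence, whereas a strict function such as Add is inhibited and behaves like the identity.

    -- Components of a sequence may be normal or shielded by an exception name.
    data Comp = N Int | Shielded String Int
      deriving Show

    -- select k is non-strict in the other components: it simply returns component k.
    select :: Int -> [Comp] -> Comp
    select k xs = xs !! (k - 1)

    -- add is strict in both components: on a partially shielded pair it is
    -- inhibited, i.e. it returns its input unchanged, like the identity.
    add :: [Comp] -> [Comp]
    add [N x, N y] = [N (x + y)]
    add pair       = pair

    main :: IO ()
    main = do
      let pair = [N 2, Shielded "e" 3]
      print (select 1 pair)   -- N 2: selection succeeds despite the shielded component
      print (add pair)        -- the strict Add behaves like the identity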
Next we deal with the functional forms.
Functional Forms
Composition
(f ∘ g) : X = f : (g : X)
Construction
Condition
Constant
The constant functional is non-strict over partially and
fully shielded objects. This means that a constant function applied to any object, shielded or not, yields its constant value.
Apply to all
Insert
Definitions
The definitions of an FP program are written in the form

    Def ⟨func-name⟩ ≡ ⟨func-defn⟩ {⟨exception-list⟩}

where ⟨func-name⟩ and ⟨func-defn⟩ represent the function
name and the function body respectively. The term inside
the braces is optional; it is included only if the
function raises any exception. The exceptions raised by
the function are listed in ⟨exception-list⟩.
Programming Exceptions
A Terminate exception can be raised using the Esc function
and an exception name e. That is, Esc e : X = X^e.
Adding syntactic sugar, the Esc function can be written in
FP style as Esc ∘ [e; Id]. However, we continue to use the
representation Esc e for simplicity's sake.
Handler Functions
A handler for a Terminate exception e is written as (e do H)
(read as 'on e do H'). The application of this function,
called the handler function, to an object X is defined as follows:
applied to X^e it yields H : X, and applied to any other object it
behaves like the identity function. In order to write handler functions
in FP style, a function Hand can be defined; however, we prefer the
notation (e do H) for the sake of simplicity. Further, from the definitions
of Esc and handler functions, it can be observed that applying the handler
function (e do H) immediately after Esc e recovers H : X for any normal
object X, where e is any exception name.
Finally, a default handler, which removes the shield of a fully shielded
object irrespective of the exception name shielding it, can be written
analogously.
B. Examples
We illustrate the notations introduced so far by means
of a few examples.
Example 3.1
The first example deals with a system-defined exception.
Consider the select function k, which selects the kth element of
an object. Let there be a system-defined exception e_ISV,
where ISV stands for Illegal Selector Value. Further, assume
that, at runtime, the application of the function k to an
object ⟨X1, . . . , Xn⟩ raises the exception e_ISV whenever k
is greater than n. Then, in the exceptional case, the application
results in ⟨X1, . . . , Xn⟩^{e_ISV}.
The handler function (e_ISV do 1r), where 1r is the function
that selects the rightmost element of a structured object,
can be applied to the shielded object to produce Xn.
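A Haskell sketch of this example (the encoding and the names selectK and handleISV are ours): an out-of-range selection shields the input object with e_ISV, and the handler applies 1r, the rightmost-element selector, after removing the shield.

    data Obj = Plain [Int] | Shielded String [Int]
      deriving Show

    -- selectK k: the kth element, shielding the input with e_ISV when k is out of range.
    selectK :: Int -> Obj -> Either Obj Int
    selectK k (Plain xs)
      | k >= 1 && k <= length xs = Right (xs !! (k - 1))
      | otherwise                = Left (Shielded "e_ISV" xs)
    selectK _ shielded           = Left shielded

    -- The handler (e_ISV do 1r): remove the shield and select the rightmost element.
    handleISV :: Either Obj Int -> Either Obj Int
    handleISV (Left (Shielded "e_ISV" xs)) = Right (last xs)
    handleISV other                        = other

    main :: IO ()
    main = print (handleISV (selectK 5 (Plain [10, 20, 30])))   -- Right 30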
Example 3.2
Consider the ADD-SUBEXP function discussed in Section
II-B. The problem can be programmed in FP using the Esc and
handler functions introduced above: a subexpression that raises its
exception produces a shielded value, the corresponding handler removes
the shield, and Add is finally applied. It follows
that the result will be the same irrespective of the order in
which the subexpressions EXP1 and EXP2 are evaluated.
It is interesting to note that in Gerald [9] and PSML [6],
one of the two exceptions e1 or e2 is given a higher priority
and only that exception is allowed to propagate in the
above example. In evaluating subexpressions in parallel,
deterministic behavior is guaranteed in this way in Gerald.
While it is intuitive why such an approach is adopted in
Gerald which is based on the replacement model of Yemini
and Berry [11], it is not obvious in PSML which uses
error data values to handle exceptions. Further, prioritizing
exceptions results in the loss of commutative property.
Commutativity is regained in Gerald by enforcing certain
lexical scoping rules. In any case, enforcing priorities for
subexpressions under parallel execution model would cause
additional implementation overhead. Neither Gerald nor
PSML addresses the implementation issues involved in prioritizing
exceptions.
Example 3.3
Lastly, we consider the conversion of a string of numbers
to their ASCII equivalents. We assume the functions ASC
and ASC$ produce the ASCII equivalent of a number and
the ASCII character '$' respectively. (The conversion of
a string of numbers can easily be done in FP using the
'apply-to-all' functional. But to illustrate the features of
the proposed notation in a recursive context, we program
the example in the following way.)
[ASCII ∘ 1, ASCII-STRING ∘ tl]
Functions Ge, Le, and Or represent, respectively, the Greater
than or equal, Less than or equal, and logical Or functions.
The application of ASCII-STRING to the input ⟨64, 65, 30⟩
results in ⟨A, B, $⟩.
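A Haskell sketch in the same spirit, using the standard chr mapping, an assumed valid range of 32 to 126, and the illustrative input ⟨65, 66, 30⟩ (these are our choices, not the paper's): an out-of-range number is treated as shielded and its handler substitutes the character '$', so the conversion of the rest of the string still succeeds.

    import Data.Char (chr)

    -- asc: the character equivalent of a number; out-of-range numbers are
    -- shielded (represented here by Left).
    asc :: Int -> Either Int Char
    asc n | n >= 32 && n <= 126 = Right (chr n)
          | otherwise           = Left n

    -- The handler for the exception substitutes '$' (cf. ASC$).
    handle :: Either Int Char -> Char
    handle (Right c) = c
    handle (Left _)  = '$'

    -- ASCII-STRING, written recursively as in the example.
    asciiString :: [Int] -> String
    asciiString []       = []
    asciiString (n : ns) = handle (asc n) : asciiString ns

    main :: IO ()
    main = putStrLn (asciiString [65, 66, 30])   -- "AB$"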
C. Remarks
Certain remarks are in order.
(i) System-defined exceptions are detected implicitly at
runtime. Such exceptions are raised while executing
the primitive functions of FP. So it is appropriate to
declare system-defined exceptions to be of type Termi-
nate. Handlers for system-defined exceptions can easily
be defined using the proposed notations as shown
in Example 3.1.
(ii) In our proposal, the user has the freedom to define
the handler anywhere he or she likes. Shielded objects
(partially or fully) propagate through strict functions
and reach the handler. The propagation is implicit.
However, non-strict functions may have the dangerous
effect of (partially or completely) pruning the exception
object. Hence, care needs to be exercised in
placing the handler functions.
(iii) A shielded object propagates through the dynamic invocation
chain until it encounters an appropriate han-
dler. Hence, the handler association is dynamic. It
may be observed that our view of Terminate exceptions
(that they shield abnormal values) facilitates dynamic
handler association in a natural manner.
(iv) As mentioned earlier, PSML and Gerald require additional
prioritizing schemes to retain deterministic behavior
under a parallel execution model. In our ap-
proach, however, an exception object is non-persistent
and does not cause any indeterminate behavior no
matter in which order the subexpressions are evaluated.
(v) Lastly, we review the problem of exceptions introducing
hyper-strictness in functional languages. Consider
a function Circled-plus that SIGNALS e1 by shielding one component
of its argument. An expression that applies the select function to
Circled-plus : ⟨12, 13, 14⟩ is similar to the expression

    handle bad by λx.0 terminate in . . .

of Section II-B, with the
select function used in the former in the place of
the operator Φ. In [9] it has been argued that the
up-propagation of the signal bad transforms Φ into
a strict function. However, reducing Circled-plus : ⟨12, 13, 14⟩
shields only the offending component, and the select function
applied to the resulting partially shielded object returns
the value 14. Thus, the evaluation remains non-strict
even in the presence of the exception e1 and its
up-propagation. However, if we allow exception objects
to be persistent, then the application of Circled-plus
to ⟨12, 13, 14⟩ would make the shield on the component 13^{e1}
persist over the whole object, making
the select function strict. Thus, we say that it
is essentially the persistent nature of exception objects,
and not the up-propagation, that transforms
non-strict functions into strict functions.
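The point can be checked with a small Haskell sketch (an illustrative encoding of shielded components, not the paper's Circled-plus): with non-persistent shields the third component is still selectable, while persistent shields force the exception to swallow the result.

    data Val = N Int | Shielded String Int
      deriving Show

    -- Non-persistent shields: a shielded component does not infect the sequence,
    -- so selecting the third component stays non-strict and yields 14.
    select3 :: [Val] -> Val
    select3 xs = xs !! 2

    -- Persistent (strict) semantics: any shielded component makes the whole
    -- object indistinguishable from the exception, forcing every component.
    select3Persistent :: [Val] -> Val
    select3Persistent xs =
      case [v | v@(Shielded _ _) <- xs] of
        (e : _) -> e
        []      -> xs !! 2

    main :: IO ()
    main = do
      let obj = [N 12, Shielded "e1" 13, N 14]
      print (select3 obj)            -- N 14: evaluation remains non-strict
      print (select3Persistent obj)  -- Shielded "e1" 13: persistence makes select strict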
In the following section, we present a formal approach
to describe Terminate exceptions and their handlers.
IV. A Formal Study on FP with Exceptions
A. Preliminaries
The set of input objects, called the input domain, is
denoted by O.
As defined earlier, O = N ∪ P ∪ C.
In order to study the algebraic properties of functional pro-
grams, first we need to study the behavior of the primitive
functions of FP. For each primitive function f, the domain
of input objects O can be partitioned into two disjoint
sets, called the Active Domain (represented by AD (f))
and the (denoted by ID (f)). Intuitively,
the Active Domain of a function f consists of those objects
on which the function has its intended effect; for objects
belonging to the Identity Domain of f, the function
is inhibited. Formally, an object X belongs to AD (f) if
X. (For the identity function Id, AD
ID (f) is the set of all objects for which
that the objects in the Active Domain of a function could
still be partially or completely shielded objects.
The Active Domain and the Identity Domain of Esc e
are AD (Esc e) = N ∪ P and ID (Esc e) = C, since a fully
shielded object cannot be shielded again; thus the objects belonging to C form
the Identity Domain of Esc e. Next, we consider the handler
function (e do H), whose Active Domain consists of the objects fully
shielded by e and whose Identity Domain consists of all other objects.
It may be noted that the effect of the application of a handler
function on an object belonging to its Active Domain is
slightly different from the effect of f : X, for X ∈ AD (f).
This is because the handler function performs two actions:
(i) removing the shield of X and (ii) the application of H
on the normal object. Barring this small difference, we can
say that any primitive function f (including Esc e functions
and handler functions) behaves either as f or as the
identity function Id. The behavior of f is, however, deterministic
and depends entirely on whether the input object
is in the Active or the Identity Domain of f.
B. The Choice Operator
To represent the behavior of a function mathematically,
we introduce a deterministic choice operator !. Any
function f can be written as f = f1 ! f2,
where f1 is the same as f and f2 is the identity function. The
application of f to an object X results in f1 : X if X ∈ AD (f)
and in f2 : X (that is, in X itself) if X ∈ ID (f).
Again, f 1 should be appropriately defined for handler func-
tions. The details of f i are not required for proving the
algebraic properties and hence will be ignored henceforth.
The following axioms can be written on the choice operator
using the definitions of functional forms presented
in Section III-A.
Axiom 1: [Composition]
Axiom 2: [Condition]
Axiom 3: [Construction]
Axiom 4: [Apply-to-all]
Axiom 5: [Insert]
(f in
The basis for the above axioms is that the selection (or
choice) is deterministic and the selection of i in one function
does not influence the selection of i in the other. Therefore,
the choice operator can be rewritten for one or more variables
with the choice subscripts suitably renamed. Lastly, the
constant functional returns its constant value for every object X, as its Active
Domain is O.
C. Algebraic Laws of FP Programs
In this section, we prove some of the algebraic laws of
FP programs listed in [1]. It is important to realize that
the FP programs considered here have been extended to
include exception and handler functions.
Composition is associative: (f ∘ g) ∘ h ≡ f ∘ (g ∘ h).
Composition and Condition:
Proof: The proof of this law is similar to that of L2. 2
Construct and Composition:
L4:
Using Axiom 3,
By Axiom 1,
L5:
Proof: This law can be proved in lines similar to that of
L2 and L4. Axioms 2 and 3 are used in the proof. 2
Construct and Apply-to-all:
Using Axiom 2,
Using Axiom 4,
Miscellaneous:
Proof: Using axioms 1 and 4, this law can be proved. 2
Proof: This law can be proved by using axiom 2 and the
algebraic law
The above laws are by no means exhaustive. One can
easily prove the other laws described in [1] in lines similar
to those discussed above.
V. Resume Exceptions
In this section we introduce the notations for programming
Resume exceptions.
A. Notations
Resume exceptions are denoted by ê, where e could possibly
be subscripted. The function Res ê is introduced to
raise the Resume exception. The handler H for a Resume
exception ê is written with an influx symbol,
indicating that ê is of type Resume. As mentioned
earlier, when the application of a function on an object X
raises a Resume exception, the object X is treated as cur-
able; the corresponding handler function is applied to the
object X at the activation point. If neither a corresponding
handler nor a default handler is present for a Resume
exception, a runtime error occurs when a Res ê function is
invoked.
Resume exceptions do not shield objects, and the existing
definitions of objects, functions and functional forms
are sufficient to express them. Explicit compile-time techniques
are required to accomplish the dynamic association of a
handler with a Resume exception. It is beyond the scope of
this paper to discuss an implementation for the association
of handlers.
We now present a simple example to illustrate how Resume
exceptions can be programmed.
Example 5.1
This example deals with a program that adds the magnitudes
of a sequence of numbers. The program calls a
function SUM which adds two numbers. An exception is
raised in SUM whenever either of the numbers considered
for addition is negative. The handler takes an input argument
and returns its magnitude. The SUM operation
is resumed after handling the exception.
The Negate function negates its input argument. It can
be seen that the COND function in the program
raises ê whenever its input is negative. The cure function
Negate is immediately applied to the input object when the
exception ê is raised.
B. Remarks
(i) At first sight, functions raising Resume exceptions appear
to violate referential transparency. Consider a
function F which raises a Resume exception ê. Suppose
H is the handler for ê, not necessarily associated
with F. Now F : X may result in one of two values,
depending on whether the object lies in the normal or
exception domain of F. Thus it appears that F is non-
deterministic. But this is not so and can be reasoned
in the following way. The function F can be considered
as a (sort of) higher order function with H as an argu-
ment. The function F selectively applies H whenever
the input object is in the exception domain. That is, F
is a deterministic function whose range can be divided
into two sets, one corresponding to the normal values
and the other to the exception values.
(ii) Resume exceptions can be indirectly used to define
higher order functions in FP. For example, consider a
function F which can be written as a composition of
two functions, say F = f2 ∘ f1.
If we want to define a function F(H) as F(H) = f2 ∘ H ∘ f1,
where H is supplied as an argument, then using the
Resume exception notation such a function can be defined
as F = f2 ∘ Res ê ∘ f1, with H associated as the handler for ê.
Thus, the function F can be considered to have a place
holder which can be filled by the handler function.
Different handler functions can be associated with F
to generate a number of F(H) functions. This can be
an interesting use of Resume exceptions.
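A small Haskell sketch of this use of Resume exceptions (f1 and f2 are arbitrary illustrative choices): the handler fills the hole between f1 and f2, so different handlers yield different instances of F(H).

    f1, f2 :: Int -> Int
    f1 = (+ 1)
    f2 = (* 2)

    -- fWith plugs a handler H into the place holder between f1 and f2.
    fWith :: (Int -> Int) -> Int -> Int
    fWith h = f2 . h . f1

    main :: IO ()
    main = do
      print (fWith id 3)       -- 8:  the hole filled by the identity handler
      print (fWith negate 3)   -- -8: a different handler yields a different F(H)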
VI. Related Work
There have been a few attempts [2, 3, 6, 7, 9, 10] to
incorporate exception handling in functional or applicative
languages. Here we compare the earlier proposals with
ours.
(i) The main difference between our work and the related
ones is the radical change in the way exceptions are
viewed. Earlier proposals [3, 7] treat exceptions as a
means of effecting a control transfer. This leads to a
fundamental conflict and as a remedy requires the imposition
of sequential execution. We treat exception
objects either as shielded or as requiring immediate
application of cure functions. This approach naturally
suits the functional style and, therefore, does not
necessitate any constraint on the execution model.
(ii) Our approach is similar to PSML [6] in that both
use error data values to handle exceptions. However
in PSML, as explained in Section III-B, deterministic
program behavior is retained by assigning priorities
to exceptions. Besides, prioritizing exceptions
involves additional implementation overheads. Gerald
[9], which uses the replacement model of Yemini
and Berry [11], also prioritizes exceptions to guarantee
deterministic behavior. On the other hand our ap-
proach, by following non-strict semantics, avoids the
need for such requirements.
(iii) In Gerald it has been argued that the up-propagation
of error signals transforms non-strict functions into
strict functions. To overcome this problem, the use
of firewalls and down-propagation has been proposed.
In Section III-C we established that it is the persistent
nature of error signals in Gerald that makes non-strict
functions into strict functions. As shielded objects
are non-persistent, our approach does not introduce
hyper-strictness.
(iv) In our scheme, an exception is automatically propagated
to higher level modules until an appropriate handler
is found. That is, the propagation of an exception
along the dynamic invocation chain is transparent to
the user. The programming language ML [7] also supports
implicit propagation of exceptions; in contrast,
in ALEX [3], the exceptions must be explicitly transmitted.
(v) ML [7] supports only the Terminate type of exceptions,
whereas our work, like ALEX, allows both Resume
and Terminate exceptions. (In addition, ALEX supports exceptions of a
different class, called Retry [3]. For Retry exceptions, after exiting the handler,
execution control is switched to the beginning of the module from which the
exception was raised. We do not attempt to incorporate this class of exceptions
because of the inherent control transfer present in them.)
(vi) FL language [2] has been designed with constructs for
exception handling. Though user-defined exceptions
can be programmed in FL, exceptions are mainly used
to signal application of inappropriate input objects to
primitive functions. Further, only one type of exception
is allowed in FL.
(vii) Wadler [10] suggests lazy evaluators need no extra constructs
to provide some exception handling - all function
results can be packaged as a singleton list, with
the null list representing the error result. However this
minimalistic approach requires considerable programmer
effort and discipline. Moreover his method does
not support different kinds of named exceptions.
VII. Conclusions
In this paper we have introduced the notations for exception
handling in FP languages for constructing reliable
software. The notations introduced are illustrated with the
help of example programs. In incorporating exception handling
in FP, the conventional view of treating exceptions
as a means of effecting control transfer has been discarded.
Our view of exceptions allows us to describe the semantics
of FP functions in a functional way, retaining referential
transparency and the nice mathematical properties of
functional languages. In fact, this has been accomplished
without imposing additional execution constraints, such as
sequentializing the execution. Further, the semantics of
the primitive functions of FP are defined in a non-strict
manner over the exception objects. This means that even
if eager evaluation strategies are followed in an implemen-
tation, the result will still be lazy over exception objects.
Lastly, our approach to exception handling does not introduce
hyper-strictness.
Even though we chose Backus' FP and incorporated exception
handling constructs in them, our scheme is general
and could be applied to any functional language. As the
proposed approach to exception handling has the advantages
of retaining (i) the nice mathematical properties of functional
languages, (ii) deterministic program behavior, and
(iii) the non-strictness of lazy languages, it will lead to an implementable
parallel programming language.
Acknowledgments
The author is thankful to the anonymous reviewers for
their helpful comments. The work presented in this paper
would not have taken its present shape without the
numerous discussions the author had with R.A. Nicholl,
University of Western Ontario, London, Canada.
--R
[1] "Can programming be liberated from the von Neumann style? A functional style and its algebra of programs."
[2] "FL language manual."
[3] "An exception handling construct for functional languages."
[4] "Exception handling and software fault tolerance."
[5] "Exception handling: Issues and a proposed notation."
[6] "Exception handling in a parallel functional language: PSML."
[7] "The definition of Standard ML, version 3."
[8] "System structure for software fault tolerance."
[9] "Gerald: An exceptional lazy functional programming language."
[10] "How to replace failure by a list of successes: A method for exception handling, backtracking and pattern matching in lazy functional languages."
[11] "An axiomatic treatment of exception handling in an expression-oriented language."
--TR
How to replace failure by a list of successes
An axiomatic treatment of exception handling in an expression-oriented language
Can programming be liberated from the von Neumann style?
Exception handling
An Exception Handling Construct for Functional Languages
--CTR
Margaret Burnett , Anurag Agrawal , Pieter van Zee, Exception Handling in the Spreadsheet Paradigm, IEEE Transactions on Software Engineering, v.26 n.10, p.923-942, October 2000
Takeshi Ogasawara , Hideaki Komatsu , Toshio Nakatani, EDO: Exception-directed optimization in java, ACM Transactions on Programming Languages and Systems (TOPLAS), v.28 n.1, p.70-105, January 2006 | exception handling;programmer;functional languages;functional programming;high level languages;terminate;referential transparency;input object;programming theory;commutativity properties;implementation restriction;resume |
631084 | Provable Improvements on Branch Testing. | This paper compares the fault-detecting ability of several software test data adequacy criteria. It has previously been shown that if C_1 properly covers C_2, then C_1 is guaranteed to be better at detecting faults than C_2, in the following sense: a test suite selected by independent random selection of one test case from each subdomain induced by C_1 is at least as likely to detect a fault as a test suite similarly selected using C_2. In contrast, if C_1 subsumes but does not properly cover C_2, this is not necessarily the case. These results are used to compare a number of criteria, including several that have been proposed as stronger alternatives to branch testing. We compare the relative fault-detecting ability of data flow testing, mutation testing, and the condition-coverage techniques, to branch testing, showing that most of the criteria examined are guaranteed to be better than branch testing according to two probabilistic measures. We also show that there are criteria that can sometimes be poorer at detecting faults than substantially less expensive criteria. | Introduction
Although a large number of software testing techniques have been proposed during the
last two decades, there has been surprisingly little concrete information about how effectively
they detect faults. This is true both in the case of theoretical analyses and
empirical studies. In fact, there are not even generally agreed-upon notions of what it
means for one testing strategy to be more effective than another.
Author's address: Computer Science Dept., Polytechnic University, 6 Metrotech, Brooklyn, N.Y.
11201. Supported in part by NSF Grant CCR-9206910 and by the New York State Science and Technology
Foundation Center for Advanced Technology program.
† Author's address: Computer Science Dept., Courant Institute of Mathematical Sciences, New York
University, 251 Mercer Street, New York, NY 10012. Supported in part by NSF grant CCR-8920701
and by NASA grant NAG-1-1238.
Most previous comparisons of software testing criteria have been based on the subsumes
relation. Criterion C 1 subsumes criterion C 2 if for every program P , every test
suite that satisfies C 1 also satisfies C 2 . Unfortunately, it is not clear what the fact that
C 1 subsumes C 2 tells us about their relative effectiveness. Weiss has argued that subsumption
is more useful for comparing the cost of criteria than their effectiveness [22].
Hamlet [13] pointed out that it is possible for C 1 to subsume C 2 , yet for some test
suite that satisfies C 2 to detect a fault while some test suite that satisfies C 1 does not.
Weyuker, Weiss, and Hamlet [24] further investigated the limitations of subsumption
and other relations of that nature including Gourlay's power relation [12], and a newly
proposed relation, the "better" relation.
In contrast, our approach is probabilistic. It is based on the fact that for a given
program, specification, and criterion, there are typically a large number of test suites
that satisfy a given test data adequacy criterion. Often, some of these suites will detect
a fault, while others do not. For this reason, we will compare adequacy criteria by
comparing the likelihood of detecting a fault and the expected number of faults detected
when test suites for each criterion are selected in a particular way. This not only allows us
to compare existing criteria in a concrete way, but also corresponds to a very reasonable
intuitive notion of what it means for one criterion to be better at detecting faults than
another.
In [8, 10] we explored the conditions under which one adequacy criterion is guaranteed
to be at least as good as another according to a certain probabilistic measure of fault-
detecting ability. That analysis was based on investigating how software testing criteria
divide a program's input domain into subsets, or subdomains. We showed that it is
possible for C 1 to subsume C 2 yet for C 2 to be better at detecting faults according to
this measure. A simple example illustrates how this can happen. Assume the domain
D of a program P is {0, 1, 2}, and 0 is the only input on which P fails. Assume that
C 1 requires selection of a test case from the subdomain {0, 1} and a test case from the
subdomain {2} and that C 2 requires selection of a test case from the subdomain {0, 1}
and a test case from the subdomain {0, 2}. Since every test suite that satisfies C 1 also
satisfies C 2 , C 1 subsumes C 2 . Selecting one test case from each subdomain yields two
possible test suites, {0, 2} and {1, 2}, only one of which will cause P to fail.
There are four possible C 2 -adequate test suites, {0, 0}, {0, 2}, {1, 0}, and {1, 2}, three
of which will cause P to fail. Thus, a test suite selected to satisfy C 2 is more likely to
detect a fault than one selected to satisfy C 1 .
This situation occurred because the only input that caused a failure was a
member of both subdomains of C 2 , but was a member of only one subdomain of C 1 .
Furthermore, overlapping subdomains are a common occurrence in software testing; most
testing criteria defined in the literature give rise to overlapping subdomains, often even
for very simple programs. Thus, as the examples below illustrate, this phenomenon is
not a mere theoretical curiosity, but one that occurs with real testing criteria and real
programs.
As a response to this type of problem, we introduced a stronger relation between
criteria, the properly covers relation, and proved that if C 1 properly covers C 2 , then
when one test case is independently randomly selected from each subdomain using a
uniform distribution, the probability that C 1 will detect at least one fault is greater than
or equal to the probability that C 2 will detect a fault [8, 10]. This is a powerful result
provided that the model of testing used is a reasonable model of reality. We will address
this issue in Section 2.1.
If we can show that C 1 properly covers C 2 for all programs in some class P, then we
will be guaranteed that C 1 is at least as good as C 2 (in the above sense) for testing any
program P in P, regardless of the particular faults that occur in P. On the other hand, if
does not properly cover C 2 for some program P , then even if C 1 subsumes C 2 , C 2 may
be more likely than C 1 to detect a bug in P . In this paper, we use the above result to
explore the relative fault-detecting ability of several well-known testing techniques. We
also introduce another measure of fault-detecting ability and show that if C 1 properly
covers C 2 , then C 1 is also at least as good as C 2 according to this new measure.
Three families of techniques that have been widely investigated are data flow testing,
mutation testing, and the condition-coverage techniques. We compare the relative fault-
detecting ability of criteria in these families to branch testing, showing that most of
the criteria examined are guaranteed to be better than branch testing according to the
probabilistic measures mentioned above, but that there are criteria that can sometimes
be poorer at detecting faults than substantially less expensive criteria.
Background and Terminology
A multi-set is a collection of objects in which duplicates may occur, or more formally,
a mapping from a set of objects to the non-negative integers, indicating the number
of occurrences of each object. We shall delimit multi-sets by curly braces and use set-theoretic
operator symbols to denote the corresponding multi-set operators throughout.
For a multi-set S 1 to be a sub-multi-set of multi-set S 2 , there must be at least as many
copies of each element of S 1 in S 2 as there are in S 1 . In the sequel, when we say that
some procedure is applied to each element of a multi-set {e 1 , e 2 , . . . , e n }, we mean
that the procedure is applied exactly n times: once to e 1 , once to e 2 , etc., without regard
for whether e i = e j for some i ≠ j.
The input domain of a program is the set of possible inputs. We restrict attention
to programs with finite input domains, but place no bound on the input domain size.
Since real programs run on machines with finite word sizes and with finite amounts of
memory, this is not an unrealistic restriction. A test suite is a multi-set of test cases,
each of which is an element of the input domain. We investigate test suites rather than
test sets because it is easier in practice to allow occasional duplication of test cases
than to check for duplicates and eliminate them. A test data adequacy criterion is a
relation C ⊆ Programs × Specifications × Test Suites that is used to determine whether
a given test suite T does a "thorough" job of testing program P for specification S. If
C(P, S, T) holds, we say "T is adequate for testing P with respect to S according
to C", or, more simply, "T is C-adequate for P and S". In addition to providing a
means for evaluating test suites, adequacy criteria can serve as the basis for test selection
strategies, as discussed below.
Many systematic approaches to testing are based on the idea of dividing the input
domain of the program into subsets called subdomains, then requiring the test suite
to include elements from each subdomain. The manner in which the input domain is
subdivided may be based on the structure of the program being tested (program-based
testing), the structure or semantics of its specification (specification-based testing), or
some combination thereof. As a group, these techniques have generally been referred to
as partition testing strategies, but in fact, most such strategies divide the input domain
into overlapping subdomains, and thus do not form true partitions of the input domain.
In this paper, we will refer to these strategies as subdomain-based testing.
More precisely, a testing criterion C is subdomain-based if, for each program P and
specification S, there is a non-empty multi-set of subdomains, SDC (P; S), such that C
requires the selection of one or more test cases from each subdomain in SDC (P; S). Note
that one could define adequacy criteria that require the selection of at least k test-cases
from each subdomain, for some k ? 1. However, we restrict attention here to criteria
that only require at least one test case per subdomain. This is not a serious limitation
since virtually all criteria discussed in the literature only require the selection of at least
one test case per subdomain. To model criteria that do explicitly require k ? 1 test cases
per subdomain, we could include k copies of each subdomain in SDC (P; S) and select one
test case from each copy. However, if this is done, the test selection strategy described
below may select a test suite with less than k distinct test cases from a subdomain.
In general, SDC (P; S) is a multi-set, rather than a set, because for some criteria it
is possible for two different requirements to correspond to the same subdomain. For
example, consider the all-statements criterion, in which each subdomain corresponds to
the set of inputs that cause the execution of a particular statement in the program. If
two different statements are executed by the same test cases, identical subdomains occur
in the multi-set of subdomains. This can happen either because the structure of the flow
graph dictates that every path covering one statement also covers another or because the
semantics of the program force two seemingly independent statements to be traversed
by exactly the same test cases. Of course, given a criterion C for which the multi-set
contains duplicates, one can define a new criterion C 0 in which the duplicates
are eliminated, but, as we shall discuss below, it is frequently easier to keep the duplicates
(and hence select "extra" test cases) than to check to see whether duplicates exist.
Note that since SDC (P; S) is assumed to be non-empty and at least one test case
must be chosen from each subdomain, the empty test suite is not C-adequate for any
subdomain-based criterion. A subdomain-based criterion C is applicable to (P,S) if and
only if there exists a test suite T such that C(P, S, T) holds. C is universally applicable
if it is applicable to (P; S) for every program, specification pair (P; S). Note that since
the empty test suite is not C-adequate, C is applicable to (P; S) if and only if the empty
subdomain is not an element of SDC (P; S).
Throughout this paper all testing criteria discussed will be universally applicable
subdomain-based criteria, unless otherwise noted. In fact, many of the criteria that have
been defined and discussed in the software testing literature are not universally applicable
[7]. For example, the all-statements criterion, which requires every statement in
the program to be executed, is not applicable to any program that has a statement that
cannot be exercised by any input. However, it is the universally applicable analogs of
those criteria that are actually usable in practice. That is, the form of statement testing
that is really used requires every executable statement to be exercised. Formally, these
versions are obtained by removing the empty subdomains from SDC (P; S). But determining
whether a subdomain is empty is in general undecidable. However, in practice,
it is often easy for testers to determine whether a subdomain is empty by inspecting the
program code or specification. Another way that testers pragmatically deal with this
issue is to make a tacit assumption that there is never more than some percentage x of
unexecutable statements (or branches, or whatever the relevant program artifact being
covered) and then require that (100 − x)% of the statements be exercised. Of course, it is
possible that more than x percent of the code is unexecutable, and then the tester is faced
with the same problem. In any case, our analysis will formally assess only the universally
applicable versions of any adequacy criterion. Note that the relationship between the
universally applicable analogs of criteria may be different than that between the original
criteria [7]. This will be discussed in Section 3.
Given a program P and a specification S, a failure-causing input t is one such that
the output produced by P on input t does not agree with the specified output. We will
say that a test suite T detects a fault in program P if T contains at least one failure-
causing input. Note that we are not concerned here with determining the particular
problem in P that caused the failure, only with determining that some problem exists.
Intuitively, a good testing strategy is one that is likely to require the selection of one or
more failure-causing inputs, if any exist.
2.1 The Model
There are many different ways to select a test suite that satisfies a given adequacy
criterion. We would like to be able to compare the likelihoods that test suites that
satisfy given adequacy criteria detect faults, without regard to how those test suites were
selected. However this problem is too vague to analyze, since the likelihood that a C-
adequate test suite detects a fault can only be defined with respect to a given probability
distribution on the space of test suites satisfying C. Such distributions may be defined
in some precise way, or may arise in practice from testers manually selecting test cases
which they consider to be "natural". Since the notion of what is "natural" differs from one
person to another, such distributions cannot be formally analyzed and are also extremely
difficult to study empirically.
Thus, in order to carry out an analysis of fault-detecting ability, we need to use a
test selection model that is well-defined, is not obviously biased in some criterion's favor,
and is not too far from reality. With this in mind, we assume that the tester selects
a test suite that satisfies a subdomain-based criterion C by first dividing the domain
based on SDC (P; S) = {D 1 , . . . , D n }, and then for each D i ∈ SDC (P; S) randomly selecting an element
of D i . In the sequel, we let d i be the size of subdomain D i , let m i be the number of
failure-causing inputs in D i , and let

    M(C, P, S) = 1 − ∏_{i=1}^{n} (1 − m_i/d_i).

Assuming one test case is independently selected from each subdomain according to a
uniform distribution, M gives the probability that a test suite chosen using this test
selection strategy will expose at least one fault. This measure has previously been investigated
in [4, 10, 14, 23] and was called M 2 in [10]. Note that if SDC (P; S) contains a
duplicate subdomain D i = D j , the model requires independent selection of one test case
from each copy of the subdomain.
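A short Haskell sketch of this computation, applied to the example of Section 1 (subdomains {0,1}, {2} for C 1 and {0,1}, {0,2} for C 2, with 0 the only failure-causing input):

    -- M(C,P,S) = 1 - product over subdomains of (1 - m_i/d_i),
    -- given the (m_i, d_i) pairs of the subdomains.
    m :: [(Int, Int)] -> Double
    m sds = 1 - product [1 - fromIntegral mi / fromIntegral di | (mi, di) <- sds]

    main :: IO ()
    main = do
      print (m [(1, 2), (0, 1)])   -- C 1 of the Section 1 example: 0.5
      print (m [(1, 2), (1, 2)])   -- C 2 of the Section 1 example: 0.75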
In several earlier works that compared the effectiveness of testing criteria, [4, 14,
23] the test selection procedure was actually specified as part of the criterion, thereby
obscuring the distinction between test selection and evaluation of test adequacy. For
example, Duran and Ntafos [4] and Hamlet and Taylor [14] compare random testing and
partition testing. For partition testing they speak of "dividing a program's input domain
into classes whose points are somehow 'the same'" so that "it is sufficient to try one
representative from each class"([14], p. 1402). Weyuker and Jeng [23] compared partition
testing strategies and random testing using a generalized form of test data selection in
which n i test cases were independently selected from each subdomain D i . Again the test
case selection procedure was integrated into the testing criterion.
Note, that all of these papers based their assessments of effectiveness on the measure
M . In addition, in all of these cases, it was assumed that the subdomain division was
performed before any test cases were selected, and that selection from each subdomain
was independent.
In contrast, in this paper we make the role of the selection criterion explicit. It could
be argued that this test selection model does not accurately reflect testing practice because
testers sometimes allow each test case to "count" toward all of the subdomains to
which it belongs. We argue that in fact, our test selection model is not very far from
reality in many cases. In particular, specification-based testing is frequently done by
first determining the test requirements, and then selecting test cases for each requirement
without regard for whether a test case also fulfills additional requirements. This
is especially true when system testing is done by an independent testing group, and test
cases are derived from the specification even before the implementation is complete. In
addition, we expect that as automated test generation tools targeted to program-based
testing techniques become more available, it will also become common to select test suites
for these criteria in a manner similar to the above strategy. Manually crafting test cases
is often far more costly than test case execution, so test suite size is an important issue
under these circumstances, but when the generation is done automatically, it may well be
easier to generate test cases for each test condition (subdomain) than to generate a test
case for a condition, execute it to see which additional conditions have been inadvertently
exercised, and remove them from consideration.
One further note: in practice, most proposed adequacy criteria are monotonic. This
means that if a test suite is adequate for the criterion, then any test suite formed by
adding test cases to the suite is also adequate. But by using our selection method, these
"extra" test cases cannot be added. In a sense we are comparing test suites containing
only elements required by the criterion.
2.2 Relations Among Testing Criteria
In [8, 10], we explored several relations R among subdomain-based criteria, asking
whether C 1 R C 2 guarantees that M(C 1 , P, S) ≥ M(C 2 , P, S). The most commonly used
relation in the literature for comparing criteria is the subsumes relation. Recall that
criterion C 1 subsumes criterion C 2 if and only if for every program P and specification
S, every test suite that satisfies C 1 also satisfies C 2 . We showed that the fact that C 1
subsumes C 2 does not guarantee that M(C 1 , P, S) ≥ M(C 2 , P, S). A simple example illustrating
this appeared in Section 1, above. We therefore introduced a stronger relation
among criteria, the properly covers relation, which is more relevant for comparing the
fault-detecting abilities of criteria.
In order to explain the intuition motivating the properly covers relation and its relationship
to subsumption, we first mention two weaker relations, the narrows and covers
relations, also defined in [8, 10].
C 1 narrows C 2 for (P; S) if for every subdomain D ∈ SDC2 (P; S)
there is a subdomain D' ∈ SDC1 (P; S) such that D' ⊆ D. C 1 universally
narrows C 2 if C 1 narrows C 2 for every program, specification pair (P; S).
In [10], we showed that for criteria that require selection of at least one element
from each subdomain (as opposed to explicitly requiring selection of k elements for some
k > 1), C 1 subsumes C 2 if and only if C 1 universally narrows C 2 . Thus, for all of the
criteria considered in the current paper, the universally narrows relation is equivalent to
subsumption.
C 1 covers C 2 for (P; S) if for every subdomain D ∈ SDC2 (P; S)
there is a collection {D 1 , . . . , D k } of subdomains belonging to SDC1 (P; S)
such that D = D 1 ∪ · · · ∪ D k . C 1 universally covers C 2 if C 1 covers C 2 for every program, specification pair (P; S).
In [10] we showed that various well-known criteria are related to one another by the
covers relation. We then showed that it is possible for C 1 to cover C 2 for (P; S), and still
have M(C 1 , P, S) < M(C 2 , P, S). The problem arises when one subdomain of C 1 is used
in covering two or more subdomains of C 2 . We therefore introduced the properly covers
relation which overcomes this problem.
Let SDC1 (P; S) = {D^1_1 , . . . , D^1_m }, and let SDC2 (P; S) = {D^2_1 , . . . , D^2_n }. C 1
properly covers C 2 for (P,S) if there is a multi-set
M = {D^1_{1,1} , . . . , D^1_{1,k_1} , . . . , D^1_{n,1} , . . . , D^1_{n,k_n} }
such that M is a sub-multi-set of SDC1 (P; S) and
D^2_i = D^1_{i,1} ∪ · · · ∪ D^1_{i,k_i} for each i, 1 ≤ i ≤ n.
Note that the number of occurrences of any subdomain D^1_{i,j} in M is less than or equal to
the number of occurrences of that subdomain in the multi-set SDC1 (P; S). In other words, C 1
properly covers C 2 if each of C 2 's subdomains can be "covered" by C 1 subdomains (i.e.,
can be expressed as a union of some C 1 subdomains), and furthermore, this can be done
in such a way that none of C 1 's subdomains occurs more often in the covering than it
does in SDC1 . C 1 universally properly covers C 2 if for every program P and specification
The following example illustrates the properly covers relation.
Example 1:
Consider a program P whose input domain is {x | 1 ≤ x ≤ 10}. Let C 2 be the criterion
whose subdomains for program P are D^2_1 = {x | 1 ≤ x ≤ 6} and D^2_2 = {x | 4 ≤ x ≤ 10}.
Let C 1 be a criterion with five subdomains D^1_1 , . . . , D^1_5 such that
D^2_1 = D^1_1 ∪ D^1_2 and D^2_2 = D^1_3 ∪ D^1_4 ∪ D^1_5 . Then C 1
properly covers C 2 for P; in this case
M = {D^1_1 , D^1_2 , D^1_3 , D^1_4 , D^1_5 }. Finally,
note that it is possible for one criterion to have more subdomains than another,
without properly covering it. This is the case for a criterion C'_1 obtained from C 1 by
removing some of its subdomains: such a C'_1 may narrow C 2 but not properly cover it.
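The definition can also be checked mechanically. The following Haskell sketch verifies the properly-covers condition for a concrete instance; the particular C 1 subdomains are hypothetical choices consistent with the structure of Example 1, since only their unions are fixed by the example.

    import qualified Data.Set as Set
    import Data.List (delete)

    type Subdomain = Set.Set Int

    -- Check that each C2 subdomain equals the union of the C1 subdomains assigned
    -- to it, and that the assignment as a whole uses no C1 subdomain more often
    -- than it occurs in SDC1 (the sub-multi-set condition).
    properlyCoveredBy :: [Subdomain] -> [(Subdomain, [Subdomain])] -> Bool
    properlyCoveredBy sdc1 assignment =
         all (\(d2, ds) -> Set.unions ds == d2) assignment
      && isSubMultiSet (concatMap snd assignment) sdc1
      where
        isSubMultiSet [] _       = True
        isSubMultiSet (x:xs) pool = x `elem` pool && isSubMultiSet xs (delete x pool)

    main :: IO ()
    main = do
      -- Hypothetical C1 subdomains consistent with Example 1's structure.
      let d11 = Set.fromList [1 .. 3]
          d12 = Set.fromList [4 .. 6]
          d13 = Set.fromList [4 .. 7]
          d14 = Set.fromList [8, 9]
          d15 = Set.fromList [10]
          sdc1 = [d11, d12, d13, d14, d15]
          d21  = Set.fromList [1 .. 6]    -- C2 subdomain {x | 1 <= x <= 6}
          d22  = Set.fromList [4 .. 10]   -- C2 subdomain {x | 4 <= x <= 10}
      print (properlyCoveredBy sdc1 [(d21, [d11, d12]), (d22, [d13, d14, d15])])  -- True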
Observation 1 For any (P; S), the narrows, covers, and properly covers relations are
all transitive. If C 1 properly covers C 2 , then C 1 covers C 2 . If C 1 covers C 2 then C 1
narrows C 2 . If SDC2 (P; S) is a sub-multi-set of SDC1 (P; S), then C 1 properly covers C 2 for (P; S).
The following theorem is proven in [10]:
Theorem 1 If C 1 properly covers C 2 for program P and specification S, then
M(C 1 , P, S) ≥ M(C 2 , P, S).
Thus, if we can show that C 1 universally properly covers C 2 , then we are guaranteed
that test suites chosen to satisfy C 1 (according to the above test selection strategy) are
at least as likely to detect faults as those chosen to satisfy C 2 .
Another reasonable measure of the fault-detecting ability of a criterion is the expected
number of failures detected. Again, let SDC (P; S) = {D 1 , . . . , D n }. Assuming independent
random selection of one test case from each subdomain, using a uniform distribution, this
is given by

    E(C, P, S) = ∑_{i=1}^{n} m_i/d_i .
We now show that if C 1 properly covers C 2 for a given program, specification pair, then
C 1 is also guaranteed to do at least as well as C 2 according to the measure E. Examples 2
and 6, below, show that this is not necessarily the case if C 1 subsumes C 2 , but does not
properly cover C 2 .
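A companion Haskell sketch for E, again using the example of Section 1: there E for C 2 is 1.0 and exceeds E for C 1, which is 0.5, even though C 1 subsumes C 2 (and does not properly cover it).

    -- E(C,P,S) = sum over subdomains of m_i/d_i (expected number of failures detected).
    e :: [(Int, Int)] -> Double
    e sds = sum [fromIntegral mi / fromIntegral di | (mi, di) <- sds]

    main :: IO ()
    main = do
      print (e [(1, 2), (0, 1)])   -- C 1 of the Section 1 example: 0.5
      print (e [(1, 2), (1, 2)])   -- C 2 of the Section 1 example: 1.0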
Theorem 2 If C 1 properly covers C 2 for program P and specification S, then E(C 1 , P, S) ≥ E(C 2 , P, S).
Proof:
We begin by noting that if D = D 1 ∪ · · · ∪ D k , then

    m/d ≤ ∑_{j=1}^{k} m_j/d_j ,    (1)

where d is the size of D, m is the number of failure-causing inputs in D, and d j and m j are
the corresponding quantities for D j . (Each failure-causing input in D belongs to at least
one D j , so m ≤ ∑_j m_j ; and since each D j ⊆ D, we have d j ≤ d and hence
m/d ≤ ∑_j m_j/d ≤ ∑_j m_j/d_j .) Let SDC2 (P; S) = {D^2_1 , . . . , D^2_n } and let
M = {D^1_{1,1} , . . . , D^1_{1,k_1} , . . . , D^1_{n,1} , . . . , D^1_{n,k_n} }
be a sub-multi-set of SDC1 (P; S) = {D^1_1 , . . . , D^1_m } such that
D^2_i = D^1_{i,1} ∪ · · · ∪ D^1_{i,k_i} for each i. For k = 1, 2 and for each i, let d^{(k)}_i be the size of D^k_i
and m^{(k)}_i be the number of failure-causing inputs in D^k_i . For each i, j, let d^{(1)}_{i,j} be the size of D^1_{i,j}
and m^{(1)}_{i,j} be the number of failure-causing inputs in D^1_{i,j} . Then

    E(C 2 , P, S) = ∑_{i=1}^{n} m^{(2)}_i / d^{(2)}_i    (2)
                 ≤ ∑_{i=1}^{n} ∑_{j=1}^{k_i} m^{(1)}_{i,j} / d^{(1)}_{i,j}    (3)
                 ≤ ∑_{i=1}^{m} m^{(1)}_i / d^{(1)}_i = E(C 1 , P, S).    (4)

Note that (3) follows from (1) and (2), since for each i, D^2_i = D^1_{i,1} ∪ · · · ∪ D^1_{i,k_i} . Since M
is a sub-multi-set of SDC1 (P; S), summation (4) involves all the summands in (3), and
perhaps some additional ones. The result follows immediately from the fact that each
summand is non-negative. □
Theorems 1 and 2 can easily be generalized to selection strategies in which non-uniform
distributions on subdomains are used, provided the distributions on overlapping
subdomains satisfy a certain compatibility property. A number of strategies for using an
arbitrary distribution on the entire input domain to induce such compatible distributions
on the subdomains are discussed in [6].
2.3 Program Structure
Testing criteria that are based only on the structure of the program being tested are called
program-based (or structural or white-box) techniques. For such criteria, the multi-set
SD C (P; S) is independent of the specification. The criteria we consider in the remainder
of this paper are all program-based. However, it is important to note that Theorems 1
and 2 hold for any subdomain-based criteria, regardless of the basis for the division of the
domain. Thus, specification-based (or functional or black-box) subdomain-based criteria
could also be compared using techniques similar to those used in this paper.
We sometimes represent a program by its flow graph, a single-entry, single-exit directed
graph in which nodes represent sequences of statements or individual statements,
and edges represent potential flow of control between nodes. A path from node n 1 to
node n k is any sequence (n 1 , . . . , n k ) of nodes such that for each i, 1 ≤ i < k, (n i , n i+1 ) is an
edge. A suffix of a path (n 1 , . . . , n k ) is any path (n i , . . . , n k ), 1 ≤ i ≤ k. A path is
feasible if there exists some input that causes it to be executed and infeasible otherwise.
A variable v has a definition in node n if n contains a statement in which v is assigned
a value. Variable v has a use in node n if n contains a statement in which v's value is
fetched. A use of a variable occurring in the Boolean expression controlling a conditional
or loop statement is sometimes associated with each of the edges leaving that node, and
called a predicate use or p-use. A definition of v in node d reaches a use of v in node or
edge u if there is a definition-clear path with respect to v from d to u, i.e., a path from d
to u along which v is not redefined.
Since the criteria we consider here are based on program structure, it is necessary
to select a fixed language for the programs under test. For this reason, we will limit
attention to programs written in Pascal. Our results do not depend in any essential
way on this choice of language. We will also assume that every program has at least one
conditional or repetitive statement, and that at least one variable occurs in every Boolean
expression controlling a conditional or repetitive statement in the program. Note that
this variable occurrence may be implicit, as in the use of the input file variable in the
statement, while not eof do S.
If the first requirement is not satisfied, then every input traverses exactly the same
path through the program. If the latter requirement is not fulfilled, the Boolean expression
will always evaluate to true or will always evaluate to false, and the other branch
will be unexecutable. Note that we do not require that programs be "well-structured"
or goto-less. In contrast, for reasons described in Section 3 below, we did have this
requirement in our earlier paper [10].
We also require programs to satisfy the no feasible anomalies (NFA) property: every
feasible path from the start node to a use of a variable v must pass through a node having
a definition of v. This is a reasonable property to require, since programs that do not
satisfy this property have the possibility of referencing an undefined variable. 1 Although
there is no algorithm to check whether or not the NFA property holds, it is easy to check
the stronger no anomalies property, which requires that every path from the start node
to a use of v, whether feasible or not, pass through a node having a definition of v.
An algorithm for this is presented in [5]. It is also possible to enforce the no anomalies
property by considering the entry node to have definitions of all variables.
Branch testing, also known as decision-coverage, is one of the most widely discussed
subdomain-based criteria. A decision is a maximal Boolean expression controlling the
execution of a conditional statement or loop. For example, in the statement if (x=1) and (y=1) then S, the Boolean expression (x=1) and (y=1) is a decision. In the
decision-coverage criterion, there are two subdomains for each decision, one consisting
of all inputs that cause it to evaluate to true at some point during execution and one
consisting of all inputs that cause it to evaluate to false at some point during execution 2 .
Note that these two subdomains are not necessarily disjoint since if the decision is within
1 Clarke et al. [2] have pointed out that a program may legitimately have a feasible definition-clear
path with respect to v from the start node to a call to procedure Q that defines reference parameter v.
In such cases, the NFA property can be enforced by considering the argument v to be defined before it
is used in the call to Q. Indeed, if this is not the actual data flow, then Q or P may attempt to reference
an undefined variable.
2 In [10] we investigated a different variant of branch testing, called all-edges, in which each edge in
the program's flow graph gives rise to a subdomain. The distinction between all-edges and decision
coverage is discussed in [9].
a loop, a single test case may cause the decision to evaluate to true on one iteration of
the loop and to false on another iteration.
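The overlap mentioned above is easy to observe by brute force. The Python sketch below (illustrative only; the loop and domain are invented) records the truth values taken by the decision n > 1 of the loop while n > 1 do n := n div 2, and builds the two decision-coverage subdomains over a small integer domain.

def outcomes(n):
    # truth values taken by the decision n > 1 while executing the loop on input n
    values = []
    while True:
        values.append(n > 1)
        if not (n > 1):
            return values
        n = n // 2
domain  = range(0, 16)
D_true  = {n for n in domain if True in outcomes(n)}    # decision true at some point
D_false = {n for n in domain if False in outcomes(n)}   # decision false at some point
print(sorted(D_true & D_false))                         # 2..15: the two subdomains overlap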
In the remainder of the paper, we use the universally properly covers relation to
compare various criteria to the decision-coverage criterion and to each other. We thereby
exhibit criteria that are guaranteed to be at least as good as decision-coverage according
to the two measures of fault-detecting ability M and E. In each case, we compare criteria
C_1 and C_2 such that C_1 strictly subsumes C_2. It therefore follows that for these criteria, C_2
does not universally properly cover C_1. We thus focus attention on the question of
whether or not C 1 universally properly covers C 2 .
3 Data Flow Testing
Several of the criteria that have been proposed as more powerful alternatives to branch
testing involve the use of data flow information. These criteria are based on data flow
analysis, similar to that done by an optimizing compiler, and require that the test data
exercise paths from points at which values are assigned to variables, to points at which
those values are used. In this section we examine several data-flow based testing criteria
and show that each universally properly covers the decision-coverage criterion, and thus
can be viewed as being better at detecting faults than branch testing. We also compare
various data flow testing criteria to one another.
The all-uses criterion [18, 19] requires that test data cover every definition-use association
in the program, where a definition-use association is a triple (d; u; v) such that d is
a node in the program's flow graph in which variable v is defined, u is a node or edge in
which v is used, and there is a definition-clear path with respect to v from d to u 3 . We
will frequently refer to a definition-use association as an association. A test case t covers
association (d; u; v) if t causes a definition-clear path with respect to v from d to u to be
executed. Similarly, the all-p-uses criterion [18, 19] is a restricted version of all-uses that
requires that test data cover every association (d; u; v) in which u is an edge with a p-use
of variable v. Precise definitions of the criteria for a subset of Pascal similar to the one
in question are given in [7].
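As an informal illustration (not the formal machinery of [7] or [18, 19]), the Python sketch below computes the definition-use associations of a tiny invented flow graph by searching for a definition-clear path from each definition to each use; path feasibility is ignored and p-uses are attached to nodes rather than edges, purely to keep the example short.

from itertools import product
# Flow graph of a toy program: 1: read(x); 2: if x > 0 then 3 else 4; 3: y := x; 4: y := 0; 5: write(y)
edges = {1: [2], 2: [3, 4], 3: [5], 4: [5], 5: []}
defs  = {1: {"x"}, 3: {"y"}, 4: {"y"}}          # node -> variables defined there
uses  = {2: {"x"}, 3: {"x"}, 5: {"y"}}          # node -> variables used there
def def_clear_path(d, u, v):
    # True if some path from d to u redefines v at no intermediate node
    stack, seen = [d], set()
    while stack:
        n = stack.pop()
        for m in edges[n]:
            if m == u:
                return True
            if m not in seen and v not in defs.get(m, set()):
                seen.add(m)
                stack.append(m)
    return False
associations = [(d, u, v)
                for d, u in product(defs, uses)
                for v in defs[d] & uses[u]
                if def_clear_path(d, u, v)]
print(associations)   # [(1, 2, 'x'), (1, 3, 'x'), (3, 5, 'y'), (4, 5, 'y')]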
Lemma 1 All-uses universally properly covers all-p-uses. All-p-uses universally properly
covers decision-coverage.
Proof:
Since SD_all-p-uses(P, S) ⊆ SD_all-uses(P, S) for every program P and specification S, all-uses universally properly covers all-p-uses.
3 Note that if u is an edge (n, m), the covering path must be of the form (d, ..., n, m), i.e., it must
include both the head and the tail of the edge.
The proof that all-p-uses universally properly covers decision-coverage is similar to
the proof in [10] that all-p-uses universally covers all-edges. Let P be a program, let d be
a decision in P, and let D d be the subdomain consisting of all inputs that cause decision
d to evaluate to true (alternatively we could let D d be the subdomain consisting of all
inputs that cause decision d to evaluate to false). Let e be the edge that is executed if
and only if d evaluates to true (or to false). Since we are only interested in the feasible
analog of decision-coverage, we can assume that D_d ≠ ∅, i.e. that e is feasible.
Let v be a variable occurring in decision d. Let δ_1, ..., δ_n be the definitions of v for
which there is a feasible definition-clear path with respect to v from δ_i to e. For each i,
let D_i be the subdomain {t : t covers association (δ_i, e, v)}. Recall that we are
limiting attention to programs that satisfy the NFA property. Thus, every feasible path
from the start node to e passes through at least one of the δ_i. Conversely,
any test case that exercises one of the associations (δ_i, e, v) must exercise edge e. Hence D_d = D_1 ∪ ... ∪ D_n.
Since each outcome of each decision in P gives rise to a distinct set of associations,
all-p-uses universally properly covers decision-coverage. 4 2
We next consider Ntafos' required k-tuples criteria [17]. These criteria require the
execution of paths going from a variable definition to a use that is influenced by the
definition, via a chain of intervening definitions and uses. A k-dr interaction is a sequence
X_1, ..., X_{k-1} of variables along with a sequence s_1, ..., s_k of distinct statements such that
variable X_i is defined in s_i, used in s_{i+1}, and there is a definition-clear path with respect
to X_i from s_i to s_{i+1}. Note that the value assigned to X_1 in s_1 can influence the value
of X_{k-1} which is used at s_k. The required k-tuples criterion requires that each k-dr
interaction be exercised, i.e., that a path s_1 p_1 s_2 p_2 ... p_{k-1} s_k be executed, where p_i is a
definition-clear path with respect to X_i. Certain requirements based on control flow are
also included.
Clarke et al. [2] pointed out certain technical problems with the original definition
and defined the required k-tuples+ criterion by making the following two modifications:
1. all l-dr interactions must be exercised for l ≤ k, and
2. the statements s i in the path need not be distinct.
They showed that required k-tuples+ subsumes required (k−1)-tuples+, and that required
2-tuples+ subsumes all-uses. Without modification (1), required k-tuples fails to
subsume required (k−1)-tuples, and without modification (2), required 2-tuples fails to
subsume all-uses.
4 In [10] we showed that all-p-uses universally covers, but does not universally properly cover all-
edges. In order to do so, we restricted attention to programs with no goto statements. Both the failure
to universally properly cover and the need to restrict the class of programs arose due to edges in P 's flow
graph that did not have any p-use. Such edges are an artifact of the conventions we used for building
flow graphs, not a fundamental aspect of program structure. When we consider the decision-coverage
version of branch testing rather than the all-edges version, these extraneous edges cease to exist, and
the problems disappear.
Lemma 2 For all k ≥ 2, the required (k+1)-tuples+ criterion universally properly covers
the required k-tuples+ criterion. The required 2-tuples+ criterion universally properly
covers all-uses.
Proof:
This follows immediately from the fact that for k ≥ 2,
SD_required-k-tuples+(P, S) ⊆ SD_required-(k+1)-tuples+(P, S), and that SD_all-uses(P, S) ⊆ SD_required-2-tuples+(P, S). 2

We next consider Laski and Korel's criteria [15], which have subsequently become
known as context-coverage and ordered-context-coverage. These criteria consider paths
through definitions of all variables used in a given statement. Let X_1, ..., X_k be the variables
that are all used in node n. An elementary data context for n is a set {δ_1, ..., δ_k} such that each δ_i
is a definition of X_i and there is a path p from the start node to node n such that, for each i, p
has a suffix that is a definition-clear path with respect to X_i from δ_i
to n. Thus, control can reach n with the variables having the values that were
assigned to them in nodes δ_1, ..., δ_k, respectively. The context-coverage criterion requires
execution of such a path for each context. Clarke et al. [2] defined the context-coverage+
criterion by making the following modifications:
1. each subset of the set of variables used in node n gives rise to a context;
2. the execution of paths to the successors of node n is required.
They showed that context-coverage+ subsumes all-uses. Modification (1) was motivated
by the fact that there may be a definition-clear path with respect to X i from the start
node to a use of X i in n. Since we are assuming the NFA property, in the class of
programs considered here, no such path can be feasible. Furthermore, since there are 2^k
subsets of a set of k variables, modification (1) may lead to extremely large numbers of
subdomains.
The Laski and Korel definitions associated uses occurring in decisions with the decision
node, not with the edges leaving that node. Modification (2) was added in order to
insure that context-coverage subsumed branch testing. Alternatively, this can be achieved
by distinguishing p-uses from c-uses and associating p-uses with edges, as in [18, 19]. In
the remainder of the paper, we will use the term context-coverage to refer to the original
Laski-Korel criterion, with this minor modification. We use the notation (δ_1, ..., δ_k; u)
to denote the context arising from definitions of the X_i in nodes δ_i and uses of X_1, ..., X_k in
node or edge u.
Laski and Korel also introduced the ordered-context-coverage criterion [15]. An ordered
elementary data context for node n is a permutation of an elementary data context
for n. The criterion requires that each ordered-context be exercised by a path that visits
the definitions in the given order.
Lemma 3 Ordered-context-coverage universally properly covers context-coverage. Context-
coverage universally properly covers decision-coverage.
Proof:
By Observation 1, the fact that ordered-context-coverage universally properly covers
context-coverage follows immediately from the definitions. The proof that context-
coverage universally properly covers decision-coverage is similar to that of Lemma 1.
Let v_1, ..., v_k be the variables occurring in the decision in node n, let m be a successor
node of n, let D be the decision-coverage subdomain corresponding to (n, m), let
{C_1, ..., C_l} be the set of contexts corresponding to edge (n, m), and let D_1, ..., D_l
be the corresponding subdomains. Consider a test case t ∈ D. Let δ_1, ..., δ_k be the last
definitions of v_1, ..., v_k, respectively, before the first occurrence of (n, m) in the path
executed by t. Clearly, {δ_1, ..., δ_k} is one of the contexts C_i, so t ∈ ∪D_i.
Conversely, assume t ∈ ∪D_i. By the definition of context-coverage (as modified to
associate p-uses with edges), t executes a path that includes (n, m), and hence t ∈ D. Thus D = ∪D_i.
Since each edge from each decision gives rise to a distinct set of contexts, the covering
is proper. 2
Note that if a statement s uses variables v_1, ..., v_k and the number of definitions of v_i that
reach s is a_i, then the number of executable contexts arising from this single
statement may be as high as Π_{i=1}^{k} a_i. Furthermore, the number of executable ordered
contexts arising from each context may be as high as k!. Thus both context-coverage
and ordered-context-coverage have the potential of being extremely expensive criteria.
In contrast, the number of definition-use associations arising from statement s is
Σ_{i=1}^{k} a_i.
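The gap between these counts is easy to see numerically; the figures below are purely hypothetical (a statement using four variables with a few reaching definitions each).

from math import prod, factorial
a = [3, 3, 4, 4]                       # reaching definitions per used variable (assumed)
print(sum(a))                          # 14 definition-use associations
print(prod(a))                         # up to 144 elementary data contexts
print(factorial(len(a)) * prod(a))     # up to 3456 ordered contexts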
One might expect context-coverage and ordered-context-coverage to be guaranteed to
be better at exposing faults than simpler data flow testing criteria. Hamlet has argued
that such criteria should be good at concentrating failure-causing input [13]. In fact, this
is not always the case. We next show that these criteria are not guaranteed to be better
at detecting faults than the all-p-uses criterion.
Lemma 4 Context-coverage does not universally properly cover all-p-uses or all-uses.
Ordered-context-coverage does not universally properly cover all-p-uses or all-uses.
Proof:
Consider the following program: it takes x and y as inputs, assigns x a value in a two-branch conditional whose decision involves only x (the else-branch sets x to 0), assigns y a value in a two-branch conditional whose decision involves only y (the else-branch sets y to 0), and finally branches on a decision involving both x and y.

Figure 1: Program for which context-coverage does not properly cover all-p-uses.
A flow graph for this program is shown in Figure 1. Note that the decision in node
8 always evaluates to true, thus control always follows edge (8; 9). This is purely a
convenience to simplify the descriptions of the subdomains and the calculations of M
and E.
The only variable used on edges (2; 3) and (2; 4) is x. Consequently, the def-p-use
subdomains arising from these p-uses are identical to the context subdomains arising
from them. Similarly, the subdomains arising from def-p-use associations (1; (5; 6); y)
and (1; (5; 7); y) are identical to the corresponding context subdomains.
The only difference between the two criteria comes from the decision involving both
variables x and y. The def-p-use associations arising from this decision give rise to
subdomains D_1, D_2, D_3, and D_4, shown in Table 1, and the
contexts arising from this decision give rise to subdomains D_5, D_6, D_7, and D_8.
Observe that the context subdomain D_8 must be used both in
any covering of D_2 and in any covering of D_4, and hence context-coverage does not properly cover all-p-uses
for this program.
The ordered-contexts of this program are identical to the contexts. This is because
every path from the start node to edge (8,9) passes through a definition of x in node 4
Table 1: Subdomains of all-p-uses, context-coverage, and ordered-context-coverage (columns: id, subdomain, dua/context, d, m).
or node 5 before it passes through a definition of y in node 6 or node 7. Thus ordered-
context-coverage does not properly cover all-p-uses for this program.
The fact that context-coverage and ordered-context-coverage do not universally properly
cover all-uses follows from the transitivity of the universally properly covers relation.

Example 2:
Now consider a specification for which all of the failure-causing inputs are in D_8, as well as in some other subdomains. Based on the values of d_i and m_i
shown in Table 1, M(all-p-uses, P, S) is greater than M(context-coverage, P, S) and M(ordered-context-coverage, P, S); in these calculations, A denotes
the probability that no failure-causing input is selected from any of the
subdomains arising from uses other than the ones on edge (8, 9). Similarly,
E(all-p-uses, P, S) is greater than E(context-coverage, P, S) and E(ordered-context-coverage, P, S); here B denotes the expected number of failure-causing inputs selected from the subdomains
arising from uses other than the ones on edge (8, 9). Thus, for this program,
all-p-uses is better than context-coverage or ordered-context-coverage according to the
measures M and E. 2
Note that this program also illustrates how 2^k contexts can arise from a statement
using k variables, and thus even (unordered) context-coverage may be prohibitively expensive.
Summarizing the lemmas in this section, we have
Theorem 3 All-uses, all-p-uses, required-k-tuples+, context-coverage, and ordered-context-coverage
all universally properly cover decision-coverage. For k ≥ 2, required-(k+1)-tuples+
universally properly covers required-k-tuples+. Required 2-tuples+ universally properly
covers all-uses. Neither context-coverage nor ordered-context-coverage universally
properly covers all-p-uses or all-uses.
We conclude this section by mentioning two other data flow testing criteria, the all-
du-paths criterion [19] and the all-simple-OI-paths criterion [20]. Roughly speaking, the
all-du-paths criterion requires the execution of particular paths from variable definitions
to uses, while the all-simple-OI-paths criterion requires the execution of particular types
of paths that cover chains of definitions and uses leading from inputs to outputs. The
restrictions on the kinds of paths considered arise from control flow considerations - in
all-du-paths, attention is restricted to simple paths, i.e. paths in which all nodes, except
possibly the first and last, are distinct, while in the all-simple-OI-paths criterion,
attention is restricted to paths that traverse certain loops zero, one, or two times. Rapps
and Weyuker showed that all-du-paths subsumes all-uses and Ural and Yang showed
that all-simple-OI-paths subsumes all-du-paths. However, these results were based on
the original (non-applicable) versions of the criteria. We have previously shown that
when one considers instead the applicable analogs of the criteria, all-du-paths does not
even subsume branch testing [7]. Similar problems arise with all-simple-OI-paths. Con-
sequently, by careful placement of faults in executable portions of the code that are not
included in any executable du-path or in any executable simple OI-path, it is possible to
construct programs for which these criteria are less likely to expose a fault than branch
testing.
4 Mutation Testing
We next consider the mutation testing criterion [3]. Unlike the other criteria examined
so far, mutation testing is not a path-oriented criterion. Instead, it considers a test suite
T adequate for testing program P if T distinguishes P from each of a set of variants of
P, called mutants. These mutants are formed by applying mutation operators, which are
simple syntactic changes, to the program. In mutation testing, the subdomains are of
the form {t : P(t) ≠ P′(t)}, where P′ is a particular mutant of P.
Obviously the number and nature of the subdomains will depend on exactly which
mutation operators are used. It is well-known that if the mutation operators include the
following two operators, then mutation testing subsumes branch testing [1]:
1. replace a decision by true
2. replace a decision by false.
Since these are the only mutation operators that are directly relevant to our comparison
of mutation testing to branch testing, we will use the term limited mutation testing to
refer to mutation testing with only these two operators.
To see that limited mutation testing subsumes decision-coverage, consider a decision
d in program P. Let D_{d=T} be the decision-coverage subdomain arising from the true
outcome of this decision. Let P′ be the mutant in which d is replaced by false, let D_{P′}
be the corresponding mutation testing subdomain, and let t ∈ D_{P′}, i.e. t is an input
such that P(t) ≠ P′(t). Therefore, t must cause d to evaluate to true at least once -
otherwise there would be no distinction between the computations of P and P′ on t. Thus
D_{P′} ⊆ D_{d=T}. However, as the proof of the following theorem shows, D_{P′} and D_{d=T} are
not necessarily identical, because even though the "wrong" branch is taken in the mutant,
the output may not be affected. This observation allows the construction of programs
for which decision-coverage is more likely to detect a fault than limited mutation testing.
Theorem 4 Limited mutation testing does not universally cover decision-coverage.
Proof:
Consider the following program:

P(x: integer);
begin
  if C(x) then y := x div 2  { integer part of x/2 }
  else y := x / 2;
  write(y)
end

Note that when x is even, the output is x/2 regardless of whether or not C(x) holds.
Let P′ be the mutant in which C(x) is replaced by true and let P″ be the mutant in
which C(x) is replaced by false. The subdomains arising from these mutants are
D_{P′} = {x : not C(x) and Odd(x)} and D_{P″} = {x : C(x) and Odd(x)},
and the subdomains arising from decision-coverage are
D_{C=T} = {x : C(x)} and D_{C=F} = {x : not C(x)}.
Thus no even element of either D_{C=F} or D_{C=T} belongs to either of the limited mutation
testing subdomains, so limited mutation testing does not cover decision-coverage
for this program. 2
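The proof can be checked by enumeration. The Python sketch below models the program and its two mutants over a small integer range, assuming, purely for illustration, that C(x) is the predicate x > 0 (the argument does not depend on this choice).

def P(x, cond):                     # the program, with cond playing the role of C(x)
    return x // 2 if cond(x) else x / 2
C       = lambda x: x > 0           # assumed decision, for illustration only
C_true  = lambda x: True            # mutant P': decision replaced by true
C_false = lambda x: False           # mutant P'': decision replaced by false
domain = range(-6, 7)
D_P1 = {x for x in domain if P(x, C) != P(x, C_true)}   # {x : not C(x) and Odd(x)}
D_P2 = {x for x in domain if P(x, C) != P(x, C_false)}  # {x : C(x) and Odd(x)}
print(sorted(D_P1), sorted(D_P2))
print(all(x % 2 != 0 for x in D_P1 | D_P2))             # True: no even input kills a mutant
# The decision-coverage subdomains {x : C(x)} and {x : not C(x)} do contain even inputs.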
Example 3:
Let P be the above program and let S be any specification such that there is at least
one failure-causing input and all failure-causing inputs are even. Then M(decision-coverage, P, S) > 0 = M(limited mutation testing, P, S), and likewise E(decision-coverage, P, S) > 0 = E(limited mutation testing, P, S). 2

At first glance, one might attribute this rather surprising result to the highly restricted
form of mutation testing considered. We therefore now consider mutation testing with
additional mutation operators. Each application of a mutation operator gives rise to an
additional non-empty subdomain, provided that the mutated program is not equivalent
to the original program. Thus, additional mutation operators can certainly increase the
fault-detecting ability. However, most mutation operators proposed in the literature do
not necessarily lead to proper coverage of branch testing. We therefore believe that, even
with the addition of other mutation operators, it will be possible to construct programs
for which mutation testing is less likely to detect faults than branch testing when assessed
using either M or E.
5 The Condition Coverage Family
Several criteria based on considering the individual conditions that comprise a decision
have been proposed. We will refer to these criteria as the condition-coverage family. Consider
a conditional statement controlled by a compound predicate, such as if (A and B)
then S. Branch testing would require the selection of a test case that makes the predicate
(A and B) evaluate to true and a test case that makes (A and B) evaluate to false.
Note that it is possible to adequately test this statement using the branch testing strategy,
without ever having the sub-expression B be false, simply by selecting one test case
making both A true and B true and another making A false and B true. Similarly the
statement if (A or B) then S can be adequately branch-tested without B ever evaluating
to true. Myers [16] argued that this is a weakness of branch testing, since a fault
Figure 2: A simple program (a conditional statement controlled by the decision (x=1) and (y=1)).

Figure 3: Flow graph of the program from Figure 2.
such as B being the wrong expression could go undetected when using branch testing. He
therefore introduced three new criteria, condition-coverage, decision-condition-coverage,
and multiple-condition-coverage intended to overcome this deficiency.
Recall that a decision is a maximal Boolean expression controlling the execution of
a conditional statement or loop and that decision-coverage requires that every decision
take on the value true at least once during testing and also take on the value false at
least once. For example, consider the program shown in Figure 2; a flow graph of this
program is shown in Figure 3. In this program, (x=1) and (y=1) is a decision.
A condition is a Boolean variable, a relational expression, or a Boolean function
occurring in a decision. For this example, (x=1) is a condition, as is (y=1). Thus a
decision is made up of conditions. Condition-coverage requires that every condition take
on the value true at least once and take on the value false at least once. For this
example, two test cases are sufficient to satisfy condition-coverage. Test suites of the
form {t_1, t_2}, where t_1 makes both conditions true and t_2 makes both conditions false,
among others, would satisfy the criterion.
Decision-condition-coverage requires that every decision take on the value true at
least once and take on the value false at least once and that every condition take on
the value true at least once during testing and take on the value false at least once. In
this example, the same test suite that satisfied condition-coverage would satisfy decision-
condition-coverage.
Multiple-condition-coverage requires that every combination of truth values of conditions
occurs at least once during testing. 5 For the example of Figure 2, multiple-
condition-coverage would require four test cases, one for each of the four combinations of truth values of the conditions (x=1) and (y=1).
Note that the number of subdomains arising from multiple condition coverage is
Σ_{i=1}^{n} 2^{c_i}, where n is the number of decision statements in P, and c_i is the number of conditions
in the i-th decision in P. Thus, in the worst case the number of test cases required by
multiple-condition-coverage is exponential in the number of conditions in P , while the
numbers of test cases required by decision-coverage, condition-coverage, and decision-
condition-coverage are all at worst linear in the number of conditions.
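The difference in growth rates is illustrated by the following back-of-the-envelope Python sketch; the decision shapes are assumed, not taken from any particular program.

c = [2, 3, 5, 8]                       # conditions per decision (hypothetical program)
n = len(c)
print(sum(2 ** ci for ci in c))        # multiple-condition-coverage: 300 subdomains
print(2 * n)                           # decision-coverage: 8
print(2 * sum(c))                      # condition-coverage: 36
print(2 * n + 2 * sum(c))              # decision-condition-coverage: 44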
Example 4:
Consider again the program shown in Figure 2, with the input domain given earlier. The decision-coverage subdomains are

SD_dc(P, S) = {D_1, D_2},

where D_1 consists of the inputs that make the decision (x=1) and (y=1) evaluate to true and D_2 of those that make it evaluate to false. Let D_3 and D_4 be the subdomains corresponding to the conditions (x=1) and (y=1), and let D_5 and D_6 be the subdomains corresponding to their negations.
5 There is some variation in the meaning of the term multiple-condition-coverage in subsequent software
testing literature. Our usage of the terms here is consistent with Myers' definitions.
These are the subdomains corresponding to the conditions and their negations, so the
multi-set of subdomains arising from condition-coverage is

SD_cc(P, S) = {D_3, D_4, D_5, D_6}.

The multi-set of subdomains arising from decision-condition-coverage is

SD_dcc(P, S) = {D_1, D_2, D_3, D_4, D_5, D_6}.

Finally, let D_7, D_8, and D_9 be the subdomains corresponding to the remaining combinations of truth values of the two conditions.
These subdomains, along with D_1, correspond to the combinations of conditions, so the
multi-set of subdomains arising from multiple-condition-coverage is

SD_mcc(P, S) = {D_1, D_7, D_8, D_9}. 2

Notice that for this program the condition-coverage and decision-condition-coverage
criteria give rise to non-disjoint subdomains. More generally, programs exist for which
each of the criteria in this family have non-disjoint subdomains. There are several reasons
for this. First, the conditions in a compound predicate need not be mutually exclusive.
Second, each condition or negation of a condition overlaps with the decision in which it
occurs or the negation of that decision. Furthermore, even the subdomains arising from
a particular decision or condition and its own negation, or from different combinations
of conditions and their negations may intersect; this can occur when the decision occurs
within a loop - a single test case may cause the decision (or condition or combination of
conditions) to evaluate to true on one iteration of the loop and to false on another.
Myers points out (though not using this terminology) that condition-coverage does
not subsume decision-coverage but decision-condition-coverage does. He then points out
what he considers to be a deficiency in decision-condition-coverage: it is possible to satisfy
decision-condition-coverage without executing all the branches in the transformed flow
graph in which compound decisions are broken into series of decisions containing only
individual conditions. Myers introduced multiple-condition-coverage to overcome this
deficiency. He indicates (correctly) that multiple-condition-coverage subsumes decision-
condition-coverage and concludes that multiple-condition-coverage is superior to decision-
condition-coverage. Summarizing the criteria, Myers says,
For programs containing decisions having multiple conditions, the minimum
criterion is a sufficient number of test cases to evoke all possible combinations
of condition outcomes in each decision, and all points of entry to the program,
at least once.([16], p. 44)
Given that Myers recommends multiple-condition-coverage as a minimal criterion, in
spite of its expense, in a book that is widely used by testing practitioners, it is worth investigating
whether multiple-condition-coverage is really good at detecting faults. We show
that both decision-condition-coverage and multiple-condition-coverage universally properly
cover decision-coverage, but that multiple-condition-coverage does not universally
properly cover decision-condition-coverage. We exhibit a program for which decision-
condition-coverage is better than multiple-condition-coverage according to both M and E.

Before examining the relationship between the criteria, we note that one could argue
that decision-condition-coverage has an "unfair advantage" over multiple-condition-
coverage in the program in Figure 2 because in this particular program there are more
decision-condition-coverage subdomains than multiple-condition-coverage subdomains.
In fact, the reason multiple-condition-coverage fails to properly cover decision-condition-
coverage is much more fundamental than that. To illustrate this, we introduce a "pared-
down" version of decision-condition-coverage, which we call minimized-decision-condition-
coverage. Note that the decision-condition-coverage criterion has several redundant sub-
domains. Any test case satisfying (A and B) obviously satisfies both A and B. Similarly,
any test case satisfying either not A or not B satisfies not (A and B). So the subdomains
corresponding to A, B, and not (A and B) can be eliminated from consideration.
Redundant subdomains arising from a disjunctive clause can be eliminated in a similar
manner.
More precisely, for the minimized-decision-condition-coverage criterion, the multi-set
of subdomains SD mdcc (P; S) is obtained as follows: for each decision d of the form A
and B, include the subdomain D d=T consisting of those inputs that make d evaluate to
true, the subdomain DA=F consisting of those inputs that make A evaluate to false,
and the subdomain DB=F consisting of those inputs that make B evaluate to false;
for each decision d of the form A or B include the subdomain D d=F consisting of those
inputs that make d evaluate to false, the subdomain DA=T consisting of those inputs
that make A evaluate to true, and the subdomain DB=T consisting of those inputs that
make B evaluate to true; for all other decisions, include all of the corresponding decision-
condition-coverage subdomains. Note that for any program, the number of subdomains
arising from minimized-decision-condition coverage is less than or equal to the number
arising from multiple-condition-coverage.
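The Python sketch below constructs these families for the decision (x=1) and (y=1) of Figure 2 over an assumed 10 x 10 integer domain; the labels D1, D5-D9 follow the numbering used in Examples 4-6, and the domain itself is hypothetical. It also exhibits the covering used in the proof of Theorem 5, in which D9 is needed twice.

domain = [(x, y) for x in range(1, 11) for y in range(1, 11)]   # assumed 10 x 10 domain
A = lambda t: t[0] == 1                                         # condition (x=1)
B = lambda t: t[1] == 1                                         # condition (y=1)
D1 = {t for t in domain if A(t) and B(t)}                       # decision true (TT cell)
D5 = {t for t in domain if not A(t)}                            # not (x=1)
D6 = {t for t in domain if not B(t)}                            # not (y=1)
D7 = {t for t in domain if A(t) and not B(t)}                   # TF cell
D8 = {t for t in domain if not A(t) and B(t)}                   # FT cell
D9 = {t for t in domain if not A(t) and not B(t)}               # FF cell
mdcc = [D1, D5, D6]                 # minimized-decision-condition-coverage subdomains
mcc  = [D1, D7, D8, D9]             # multiple-condition-coverage subdomains
print(D5 == D8 | D9, D6 == D7 | D9) # True True: D9 is needed in both coverings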
Example 5:
Consider the program shown in Figure 2. For any specification S, the subdomains arising
from minimized-decision-condition-coverage are

SD_mdcc(P, S) = {D_1, D_5, D_6}. 2

Theorem 5 The relations among the condition-coverage family of criteria are as follows:
1. Decision-condition-coverage universally properly covers decision-coverage.
2. Multiple-condition-coverage universally properly covers decision-coverage.
3. Minimized-decision-condition-coverage universally properly covers decision-coverage.
4. Decision-condition-coverage universally properly covers minimized-decision-condition-
coverage.
5. Multiple-condition-coverage does not universally properly cover minimized-decision-
condition-coverage.
6. Multiple-condition-coverage does not universally properly cover decision-condition-
coverage.
7. Multiple-condition-coverage does not universally properly cover condition-coverage.
Proof:
We prove parts 3, 5 and 7. The rest of the proof is straight-forward and can be found
in [9]. To see part 3, let D ∈ SD_dc(P, S). By definition of decision-coverage, D is either
D_{d=T} or D_{d=F} for some decision d. Case 1: d is of the form A and B and D is D_{d=T}.
Then D ∈ SD_mdcc. Case 2: d is of the form A and B and D is D_{d=F}. Then D_{A=F} and
D_{B=F} ∈ SD_mdcc and D = D_{A=F} ∪ D_{B=F}. Case 3: d is of the form A or B and D is
D_{d=T}. Then D_{A=T} and D_{B=T} ∈ SD_mdcc and D = D_{A=T} ∪ D_{B=T}. Case 4: d is of the
form A or B and D is D_{d=F}. Then D ∈ SD_mdcc. If d is not a conjunct or disjunct of two
conditions then D ∈ SD_mdcc. In each of these cases, D is a union of minimized-decision-
condition subdomains arising from this decision d. Since each decision gives rise to its
own collection of minimized-decision-condition subdomains, the covering is proper.
To see part 5, consider again the program in Figure 2. Recall that SD_mcc(P, S) = {D_1, D_7, D_8, D_9}.
It is possible to express each minimized-decision-condition subdomain as a union of multiple-condition-coverage subdomains, as
follows: D_1 is itself a multiple-condition-coverage subdomain, while D_5 and D_6 are each the union of D_9 with one of D_7 and D_8.
Table 2: Subdomain sizes and numbers of failures for the program in Example 6 (columns: id, subdomain, d, m).
Notice that the subdomain D 9 only occurs once in SD mcc (P; S) but occurs twice in the
covering of SD mdcc (P; S), and that both of these occurrences are necessary in the sense
that any covering of SD mdcc (P; S) must have at least two occurrences of D 9 . There-
fore, multiple-condition-coverage does not properly cover minimized-decision-condition-
coverage for this program.
The proof of part 7 is almost identical. Since the same multiple-condition-coverage subdomain must be used
more than once in any covering of SD_cc(P, S), multiple-condition-coverage does not properly cover condition-coverage for this program. 2
We now use this result to exhibit a program for which each of the criteria discussed
in this section, except for decision-coverage, is better than multiple-condition-coverage
according to measures M and E.
Example 6:
Let P be the program in Examples 4 and 5, and let S be a specification that P fails to satisfy on exactly 45 inputs. The values for m and d for each subdomain are shown in Table 2. Note
that since all of the 45 failure-causing inputs lie in subdomains D 5 , D 6 , and D 9 , and no
failure-causing inputs lie in D 1 , D 7 , or D 8 , the minimized-decision-condition criterion has
a greater chance of selecting a failure-causing input than the multiple-condition-coverage
criterion, even though there are more subdomains (and hence more test cases) induced
by the multiple-condition-coverage criterion than by the minimized-decision-condition
criterion. In particular:

M(mdcc, P, S) = 1 − (1 − m_1/d_1)(1 − m_5/d_5)(1 − m_6/d_6) = 0.75,

whereas

M(mcc, P, S) = 1 − (1 − m_1/d_1)(1 − m_7/d_7)(1 − m_8/d_8)(1 − m_9/d_9) = 0.56.

Similarly,

E(mdcc, P, S) = m_1/d_1 + m_5/d_5 + m_6/d_6 = 1.00,

whereas

E(mcc, P, S) = m_1/d_1 + m_7/d_7 + m_8/d_8 + m_9/d_9 = 0.56.

criterion                               M      E
decision-coverage                       0.45   0.45
condition-coverage                      0.75   1.00
decision-condition-coverage             0.86   1.45
minimized decision-condition-coverage   0.75   1.00
multiple-condition-coverage             0.56   0.56

Table 3: Values of M and E for the program from Example 5.
The values of M and E for condition-coverage and decision-condition-coverage, shown
in
Table
3, illustrate that multiple-condition-coverage also performs worse than these
criteria for this program and specification, according to these measures. As noted above,
multiple-condition-coverage requires more test cases for this program than are required
by minimized-decision-condition-coverage, so this example also illustrates that bigger test
suites are not necessarily better. 2
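For readers who wish to reproduce the two bottom rows of Table 3, the following Python sketch uses an assumed instantiation (a 10 x 10 domain with all 45 failure-causing inputs in the cell where both conditions are false), chosen to be consistent with the values quoted above; the subdomain sizes are assumptions, not values copied from Table 2.

from math import prod
M = lambda sd: 1 - prod(1 - m / d for d, m in sd)
E = lambda sd: sum(m / d for d, m in sd)
mdcc = [(1, 0), (90, 45), (90, 45)]           # assumed (size, failures) for D1, D5, D6
mcc  = [(1, 0), (9, 0), (9, 0), (81, 45)]     # assumed (size, failures) for D1, D7, D8, D9
print(round(M(mdcc), 2), round(E(mdcc), 2))   # 0.75 1.0
print(round(M(mcc), 2), round(E(mcc), 2))     # 0.56 0.56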
Myers' motivation for introducing multiple-condition-coverage was that it is possible
to satisfy decision-condition-coverage without covering all of the edges in the flow graph
arising from assembly code. This concern may have arisen from an analogy with hardware
testing, where wires connecting gates are the analogs of branches in assembly code.
Even though this is not an issue for software testing, there is some intuitive motivation
for requiring that such edges be executed. The "low-level" flow graph has edges corresponding
to the evaluations of subexpressions in a condition. If the program has an
erroneous subexpression of the form A and B, the subdomain consisting of those inputs
that make the subexpression evaluate to true may concentrate the failure-causing inputs
more than the subdomains true corresponding to the individual con-
ditions. Similarly, if the program has an erroneous subexpression of the form A or B,
the subdomain consisting of those inputs that make the subexpression evaluate to false
may concentrate the failure-causing inputs more than the subdomains
false corresponding to the individual conditions. Consequently, selecting test cases
from the subexpression subdomains can increase the likelihood of detecting that type
of fault. Several criteria, including the multi-value expressions criterion [11, 21], that
capture this intuition and do properly cover decision-condition-coverage, but whose cost
is linear in the number of conditions, are discussed in [9].
6 Conclusion
We have compared the fault-detecting ability of several software testing criteria. This
comparison was done according to carefully defined notions of what it means for criterion
C 1 to be better at detecting faults than criterion C 2 . In particular, we investigated
whether test suites chosen by randomly selecting one element from each subdomain induced
by C 1 are at least as likely to include at least one failure-causing input as are test
suites chosen in the same manner to satisfy C 2 .
We had previously shown that if C 1 universally properly covers C 2 , then C 1 is better
than C 2 , in that sense. In this paper, we extended this result by using the expected
number of failures exposed as the basis for comparison. We showed that if C 1 properly
covers C 2 , the expected number of failures exposed by test suites selected using this same
strategy is guaranteed to be at least as large for C 1 as for C 2 . Note that if C 1 universally
properly covers C 2 , then C 1 is guaranteed to be at least as good as C 2 according to these
measures, for any program, regardless of what faults are in the program.
We showed that the all-p-uses, all-uses, required-k-tuples+, context-coverage, ordered-
context-coverage, minimized-decision-condition-coverage, decision-condition-coverage, and
multiple-condition-coverage, all universally properly cover decision-coverage. However,
limited mutation testing subsumes but does not universally properly cover decision-
coverage. Furthermore, multiple-condition-coverage subsumes but does not universally
properly cover decision-condition-coverage; context-coverage subsumes but does not universally
properly cover all-uses or all-p-uses; and ordered-context-coverage subsumes but
does not universally properly cover all-uses or all-p-uses. These relations are summarized
in Figure 4. In each case for which C_1 fails to universally properly cover C_2, we exhibited
a simple program and specification for which C_2 is better than C_1 according to the measures M and E.
Since multiple-condition-coverage, context-coverage, and ordered-context-coverage each
potentially require a number of test cases which is exponential in some aspect of the
program size, this calls into question their usefulness. We emphasize that the results
Figure 4: Summary of relations between criteria. A solid arrow from C_1 to C_2 indicates
that C_1 universally properly covers C_2; a dotted arrow from C_1 to C_2 indicates that C_1
subsumes but does not universally properly cover C_2. Any relation that is not explicitly
shown in the figure, and that does not follow from transitivity along with the fact that
universally properly covers implies subsumption, does not hold.
presented here show the existence of programs and specifications for which C 2 is better
at detecting faults than C 1 in situations when C 1 subsumes but does not universally
properly cover C_2. They are based on the fact that when C_1 does not properly
cover C_2 for program P, it is often possible to find a specification S (or equivalently,
to find a distribution of failure-causing inputs) for which M(C_2, P, S) > M(C_1, P, S)
and E(C_2, P, S) > E(C_1, P, S). However, even for such a program P, there are other
specifications S′ (or equivalently, other distributions of failure-causing inputs) for which
M(C_1, P, S′) ≥ M(C_2, P, S′) and E(C_1, P, S′) ≥ E(C_2, P, S′). Furthermore, even when C_1
does not universally properly cover C_2, it may be the case that C_1 properly covers C_2
for the particular program that is being tested, and thus C 1 is guaranteed to be at least
as likely as C 2 to detect a fault in that program, but not necessarily in others.
There are several implications of these results for the testing practitioner. If C 1
universally properly covers C 2 , the practitioner is guaranteed to be more likely to detect
a fault using C 1 than using C 2 , provided that tests are selected using the strategy described
earlier. On the other hand, if C 1 does not properly cover C 2 for the program under test,
the practitioner should be warned that even if C_1 requires many more test cases than
C_2, C_1 may be less likely to detect a fault. Thus, criteria which are relatively low in
the hierarchy of criteria induced by the properly covers relation, may be of questionable
value. On the other hand, it certainly could happen that such criteria are indeed good
at detecting faults in "typical" programs. This paper does not address that question;
in fact, in the absence of a precise notion of what constitutes a "typical" program, such
questions can only be answered through the accumulation of anecdotal evidence.
We also note that it seems reasonable to conjecture that using test selection strategies
that closely approximate the one described here will yield similar results in practice.
However, more analytical and/or empirical research will be needed to bear this out for
particular approximation strategies. One might argue that this approach is "making the
real world fit the model", but in fact doing so may be justified. In software engineering,
unlike the physical sciences, we can exercise some significant control over the real world
when there is some benefit from doing so. For example, it may be reasonable to devise
test data selection strategies that closely resemble the ones used in our model. The payoff
for this would be that the tester has precise knowledge about the relative fault-detecting
ability of criteria, without prior knowledge of the nature of the faults in a program.
Our investigation also sheds some light on how to develop new criteria that are provably
better than an existing criterion C. One possibility is to take a criterion C′ that
universally covers but does not universally properly cover C, find those subdomains of
C′ that are used k > 1 times in the covering of C, and include k duplicates of each such
subdomain, or equivalently, choose k test cases from each such subdomain.
Finally, we note again that our results are based on a test suite selection procedure
that is an idealization of the way test suites are selected in a typical testing environment.
It is assumed that test cases are selected only after the domain has been divided into
subdomains. In practice, these testing criteria are generally used as the basis for evaluating
the adequacy of test suites, not for selecting test cases. This suggests two open
problems: develop practical test selection algorithms that approximate the strategy used
here, and conduct theoretical studies similar to this one based on measures that more
closely reflect existing test selection strategies.
Acknowledgments: The authors would like to thank the anonymous referees for carefully
reading the manuscript and making several useful suggestions. Some of the results
presented in this paper appeared in the Proceedings of the IEEE 15th International Conference
on Software Engineering, May 1993.
--R
Mutation analysis: Ideas
A formal evaluation of data flow path selection criteria.
Hints on test data selection: Help for the practicing programmer.
An evaluation of random testing.
Data flow analysis in software reliability.
Test selection for analytical comparability of fault-detecting ability
An applicable family of data flow testing criteria.
Assessing the fault-detecting ability of testing methods
Analytical comparison of several testing strategies.
A formal analysis of the fault detecting ability of testing methods.
A mathematical framework for the investigation of testing.
Theoretical comparison of testing methods.
Partition testing does not inspire confidence.
A data flow oriented program testing strategy.
The Art of Software Testing.
On required element testing.
Data flow analyisis techniques for program test data selection.
Selecting software test data using data flow information.
A structural test selection criterion.
Comparison of structural test coverage metrics.
Comparing test data adequacy criteria.
Analyzing partition testing strategies.
Comparison of program testing strate- gies
--TR
Selecting software test data using data flow information
A structural test selection criterion
An Applicable Family of Data Flow Testing Criteria
Comparing test data adequacy criteria
Theoretical comparison of testing methods
A Formal Evaluation of Data Flow Path Selection Criteria
Partition Testing Does Not Inspire Confidence (Program Testing)
Analyzing Partition Testing Strategies
Comparison of program testing strategies
Assessing the fault-detecting ability of testing methods
Experimental results from an automatic test case generator
Data Flow Analysis in Software Reliability
Data Abstraction, Implementation, Specification, and Testing
Art of Software Testing
A Formal Analysis of the Fault-Detecting Ability of Testing Methods
Data flow analysis techniques for test data selection
--CTR
Bingchiang Jeng , Elaine J. Weyuker, A simplified domain-testing strategy, ACM Transactions on Software Engineering and Methodology (TOSEM), v.3 n.3, p.254-270, July 1994
Alberto Avritzer , Elaine J. Weyuker, The Automatic Generation of Load Test Suites and the Assessment of the Resulting Software, IEEE Transactions on Software Engineering, v.21 n.9, p.705-716, September 1995
Alberto Avritzer , Elaine J. Weyuker, Generating test suites for software load testing, Proceedings of the 1994 ACM SIGSOFT international symposium on Software testing and analysis, p.44-57, August 17-19, 1994, Seattle, Washington, United States
Istvn Forgcs, An exact array reference analysis for data flow testing, Proceedings of the 18th international conference on Software engineering, p.565-574, March 25-29, 1996, Berlin, Germany
Phyllis G. Frankl , Oleg Iakounenko, Further empirical studies of test effectiveness, ACM SIGSOFT Software Engineering Notes, v.23 n.6, p.153-162, Nov. 1998
Allen S. Parrish , Stuart H. Zweben, On the Relationships Among the All-Uses, All-DU-Paths, and All-Edges Testing Criteria, IEEE Transactions on Software Engineering, v.21 n.12, p.1006-1009, December 1995
Finding failures by cluster analysis of execution profiles, Proceedings of the 23rd International Conference on Software Engineering, p.339-348, May 12-19, 2001, Toronto, Ontario, Canada
Giovanni Denaro , Sandro Morasca , Mauro Pezz, Deriving models of software fault-proneness, Proceedings of the 14th international conference on Software engineering and knowledge engineering, July 15-19, 2002, Ischia, Italy
Tsong Yueh Chen , Yuen Tak Yu, On the Expected Number of Failures Detected by Subdomain Testing and Random Testing, IEEE Transactions on Software Engineering, v.22 n.2, p.109-119, February 1996
Sandro Morasca , Stefano Serra-Capizzano, On the analytical comparison of testing techniques, ACM SIGSOFT Software Engineering Notes, v.29 n.4, July 2004
R. M. Hierons, Comparing test sets and criteria in the presence of test hypotheses and fault domains, ACM Transactions on Software Engineering and Methodology (TOSEM), v.11 n.4, p.427-448, October 2002
Andy Podgurski , Wassim Masri , Yolanda McCleese , Francis G. Wolff , Charles Yang, Estimation of software reliability by stratified sampling, ACM Transactions on Software Engineering and Methodology (TOSEM), v.8 n.3, p.263-283, July 1999
D. F. Yates , N. Malevris, An objective comparison of the cost effectiveness of three testing methods, Information and Software Technology, v.49 n.9-10, p.1045-1060, September, 2007
Walter J. Gutjahr, Partition Testing vs. Random Testing: The Influence of Uncertainty, IEEE Transactions on Software Engineering, v.25 n.5, p.661-674, September 1999
Hong Zhu, A Formal Analysis of the Subsume Relation Between Software Test Adequacy Criteria, IEEE Transactions on Software Engineering, v.22 n.4, p.248-255, April 1996
Phyllis Frankl , Dick Hamlet , Bev Littlewood , Lorenzo Strigini, Choosing a testing method to deliver reliability, Proceedings of the 19th international conference on Software engineering, p.68-78, May 17-23, 1997, Boston, Massachusetts, United States
Elaine J. Weyuker, Using operational distributions to judge testing progress, Proceedings of the ACM symposium on Applied computing, March 09-12, 2003, Melbourne, Florida
Phyllis G. Frankl , Richard G. Hamlet , Bev Littlewood , Lorenzo Strigini, Evaluating Testing Methods by Delivered Reliability, IEEE Transactions on Software Engineering, v.24 n.8, p.586-601, August 1998
Phyllis G. Frankl , Yuetang Deng, Comparison of delivered reliability of branch, data flow and operational testing: A case study, ACM SIGSOFT Software Engineering Notes, v.25 n.5, p.124-134, Sept. 2000
Yuen Tak Yu , Man Fai Lau, A comparison of MC/DC, MUMCUT and several other coverage criteria for logical decisions, Journal of Systems and Software, v.79 n.5, p.577-590, May 2006
Richard A. DeMillo , Aditya P. Mathur , W. Eric Wong, Some Critical Remarks on a Hierarchy of Fault-Detecting Abilities of Test Methods, IEEE Transactions on Software Engineering, v.21 n.10, p.858-861, October 1995
Mary Jean Harrold, Analysis and Testing of Programs with Exception Handling Constructs, IEEE Transactions on Software Engineering, v.26 n.9, p.849-871, September 2000
Alessandro Orso , Saurabh Sinha , Mary Jean Harrold, Classifying data dependences in the presence of pointers for program comprehension, testing, and debugging, ACM Transactions on Software Engineering and Methodology (TOSEM), v.13 n.2, p.199-239, April 2004
Antonia Bertolino, Software Testing Research: Achievements, Challenges, Dreams, 2007 Future of Software Engineering, p.85-103, May 23-25, 2007
Hong Zhu , Patrick A. V. Hall , John H. R. May, Software unit test coverage and adequacy, ACM Computing Surveys (CSUR), v.29 n.4, p.366-427, Dec. 1997 | fault-detecting ability;program testing;condition-coverage techniques;branch testing;software testing;software test data adequacy;programming theory;data flow testing;program debugging;independent random selection;probabilistic measure;test suite;mutation testing |
631115 | Using Term Rewriting to Verify Software. | This paper describes a uniform approach to the automation of verification tasks associated with while statements, representation functions for abstract data types, generic program units, and abstract base classes. Program units are annotated with equations containing symbols defined by algebraic axioms. An operation's axioms are developed by using strategies that guarantee crucial properties such as convergence and sufficient completeness. Sets of axioms are developed by stepwise extensions that preserve these properties. Verifications are performed with the aid of a program that incorporates term rewriting, structural induction, and heuristics based on ideas used in the Boyer-Moore prover. The program provides valuable mechanical assistance: managing inductive arguments and providing hints for necessary lemmas, without which formal proofs would be impossible. The successes and limitations of our approaches are illustrated with examples from each domain. | Introduction
Many different methods have been used to annotate software and prove properties about it. Fewer
attempts have been made to adapt a single notation to a variety of different annotation tasks and
explore the interactions between the types of tasks, properties of the specifications, and demands of
verification techniques. In this paper, we apply equational specification and reasoning techniques
to verify properties of while statements, abstract data types, generic program units, and derived
classes. We present new techniques for computing the weakest preconditions of while statements,
annotating abstract base classes from which other classes are derived, and designing algebraic specifications
which are convergent and sufficiently complete. In addition, we discuss an experimental
tool for partially automating verification activities and report some of our experiences using these
techniques and tool.
This research has been supported in part by the National Science Foundation grant CCR-8908565 and the Office
of Naval Research grants N00014-87-K-0307 and N0014-90-J4091.
Rewriting [10, 26] is central to our approach. We use rewriting concepts both for designing small
specifications with desirable properties, such as completeness and consistency, and for extending
specifications incrementally while preserving these properties. We also use rewriting concepts for
proving that programs are correct with respect to their specifications.
Abstraction and factorization reduce the amount of detail that needs to be considered when
solving problems. Specifications play a key role in abstraction, hiding details of implementations,
and in factoring program components, collecting program units with common semantics rather
than just common syntax.
The weakest precondition [11] of a while statement abstracts the particular state transformation
induced by a while statement to a class of state transformations which satisfy a particular post-condition
[17]. The notion of weakest precondition extends the method proposed in [20] to include
termination. Since both termination and verification of programs are unsolvable, it is somewhat
surprising that one can compute a first order expression of the weakest precondition of a while statement
[8, 32]. This expression, however, involves concepts such as Gödelization or Turing machines,
which cannot be reasoned about automatically. We introduce a notation called power functions
to describe while statements' state transformations. Power functions are described with algebraic
equations, as are the operations which appear in while statements' postconditions. Thus, we may
reason about power functions in the same automated way we reason about the operations appearing
in program annotations. Power functions also allow us to address incompleteness problems arising
in the verification of while statements involving abstract data types [24, 30].
Abstract data types permit program proofs to be factored into two parts: proofs of programs
which depend only on abstract properties of objects, and proofs that implementations of types
guarantee the abstract properties. In the second type of proof, implementations manipulating concrete
objects must satisfy pre and postconditions containing abstract objects. Hoare [21] introduced
representation mappings to map concrete objects to their corresponding abstract objects to make
such reasoning possible. We show that representation mappings can be cast within the equational
framework, allowing us to reap the benefits of equational reasoning and automated term rewriting.
Parameterized subprograms factor similar operations on different objects with the same types,
thus reducing the sizes of programs. Generic clauses, like those in Ada, extend the benefits of
factoring to program units which manipulate objects with different types. Generic formal type parameters
represent classes of types providing a few basic operations (e.g., assignment or equality).
Additional generic formal subprogram parameters can be specified to access additional operations.
When a generic unit is instantiated, only syntactic discrepancies are reported between the types
of the formal and actual generic subprogram parameters. Alphard's designers [31] were among the
first to suggest that functions defined with generic type parameters have semantic restrictions that
guarantee the functions are properly instantiated. Several research projects are currently investigating
how such restrictions should be stated and checked [13, 14, 16]. We use equational reasoning
to show that formal parameter specifications denoting properties required of actual parameters are
checked when generic program components are instantiated.
Object-oriented programming languages permit new classes to be defined via inheritance. A
superclass defines interfaces (and perhaps implementations) for operations, which are inherited by
its subclasses. In order to factor the implementation of a common operation in a superclass, each
subclass that redefines operations used in implementations of common operations must ensure that
its new operations have behaviors which are consistent with those of the respective superclass's
operations. That is, each subclass must behave like a subtype of the supertype. Groups of researchers
[1, 28, 29] are currently defining subtype relations. We present a method for annotating
abstract base classes and other classes which are derived from them. Using equational reason-
ing, we show that derived classes are subtypes of an abstract base class in a manner similar to that
in [19].
In Section 2, we discuss these four classes of verification problems. Although we limit the
size of our examples to dimensions suitable for a technical presentation, they are representative of
increasingly larger programming problems.
The common denominator for these verification tasks is that we use equations both to annotate
each of the program components and to reason about the annotations. In Section 3, we address
the problems of both the quality and the expressiveness of specifications on which annotations
are based. We motivate the need of structuring specifications as rewrite systems both to ensure
crucial properties of specifications and to overcome inherent difficulties of equational reasoning. We
present design strategies for extending a specification while preserving its properties as a rewrite
system.
In Section 4, we briefly describe an automated tool for formally proving the obligations arising
from verification problems. In Section 5, we discuss the use of this tool and informally compare its
performance with another automated prover. Section 6 contains our conclusions.
2 Annotation and Verification
2.1 While Statements
Power functions [2] are a device to express the weakest precondition of a while statement in a form
which is useful for stating and verifying program correctness. We briefly review this technique and
show its application in two examples. In a later section we show how to automate the steps of the
process.
For any statement w and postcondition R, the weakest precondition wp(w; R) of w with respect
to R describes the set of all states S such that when w is activated in a state s in S it terminates
in a state r satisfying R [11]. If w is a while statement with condition b and body stmt, then
wp(w, R) is defined recursively as follows:

    wp(w, R) = (∃ k ≥ 0 : H_k(R)), where H_0(R) = ¬b ∧ R and H_{k+1}(R) = H_0(R) ∨ (b ∧ wp(stmt, H_k(R))).
If the while statement is not defined on some state s, then wp(w; R)(s) is false, since H k (R)(s) does
not hold for any k.
The power function of a function f , whose domain and range are identical, embodies the k-fold
composition of f . If [stmt] is the function computed by statement stmt and s is a state, the power
function pf of [stmt] is defined as

    pf(k, s) = s, if k = 0;
    pf(k, s) = [stmt](pf(k − 1, s)), if k > 0 and pf(k − 1, s) is defined;
    pf(k, s) = undefined, otherwise.        (1)
Every function has a unique power function; and the totality, computability, and primitive recursiveness
of a function imply similar properties for its power function [2].
Using the notion of power function, we can obtain a first order expression of the weakest
precondition of a while statement with respect to any first order postcondition. If [stmt] is a total
function and pf is its power function, then

    wp(w, R)(s) = (∃ k ≥ 0 : ¬b(pf(k, s))) ∧ R(pf(μi(¬b(pf(i, s))), s))

The expression μi P(i) stands for the minimum non-negative integer i, if it exists, such that P(i)
holds. More precisely, the second conjunct of the right side is a shorthand for ¬b(pf(k, s)) ∧ R(pf(k, s)),
where k is the least value such that k applications of [stmt] to the original
state produce a state in which b evaluates to false. This yields the following equation

    wp(w, R)(s) = ¬b(pf(k, s)) ∧ R(pf(k, s)), where k = μi(¬b(pf(i, s)))        (2)

The right side requires only pf, the power function of [stmt], which is immediately obtained via
equation (1).
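As a concrete illustration (not part of the formal development), the following Python sketch models a
loop body as a function on states and evaluates wp(w, R)(s) by brute force, directly mirroring the two
equations above; the names pf and wp_while, the iteration bound, and the toy loop are our own choices.

    def pf(stmt, k, s):
        # k-fold composition of the loop body [stmt]: the power function of equation (1)
        for _ in range(k):
            s = stmt(s)
        return s

    def wp_while(b, stmt, R, s, bound=10_000):
        # brute-force test of wp(w, R)(s): R must hold in the first state where b is false
        for k in range(bound):
            t = pf(stmt, k, s)
            if not b(t):          # k is the least value with not b(pf(k, s))
                return R(t)
        return False              # no such k within the bound: treated as non-termination

    # A toy loop: while x > 0 do x := x - 2, with postcondition x = 0.
    stmt = lambda x: x - 2
    b = lambda x: x > 0
    R = lambda x: x == 0
    print(wp_while(b, stmt, R, 6))   # True: starting from 6 the loop stops at 0
    print(wp_while(b, stmt, R, 7))   # False: starting from 7 the loop stops at -1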
Often, we find it convenient to express a power function in terms of other functions that capture
higher level abstractions. We show one such example below, where very loosely speaking we say
that the power function of a maximum accumulator is the maximum of a sequence. In this case
we must ensure the validity of our claim, i.e., we must prove that some function pf is the power
function of a given function f . We call this step validation of pf with respect to f .
The weakest precondition of a while statement is more manageable when in equation (2) the
conjunct can be solved with respect to k, i.e., the value of k can be explicitly
determined from s. We call this step minimization of the loop. Loop minimization is obviously an
unsolvable problem since it is more difficult to demonstrate than loop termination. In the following
examples we show how to minimize loops and how this operation considerably simplifies the weakest
precondition.
Example 1
Consider the following program with while statement w and postcondition R:

    i := 2; m := a[1];
    w: while i ≤ n do m := max(m, a[i]); i := i + 1 od
    {R: m = max(a[1..n])}

where max(a[1..n]) is the largest value in the set {a[1], ..., a[n]}.
The function computed by the while statement body ([stmt]) returns the program state after
examining one component of the array:

    [stmt](⟨i, m⟩) = ⟨i + 1, max(m, a[i])⟩

Its power function (pf) returns the program state after examining a slice of the array:

    pf(k, ⟨i, m⟩) = ⟨i + k, max(a[i..i + k − 1], m)⟩

where max(a[i..j], m) is the largest value in the set {a[i], ..., a[j], m}. We use induction to validate
pf , i.e., to show that it is indeed the power function of [stmt].
The base and inductive cases appear among the verification tasks listed below.
The minimization of the loop requires us to demonstrate that μk(i + k > n) is n − i + 1. Substituting
this expression for k, we can calculate wp(w, R).
Although the annotations appear to use familiar, "well-defined" functions such as addition and
max, we have actually overloaded the function symbol max. Assuming all the scalar values are
natural numbers, one version of max is defined on two naturals, another on an array of naturals,
and a third on an array of naturals and a natural. Algebraic axioms permit us to define the relations
between symbols that appear in specifications.
max : nat × nat → nat
max : natarray × nat × nat → nat
max : natarray × nat × nat × nat → nat
With these definitions, wp(w; R) can be expressed as:
The initializing statements i := 2; m := a[1] transform the above equation into
which can be verified for any a and n > 0.
Each of the verification tasks outlined above can be expressed as equations and verified me-
chanically. These tasks are:
1. Power function validation (base and inductive cases, stated with an auxiliary operation add for
addition on the naturals)
2. Loop minimization
3. Loop initialization
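These tasks can also be spot-checked by simulation before attempting the formal proofs. The Python
sketch below is ours: the array a, the state layout ⟨i, m⟩, and the helper names are assumptions for
illustration only. It validates the claimed power function against k-fold application of the loop body and
then evaluates the minimized weakest precondition after the initialization i := 2; m := a[1].

    a = [7, 3, 9, 4, 9]            # a[1..n] in the paper's 1-indexed notation
    n = len(a)

    def body(state):               # [stmt]: examine a[i], then advance i
        i, m = state
        return i + 1, max(m, a[i - 1])

    def pf(k, state):              # claimed power function: examine the slice a[i..i+k-1]
        i, m = state
        return i + k, max([m] + a[i - 1:i - 1 + k])

    # Power function validation: pf(k, s) equals k-fold application of the body.
    for i in range(1, n + 1):
        for k in range(0, n - i + 1):
            s = (i, 0)
            t = s
            for _ in range(k):
                t = body(t)
            assert pf(k, s) == t

    # Loop minimization: the least k with i + k > n is n - i + 1, so after the
    # initialization i := 2; m := a[1] the postcondition m = max(a[1..n]) holds.
    i, m = 2, a[0]
    k = n - i + 1
    assert pf(k, (i, m))[1] == max(a)
    print("Example 1 checks passed")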
Example 2
The previous example shows that one may need to define new symbols for the analysis of a loop.
This is not a peculiarity of our method. Classic approaches to correctness verification may fail due
to the lack of expressiveness of data type specifications [24, 30]. For example, Kamin shows that a
program containing the following while statement

    while ¬isnewstack(s) do t := push(t, top(s)); s := pop(s) od

cannot be properly annotated because of the lack of expressiveness of the usual theory of the type Stack. A
similar result appears in [30].
Equation (2) implies that all one needs to properly annotate a loop is the power function of (the
functional abstraction of) the loop body. Rather than using equation (1), we chose to formulate the
power function of the loop in terms of high-level abstractions. These abstractions capture formally
the intuitive concepts that allow a programmer to code the above program.
The repeated execution of the loop body has the effect of chopping off a topmost portion of s,
reversing it, and placing it on top of t. The concept of separating a sequence into an initial portion
and a remainder generalizes the usual head and tail operations on sequences. We associate the
symbols drop and take with the more general operations and axiomatize them below.
drop : nat × stack → stack
drop(0, s) = s
drop(i + 1, newstack) = newstack
drop(i + 1, push(s, e)) = drop(i, s)

take : nat × stack → stack
take(0, s) = newstack
take(i + 1, newstack) = newstack
take(i + 1, push(s, e)) = push(take(i, s), e)
The operation drop is denoted pop in [24] and is required to make the type stack expressive. drop is
the power function of pop. The operation take returns the portion of a stack dropped by drop. We
formulate the power function of the loop (pf ) from the functional abstraction of the body ([stmt]),
exactly as informally stated earlier.
pf(k, (s, t)) = (drop(k, s), concat(reverse(take(k, s)), t))
where concat and reverse are defined as usual.
concat : stack × stack → stack
concat(newstack, t) = t
concat(push(s, e), t) = push(concat(s, t), e)

reverse : stack → stack
reverse(newstack) = newstack
reverse(push(s, e)) = concat(reverse(s), push(newstack, e))
To validate pf we prove that
The last equation is not an instance of the second case of equation (1). It stems from a simple
result [2, Th. 5.7] concerning the equivalence of two formulations of power functions, i.e., accumulation
vs. recursion.
To minimize the loop we define the operation size, which computes the size of a stack, axiomatize
isnewstack, and verify -k(isnewstack(take(k;
The minimization of the loop is obtained by proving that
The latter is equivalent to isnewstack(drop(size(s); s)). Substituting permits us to
calculate wp(w; R).
R(pf
Initializing t with newstack and s with s 0 results in the following wp for the program:
which holds for any s 0 .
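The stack example can likewise be simulated. In the Python sketch below (ours), stacks are modeled as
lists with the top element at index 0, the operations follow the axioms above, and the claimed power
function is compared with repeated application of the loop body; the final check corresponds to the
minimized weakest precondition.

    newstack = []
    push = lambda s, e: [e] + s
    drop = lambda k, s: s[k:]                 # the power function of pop
    take = lambda k, s: s[:k]                 # the portion removed by drop
    concat = lambda s, t: s + t
    reverse = lambda s: s[::-1]

    def body(state):                          # [stmt]: move the top of s onto t
        s, t = state
        return s[1:], push(t, s[0])

    def pf(k, state):                         # claimed power function of the body
        s, t = state
        return drop(k, s), concat(reverse(take(k, s)), t)

    s0 = [1, 2, 3, 4]
    # Validation: pf(k, (s, t)) equals k-fold application of the body.
    for k in range(len(s0) + 1):
        st = (s0, newstack)
        for _ in range(k):
            st = body(st)
        assert pf(k, (s0, newstack)) == st
    # Minimization and wp: the loop stops after size(s) steps and leaves reverse(s) in t.
    k = len(s0)                               # the least k with isnewstack(drop(k, s))
    assert pf(k, (s0, newstack)) == (newstack, reverse(s0))
    print("Example 2 checks passed")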
2.2 Data Type Implementations
Modern programming languages provide special constructs to implement user-defined data types.
These constructs are specifically designed to hide the representation of a type from its users. Code-level
verification techniques, such as those discussed in Examples 1 and 2 are insufficient to address
the correctness of an implementation because of the wide gap between the low-level operations
performed by the code and the high-level operations described by the operation's interface. For
example, decrementing an integer variable may be all it takes to pop a stack. However, verifying that
the variable is decremented does not ensure that the code correctly implements the pop operation.
We need to show that the code fulfills its obligations to the abstract operations [21].
Example 3
An implementation of the data type stack may represent an instance of the type by a record as
follows:
subtype index is integer range 1..size;
type data is array (index) of item;
type stack is record
   pntr : integer range 0..size := 0;
   items : data;
end record;
This code fragment belongs to a package with generic arguments size, a positive, and item, a
private type.
The correctness of an implementation of the type stack entails the type's representation mapping
[21]. This function, denoted with A below, maps a concrete instance of stack, represented by
the above record, to its abstract counterpart.
A(q) = newstack, if q.pntr = 0;
A(q) = push(A(q'), q.items(q.pntr)), otherwise, where q' equals q except that q'.pntr = q.pntr - 1.
The implementation of the stack operation pop is straightforward.
procedure pop(q : in out stack) is
begin
   if q.pntr = 0 then raise underflow;
   else q.pntr := q.pntr - 1;
   end if;
end pop;
On input a stack q, pop raises an exception if q.pntr = 0, that is, q.pntr > 0 is a precondition for the
normal termination of the procedure. If the precondition is satisfied, pop simply decrements q.pntr.
The implementation is correct if clients of the stack package, from which the stack representation and
the procedure code may be hidden, indeed perceive decrementing q.pntr as popping q.
The proof method proposed by Hoare reduces the correctness of the implementation to individual
obligations of each procedure of the package. Omitting for readability the qualification of pntr
and items, the obligation of the procedure pop is
Standard techniques [20] reduce the correctness of the code to the truth of
where "false" in the first disjunct describes the (impossible) initial state that would result in the
normal termination of the procedure pop when the exception underflow is raised. The representation
mapping can be defined equationally and the proof obligation can be discharged automatically using
the tool discussed in Section 4.
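The flavor of such an obligation can be illustrated with a small simulation. The Python sketch below
is ours; the fixed size, the random data, and the function names are assumptions. It encodes the
concrete representation as a pair (pntr, items), the representation mapping A, and the concrete and
abstract pop operations, and checks that pop commutes with A whenever the precondition pntr > 0
holds.

    import random

    SIZE = 8

    def A(q):
        # representation mapping: the abstract stack holds items(1..pntr), last pushed on top
        pntr, items = q
        s = []                          # abstract stacks as lists with the top at index 0
        for x in items[:pntr]:
            s = [x] + s                 # push
        return s

    def pop_concrete(q):
        pntr, items = q
        if pntr == 0:
            raise IndexError("underflow")
        return pntr - 1, items          # decrementing pntr is all it takes

    def pop_abstract(s):
        return s[1:]

    # Hoare-style obligation, checked by simulation: whenever pntr > 0,
    # A(pop_concrete(q)) = pop_abstract(A(q)).
    for _ in range(100):
        pntr = random.randint(1, SIZE)
        items = [random.randint(0, 9) for _ in range(SIZE)]
        q = (pntr, items)
        assert A(pop_concrete(q)) == pop_abstract(A(q))
    print("pop commutes with the representation mapping")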
2.3 Instantiations of Generic Program Units
Modularity is an essential feature for the design and implementation of large programs. Generic
type and subprogram parameters have been added to statically typed programming languages to
avoid duplicating an operation's source code in cases where it manipulates objects only through
other operations that are either implicitly defined for its generic formal type parameters or appear as
generic formal subprogram parameters. Interconnection errors become more likely and more subtle
when such language features are used. Compilers and/or loaders verify only syntactic properties
of module interconnections. The verification of (semantic) correctness entails activities similar
to those required for the verification of loops and data types discussed earlier, i.e., axiomatizing
symbols used for asserting properties or requirements of modules, and proving theorems, expressed
by means of these symbols, about the modules.
Example 4
Many computations on sets or sequences of elements are instances of a general paradigm referred
to as accumulation [4], for example, finding the maximum element, computing the sum of the
elements, or counting how many elements have a certain property. These computations can be
implemented by a loop whose body processes a new element of the sequence on each iteration. A
special variable, whose initial value depends on the computation being performed, "accumulates"
the result of the computation for the portion of the sequence processed thus far.
Example 1 presented earlier is an instance of accumulation in which the sequence of elements
is represented by an array and the process being performed is finding the maximum. In a language
supporting generic parameters, the interface of a simple accumulator (in which the types of the
elements and the accumulated result are the same) appears as follows:
generic
   type elem is private;
   type vector is array(integer range <>) of elem;
   init : elem;
   with function step(a, b : elem) return elem;
function accumulator(v : vector) return elem is
   a : elem := init;
begin
   for i in v'first .. v'last loop
      a := step(a, v[i]);
   end loop;
   return a;
end accumulator;
When a generic subroutine is instantiated (e.g., with actual parameters natural, nat_vect, 0, and
max, as shown below) discrepancies may be detected between the types of the generic subprogram
parameters and the types of the actual objects bound to them.
procedure main is
   function max_array is new
      accumulator(elem => natural, vector => nat_vect, init => 0, step => max);
Unfortunately, only syntactic discrepancies are reported. Some implementations of an accumulator
may rely on semantic properties which do not hold for all bindings, but cannot be detected by the
compiler.
For example, certain accumulations can be performed in parallel. In the simplest form, a
parallel implementation of an accumulator may simultaneously activate two tasks. Each task is an
accumulator operating on half of the input array and feeding its results to the function step which
returns the desired value. To improve the implementation's efficiency, we can use a tree-like cascade
of tasks each executing a single invocation of step in parallel. However, the parallel implementation
of the accumulator assumes that the function bound to the generic parameter step is associative and
that the element bound to the parameter init is its left identity i.e., (elem; step; init) is a monoid.
This result can be established in the following manner. Let e_1, ..., e_n be the sequence of values
processed by the accumulator, and A the function defined by

    A(i, j) = init, if j < i;    A(i, j) = step(A(i, j − 1), e_j), otherwise.

With the techniques described in Section 2.1 we can prove that A is the function computed by the
code of accumulator. If for all k such that i ≤ k ≤ j, the following equation holds

    A(i, j) = step(A(i, k), A(k + 1, j))        (3)

we can implement our accumulator in parallel as described above. It is easy to show that equation
(3) holds when elem is a monoid.
Algebraic notation can be used to specify properties of generic subroutine parameters that can
be verified from the specifications of the actual parameters. Such restrictions can be made explicit
by writing them as conditions and including them with the text of the specification of accumulator.
step(step(x, y), z) = step(x, step(y, z))
step(init, x) = x
When the function accumulator is instantiated with actual arguments replacing the formal parame-
ters, the identifiers in the axioms of the actuals can be replaced by the names of the formals and the
specification of the actual arguments can be used to prove these conditions. For example, it is easy
to verify these conditions for the operation max′, the maximum of two natural numbers, specified
in Example 1. Likewise, the instantiation requirement holds for both addition and multiplication,
but not for exponentiation. Thus, exponentiation cannot be legally bound to the generic parameter
step.
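To illustrate the requirement, the Python sketch below (ours) implements a sequential and a
divide-and-conquer accumulator and checks the two instantiation conditions on sample values; max and
addition pass, while exponentiation fails, matching the discussion above. The sample range and the
identity element tried for exponentiation are assumptions made only for the check.

    def accumulator(v, init, step):
        # sequential accumulation, as in the generic subprogram
        a = init
        for x in v:
            a = step(a, x)
        return a

    def parallel_accumulator(v, init, step):
        # divide-and-conquer accumulation; sound only if (elem, step, init) is a monoid
        if len(v) == 0:
            return init
        if len(v) == 1:
            return step(init, v[0])
        mid = len(v) // 2
        return step(parallel_accumulator(v[:mid], init, step),
                    parallel_accumulator(v[mid:], init, step))

    def instantiation_conditions(step, init, samples):
        # the two requirements: associativity of step and init as its left identity
        assoc = all(step(step(x, y), z) == step(x, step(y, z))
                    for x in samples for y in samples for z in samples)
        ident = all(step(init, x) == x for x in samples)
        return assoc and ident

    nats = range(5)
    print(instantiation_conditions(max, 0, nats))                  # True: legal binding
    print(instantiation_conditions(lambda x, y: x + y, 0, nats))   # True
    print(instantiation_conditions(lambda x, y: x ** y, 1, nats))  # False: rejected
    v = [3, 1, 4, 1, 5]
    assert accumulator(v, 0, max) == parallel_accumulator(v, 0, max)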
2.4 Inheritance
Object-oriented programming languages permit the definition of new classes via inheritance. A
subclass inherits data representations and operations from a superclass and may add or redefine
these components. We use algebraic equations to specify both the behavior of classes and to verify
that a subclass relation is also a subtype relation.
Example 5
In the following example, Shape is an abstract class; it can serve as a superclass for another class
but no objects of type Shape may be created.
class Shape {
public:
  virtual Point center () const {
    return Point ((left() + right()) / 2, (bottom() + top()) / 2); };
  virtual void move (const Point & P) = 0;
  void recenter (const Point& p) { move (p - center()); };
  virtual double top () const = 0;
  virtual double bottom () const = 0;
  virtual double left () const = 0;
  virtual double right () const = 0;
};
An abstract class is used to define interfaces for operations which manipulate objects created by
its subclasses. For example, recenter moves an object to a new position.
Circle is declared as a subclass of Shape, redefining the latter's center operation with a more
efficient version of its own and providing definitions for those operations which are pure virtual
functions in Shape (i.e., move, top, etc.
class Circle : public Shape {
public:
  Circle (const Point & C, const double & R) : _center(C) { radius(R); };
  inline void radius (const double & R) { assert (R >= 0); _radius = R; };
  inline double radius () const { return _radius; };
  inline void center (const Point & C) { _center = C; };
  inline Point center () const { return _center; };
  inline void move (const Point & P) { _center.move(P); };
  double top () const { return (_center.y() + _radius); };
  double bottom () const { return (_center.y() - _radius); };
  double left () const { return (_center.x() - _radius); };
  double right () const { return (_center.x() + _radius); };
private:
  Point _center;
  double _radius;
};
When it is passed a reference to a Circle object, recenter invokes Circle's center and move
operations.
Point p1(10,10), p2(5,5);
Circle c(p1,20);
c.recenter(p2);
We can use algebraic specifications to define meanings for Shape's operations. The first argument
of an abstract operation f modeling a corresponding concrete operation f is the class instance to
which f belongs. For example, referring to the above program fragment, recenter(c; p2) is the
abstract counterpart of c.recenter(p2).
We do not specify an abstract class, such as Shape, by means of a sort. Rather, we describe
relationships between the class' defined operations. The completeness of our specification is a
critical issue. Heuristically, we consider each pair, triple, etc. of member functions of Shape and
capture their mutual dependencies, if any, by algebraic equations. We remove obviously redundant
equations.
These specifications define the meanings of operations which manipulate objects of type Shape.
Using these specifications we may prove the correctness of the implementation of member functions
which are not pure virtual, by assuming the correctness of the "future" implementations of the
members which are pure virtual. As discussed earlier, the verification condition is
where the conjunct ensures that the argument of recenter remains constant.
The proof of the implementation of recenter relies on the pre- and post-conditions of Shape's
center and move operations. For such proofs to hold when recenter is passed an object whose
type is derived from Shape, the object's type must be a subtype, not merely a subclass, of type
Shape. To show this, we must demonstrate that the relationships among the operations of Shape
hold for the operations and instances of Circle.
The specifications of Circle (shown below) differ from those of Shape, since the latter is a
classic abstract data type, rather than an abstract class in the C++ sense. The first condition is the
class invariant. It ensures that every Circle has a non-negative radius.
Circle's operations can be annotated as usual [21], although the standard proof techniques for
imperative languages [11, 20] may fall short of proving object-oriented code.
We are concerned with a different problem here, that is, we want to prove that Circle is a
subtype of Shape. For this task we verify that the annotations of Shape hold for every instance
of Circle. This activity is similar to proving that Circle implements Shape with the technique
proposed in [19], with a minor difference: a Circle is a Shape, thus no representation function
or equality interpretation is involved in the proofs.
Most of these proofs are easily formulated as problems for our theorem prover and completed
automatically.
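Such subtype obligations can also be approximated by testing. The Python sketch below is ours and is
only an approximation of the equational proofs: it models Circle with the operations used above and
verifies, on random instances, two Shape-level relationships that its annotations are expected to
preserve, namely that the center is the midpoint of the bounding box and that left ≤ right and
bottom ≤ top.

    import random

    class Circle:
        def __init__(self, cx, cy, r):
            assert r >= 0                      # class invariant
            self.cx, self.cy, self.r = cx, cy, r
        def center(self):  return (self.cx, self.cy)
        def top(self):     return self.cy + self.r
        def bottom(self):  return self.cy - self.r
        def left(self):    return self.cx - self.r
        def right(self):   return self.cx + self.r

    def shape_obligations(s):
        # relationships every subtype of Shape must preserve
        cx, cy = s.center()
        return (s.left() <= s.right() and s.bottom() <= s.top()
                and cx == (s.left() + s.right()) / 2
                and cy == (s.bottom() + s.top()) / 2)

    for _ in range(100):
        c = Circle(random.randint(-10, 10), random.randint(-10, 10), random.randint(0, 5))
        assert shape_obligations(c)
    print("Circle satisfies the checked Shape annotations")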
3 Designing Specifications for Annotations
The problems discussed in the previous sections are formulated and resolved using first order
formulas. These formulas involve the symbols of a specification whose atomic components are
equations. In this section we discuss how we design both our equations and specifications. Our goal
is to produce equations and specifications that are easy to process automatically. The processing
is not limited to proving the formula expressing the correctness of a piece of software, but also
includes analyzing the specifications to determine that they satisfy properties whose absence is
often a sign of flaws.
A major obstacle to automation is the declarative nature of equations. Changing equations into
rewrite rules makes a specification more operational and simplifies the problem.
3.1 Rewriting
The unrestricted freedom, provided by equational reasoning, of replacing a term with an equal term
leads to a combinatorial explosion of possibilities which are hard to manage by a prover, whether
automated or human. An equation t_1 = t_2 can be "oriented" yielding a rewrite rule t_1 → t_2. This
rewrite rule still defines the equality of t_1 and t_2. It allows the replacement of an instance of t_1
with the corresponding instance of t 2 , but forbids replacement in the opposite direction. Orienting
equations transforms an algebraic specification into a term rewriting system [10, 26].
There are two crucial properties that must be achieved when equations are oriented. Two
terms provably equal by equational reasoning, should have a common reduct, i.e., a third term
to which both can be rewritten. This property is referred to as confluence or Church-Rosser. In
addition, it should not be possible to rewrite a term forever, in particular there should be no
circular rewrites. This property is referred to as termination or Noetherianity. A system with both
properties is canonical or complete or convergent. The Knuth-Bendix completion procedure [27]
attempts to transform an equational specification into a complete rewrite system. The termination
of the procedure cannot be guaranteed and its execution may require human intervention. The
difficulty stems from the undecidability of whether or not a rewrite system is canonical [9, 22].
For this reason, we do not attempt to convert an equational specification in the corresponding
complete rewrite system. Rather, we ask specifiers to structure their specifications as rewrite
systems with the above characteristics. The task is eased considerably by two strategies used in designing
a specification. The technique also ensures other properties, such as sufficient completeness,
which we deem essential in our framework.
3.2 Sufficient Completeness of Constructor-Based Systems
To apply our technique we consider only constructor-based systems, i.e., we partition the signature
symbols into constructors and defined operations. The constructors of a type T generate all the
data instances or values of T which are represented by terms, called normal forms, that cannot be
reduced. Terms containing defined operations represent computations. For example, the constructors
of the natural numbers are 0 and successor (denoted by the postfix "+1" in the examples).
The constructors of the type stack discussed in Example 2 are newstack and push, since any stack
is either empty or is obtainable by pushing some element on some other stack. Concat and reverse
are examples of defined operations.
Considering constructor-based systems raises the problem of sufficient completeness, yet another
undecidable property [25]. For the specification of type T to be sufficiently complete, it must assign
a value to each term of type T [18]. If the specification is structured as a constructor-based rewrite
system, sufficient completeness is equivalent to the property that normal forms are constructor
terms. If left sides of axioms have defined operations as their outermost operators and constructor
terms as arguments, we can state necessary and sufficient conditions for the sufficient completeness
of a specification.
A constructor enumeration [7] is a set, C, of tuples of constructor terms such that substituting
constructor terms for variables in the tuples of C exhaustively and unambiguously generates the set
of all the tuples of constructor terms. The set of tuples of arguments of a defined operation should
be a constructor enumeration. For example, the set of tuples of arguments of drop, discussed in
Example 2 and shown below

    C = { ⟨0, s⟩, ⟨i + 1, newstack⟩, ⟨i + 1, push(s, e)⟩ }

is a constructor enumeration of ⟨nat, stack⟩, since every pair ⟨x, y⟩, with x a natural and y a stack, is
an instance of one and only one element of C.
The set of tuples of arguments of the operation max′ discussed in Example 1 is not a constructor
enumeration, since ⟨0, 0⟩ is an instance of both ⟨0, i⟩ and ⟨i, 0⟩. The second axiom of max′ should
have been

    max′(i + 1, 0) = i + 1

Although the difference does not affect the specification, the latter axiomatization removes a (trivial)
ambiguity. Note that if the right side of the second axiom of max′ were defined to be something other
than i, the specification would be inconsistent, since an equality between distinct constructor terms
would be a consequence of the axioms.
An operation is overspecified when two rules can be used to rewrite the same combination of
arguments. It is underspecified when no rule can be used to rewrite some combination of arguments.
Overspecification can be detected by a superposition algorithm [27] which uses unification to detect
overlapping. Underspecification is a natural condition for some operations, although it creates
non-negligible problems. It can be systematically avoided, for example, in the framework of order-sorted
specifications [15]. Underspecification can be detected by an algorithm informally described
below [23]. Operations that are not underspecified are called completely defined.
Huet and Hullot devised an algorithm to detect incompletely defined operations. This algorithm
assembles the arguments of the axioms into tuples and the terms in the i th position of each tuple
are checked to see that they include a variable or "an instance of each constructor." For each
constructor c, tuples of remaining arguments with constructor c or a variable in position i are
formed and recursively tested. A rigorous description appears in [23].
By way of example, we execute this algorithm with the set C as input. For constructor 0
in position 1, tuple 1 is considered. The set of tuples of remaining arguments, {⟨s⟩}, is trivially
complete. For constructor successor in position 1, tuples 2 and 3 are considered. The set of tuples
of remaining arguments is {⟨newstack⟩, ⟨push(s, e)⟩}. It contains an instance of each constructor of
stack in position 1. The completeness of each recursive problem, {⟨i⟩} from newstack and {⟨s, e⟩}
from push, is obvious.
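A bounded version of these checks is easy to implement. The Python sketch below (ours) does not
reproduce the symbolic procedure described above; it simply enumerates ground constructor terms up
to a depth bound and reports underspecification or overspecification of a set of left sides. The
constructor table, the toy elem sort, and the depth bound are assumptions made for illustration.

    from itertools import product

    CONS = {                                   # constructors of each sort: (name, argument sorts)
        "nat":   [("zero", []), ("succ", ["nat"])],
        "stack": [("newstack", []), ("push", ["stack", "elem"])],
        "elem":  [("a", []), ("b", [])],       # a toy element sort, just to have ground terms
    }
    VAR = "_"                                  # a pattern variable

    def ground_terms(sort, depth):
        # all constructor terms of the given sort up to the given depth
        if depth == 0:
            return []
        terms = []
        for c, args in CONS[sort]:
            for combo in product(*[ground_terms(s, depth - 1) for s in args]):
                terms.append((c,) + combo)
        return terms

    def matches(pattern, term):
        if pattern == VAR:
            return True
        return pattern[0] == term[0] and all(matches(p, t) for p, t in zip(pattern[1:], term[1:]))

    def check(lhs_tuples, arg_sorts, depth=4):
        # bounded check for under/overspecification of a defined operation
        for args in product(*[ground_terms(s, depth) for s in arg_sorts]):
            n = sum(all(matches(p, t) for p, t in zip(lhs, args)) for lhs in lhs_tuples)
            if n == 0: return "underspecified on " + str(args)
            if n > 1:  return "overspecified on " + str(args)
        return "completely and unambiguously defined up to depth " + str(depth)

    # drop's left sides: (0, s), (i+1, newstack), (i+1, push(s, e)) -- a constructor enumeration
    drop = [(("zero",), VAR),
            (("succ", VAR), ("newstack",)),
            (("succ", VAR), ("push", VAR, VAR))]
    print(check(drop, ["nat", "stack"]))

    # the original left sides of max': (0, i), (i, 0), (i+1, j+1) -- (0, 0) matches two of them
    maxp = [(("zero",), VAR),
            (VAR, ("zero",)),
            (("succ", VAR), ("succ", VAR))]
    print(check(maxp, ["nat", "nat"]))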
To ensure the confluence of a constructor-based specification it is sufficient to avoid overspecifi-
cation. To ensure sufficient completeness it is necessary, but not sufficient, to avoid underspecifica-
tion. If all operations are completely defined and terminating, then the specification is sufficiently
complete. In fact, every term has a normal form which obviously contains only constructor symbols
because any term containing a defined operation is reducible. Underspecification and overspecification
are easily checked syntactic properties. However, the termination of a rewrite system is
undecidable [9]. In the next section, we discuss syntactic properties sufficient to ensure termination
and show how to obtain them through our design strategies.
3.3 Design Strategies for Axioms
Confluence and sufficient completeness are undecidable, although essential, properties of a spec-
ification. Lack of confluence implies that some computation is ambiguously specified. Lack of
sufficient completeness implies that some computation is unspecified. We regard both conditions
as serious flaws of a specification. We describe two design strategies for generating confluent and
sufficiently complete specifications.
The binary choice strategy is an interactive, iterative, non-deterministic procedure that through
a sequence of binary decisions generates the left sides of the axioms of a defined operation. We
use the symbol "_", called place, as a placeholder for a decision. Let f be an operation of type
s_1 × ... × s_n → s. Consider the template f(_, ..., _), where the i-th place has sort s_i. To get a rule's
left side we must replace each place of a template with either a variable or with a constructor term
of the appropriate sort. In forming the left sides, we neither want to forget some combination of
arguments, nor include other combinations twice. That is, we want to avoid both underspecification
and overspecification. This is equivalent to forming a constructor enumeration.
We achieve our goal by selecting a place in a template and chosing one of two options: "variable"
or "inductive." The choice variable replaces the selected place with a fresh variable. The choice
inductive for a place of sort s i splits the corresponding template in several new templates, one for
each constructor c of sort s_i. Each new template replaces the selected place with c(_, ..., _), where
there are as many places as the arity of c. A formal description of the strategy appears in [3].
We apply the strategy for designing the (left sides of the) rules of the operation drop discussed in
Example 2. The initial template is

    drop(_, _)

We choose inductive for the first place. Since the sort of this place is natural, we split the template
into two new templates, one associated with 0 and the other with successor:

    drop(0, _)        drop(_ + 1, _)

We now choose variable for the remaining place of the first template and variable again for the first
place of the second template to obtain:

    drop(0, s)        drop(i + 1, _)

We choose inductive for the last remaining place. Since the sort of this place is stack, we again split
the template in two new templates, one associated with newstack and the other with push. We
obtain:

    drop(0, s)        drop(i + 1, newstack)        drop(i + 1, push(_, _))

For each remaining choice, variable is selected, completing the rules' left sides.
We now describe the second strategy, which ensures termination. The recursive reduction of a
term t is the term obtained by "stripping" t of its recursive constructors. A constructor of sort s
is called recursive if it has some argument of sort s. For example, successor and push are recursive
constructors. "Stripping" a term c(t_1, ..., t_n), where c is a derived operation and t_i is an application of
a recursive constructor, removes the outermost application of the constructor from t_i. The stripping process is
recursively applied throughout the term. A formal description of the recursive reduction function
appears in [3]. We show its application in examples.
For reasons that will become clear shortly, we are interested in computing the recursive reduction
of the left side of a rewrite rule for use in the corresponding right side. The symbol "$" in the right
side of a rule denotes the recursive reduction of the rule's left side. With this convention, the last
axiom of drop is written

    drop(i + 1, push(s, e)) = $

since the recursive reduction of the left side is drop(i, s). We obtain it by replacing i + 1 with i since
i is the recursive argument of successor, and by replacing push(s; e) with s since s is the recursive
argument of push. When a constructor has several recursive arguments the recursive reduction
requires an explicit indication of the selected argument. We may also specify a partial, rather than
complete, "stripping" of the recursive constructors.
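The "$" convention can be captured by a few lines of code. The Python sketch below is ours; the table
of recursive constructors is an assumption that covers only this example. It computes the recursive
reduction of a rule's left side and reproduces the drop axiom just discussed.

    RECURSIVE = {"succ": 1, "push": 1}     # recursive constructor -> position of its recursive argument

    def strip(t):
        # remove the outermost application of a recursive constructor, if there is one
        if isinstance(t, tuple) and t[0] in RECURSIVE:
            return t[RECURSIVE[t[0]]]
        return t

    def recursive_reduction(lhs):
        # the '$' of a rule: the left side with each argument stripped of its recursive constructor
        return (lhs[0],) + tuple(strip(a) for a in lhs[1:])

    # drop(i + 1, push(s, e))  yields  $ = drop(i, s)
    lhs = ("drop", ("succ", "i"), ("push", "s", "e"))
    print(recursive_reduction(lhs))        # ('drop', 'i', 's')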
The recursive reduction strategy consists in defining the right sides of rules using only functional
composition of symbols of a terminating term rewriting system and the recursive reduction of the
corresponding left sides. If a specification is designed using the binary choice and the recursive
reduction strategies, then it is canonical and sufficiently complete [3].
3.4 Design Strategies for Specifications
The above strategies lead naturally to the design approach called stepwise specification by extensions
[12]. Given a specification S i , a step extends the specification by adding some operations
and yielding a new specification S i+1 . S i+1 is a complete and consistent extension of S i [12], if
every data element of S i+1 was already in S i and distinct elements of S i remain distinct in S i+1 .
Furthermore, if S i is canonical and sufficiently complete, then so is S i+1 [3].
We clarify these concepts by showing the steps yielding the specification of Example 2. Our
initial specification, S 0 , consists of the sorts boolean, natural and stack with their constructors
only, i.e., true, false, 0, successor, newstack, and push. Since there are no rewrite rules, i.e., the
constructors are free, the canonicity and sufficient completeness of S 0 are trivially established. Now
we extend S 0 with the operation concat obtaining S 1 .
concat(newstack, t) = t
concat(push(s, e), t) = push($, e)
The recursive reduction of the left side is concat(s; t). Since we designed the concat axioms using
the binary choice and recursive reduction strategies, S 1 is a complete and consistent extension of
S_0 and is a canonical and sufficiently complete specification. During this step we may also extend
S_0 with take, size, and isnewstack. However, we cannot extend S_0 with reverse because
the right side of one axiom of reverse contains concat. We must first establish the properties of
the specification containing concat. Hence, a separate step is necessary. Then, we extend S 1 with
the operation reverse obtaining S 2 .
reverse(newstack) = newstack
reverse(push(s, e)) = concat($, push(newstack, e))
Our strategies together with the stepwise approach ensure again that S 2 is a complete and consistent
extension of S 1 and is a canonical and sufficiently complete specification. All the specifications
presented in this note are designed in this manner.
The binary choice strategy forces us to construct left sides of plausible axioms for which we
do not want to define a right side. We complete the definition of these "axioms" by placing the
distinguished symbol "?" in the right side. Other specification languages follow an equivalent
approach to control incompleteness. For example, Larch would declare as "exempt" any term
appearing as the left side of one of the axioms we single out with "?". Our strategies can be used also in
the presence of non-free constructors. Some properties of specifications with non-free constructors,
such as confluence, are no longer automatically guaranteed, but they can be checked more easily
than when no strategies at all are used in the design of the specification [3].
4 Proving Theorems About Annotations
The examples in the previous section contain many small theorems that need to be proved. Automating
these proofs makes them easier to carry out and less prone to error. In this section we
report our experience with this task.
4.1 Induction
Many equations cannot be proved by rewriting, i.e., using only equational
reasoning. Such equations can be proved via structural induction [6] or data type induction [19].
Inductive variables of type T are replaced by terms determined by T's constructors and inductive
hypotheses are established. If F is a formula to be proved, v is the inductive variable, and s is the
type of v, our induction proofs are carried out in the following manner. For every constructor c of s,
we prove F with v replaced by c(x_1, ..., x_n), where each x_i is a fresh Skolem constant; and if x_i has
sort s, then F with v replaced by x_i is an inductive hypothesis.
4.2 An Automated Theorem Prover
We have implemented a prototype theorem prover incorporating many concepts from the Boyer-Moore
Theorem Prover [5]. However, except for built-in knowledge of term equality and data type
induction, the knowledge in the theorem prover is supplied by specifications. Our theorem prover
checks that each function is completely defined by executing Huet's inductive definition. During
this check it identifies as "inductive" those arguments filled by instances of constructors. The
discovery of inductive arguments allows the prover to generate automatically the theorems which
constitute the cases of a proof by induction.
The theorem prover executes four basic actions: reduce, fertilize, generalize, and induct. Reduce
applies a rewrite rule to the formula being proved. Fertilize is responsible for "using" an inductive
hypothesis (i.e., replacing a subterm in the current formula with an equivalent term from an inductive
hypothesis). Generalize tries to replace some non-variable subterm common to both sides of
the formula with a fresh variable. Induct selects an inductive variable and generates new equations.
An induction variable is chosen from the set of the inductive arguments by heuristics which include
popularity [5] and seniority.
The theorem prover computes a boolean recursive function, called prove, whose input is an
equation and whose output is true if and only if the equation has been proved. Axioms and lemmas
of the specification are accessed as global data. Proofs of theorems are generated as side effects of
computations of prove. Users may override the automatic choices, made by the prover, for inductive
variables and generalizations. A technique discussed later allows users to use case analyses in proofs.
function prove(E) is
begin
   if E has the form x = x, for some term x, then return true; end if;
   if E can be reduced, then return prove(reduce(E)); end if;
   if E can be fertilized, then return prove(fertilize(E)); end if;
   if E contains an inductive variable, then
      return the conjunction of prove(E_c) over the equations E_c generated by induct(E);
   end if;
   return false;
end prove;
An attempt to prove a theorem may exhaust the available resources, since induction may generate
an infinite sequence of formulas to be proved. However, the termination property of the rewrite
system guarantees that an equation cannot be reduced forever and the elimination of previously
used inductive hypotheses [5] guarantees that an equation cannot be fertilized forever.
5 Experience Proving Theorems
All the proofs discussed in the previous sections have been completely generated with our theorem
prover, except for two proofs of inheritance properties. Many proofs were produced automatically
by the prover. Others were generated only after we supplied additional lemmas, independently
proved using the theorem prover.
The validation proofs for power functions for both while statement examples were all done
automatically, some with just term rewriting and the others with rewriting and induction. The
minimization proofs were slightly more challenging. Those for the array example required three
simple lemmas to be proved and added to the set of axioms before the
theorem prover could finish the proofs. The lemmas were suggested by the similarity of terms on
opposite sides of the equations generated by the theorem prover. Generalization had to be inhibited
to obtain the minimization proofs for the stack example, as explained below. Relationships between
different Skolem constants inserted at the same time may be lost when generalization replaces terms
containing these constants with new constants. In attempting to verify
we generate the equation
Generalization replaces size(A1) with a new Skolem constant B2 and starts to verify the lemma
The relation between A1 and B2, lost by the generalization, is crucial to the validity of the theorem.
In nested inductions, the equation is rewritten into formulas of increasing complexity
and the proof attempt fails. Simply inhibiting generalization in this case solves the problem and
the theorem is proved with just rewriting and induction.
However, generalization is essential to other proofs. We illustrate this by an example, which
also shows how we discover the lemmas that make some proofs possible or simpler. The proof of
the total correctness of Example 2 requires verifying
During an induction on s, the formula to be proved becomes
The prover simultaneously generalizes reverse(take(size(A1); A1)) and push(newstack; A2) and
attempts to prove
The proof is easily completed by nested inductions on A6 and A7. Without generalization, the
proof continues by induction on A1, but the inductive hypothesis is not strong enough to complete
the proof. The prover keeps generating new inductions on formulas with increasing complexities
until the available resources are exhausted.
We regard generalized formulas as lemmas. Often we are able to further generalize the lemmas
suggested by the prover. For example, equation (4) suggests that newstack might be a right identity
of concat. Thus we prove concat(s, newstack) = s
and use it as a lemma in the original proof. This immediately reduced the original formula to
The presence of a leading reverse on each side of the equation suggests that the equality may
depend on the arguments only. Thus we attempt to prove
The proof succeeds and we use this result as a lemma in the original proof. With these two
lemmas the original proof becomes trivial. Both lemmas are obtained by "removing context."
Generalization hypothesizes that the truth of an equation does not depend on certain internal
specific portions of each side. The latter example hypothesizes that the truth of an equation does
not depend on certain external specific portions of each side, i.e., on the leading occurrence of reverse.
A proof of the conditions required for the semantic correctness of the generic instantiations of
accumulator was attempted for the operations max′ (see Example 1), addition, multiplication, and
exponentiation. The theorems relative to the first three instantiations were all proved automatically.
However, the attempt to prove the associativity of exponentiation fails. The prover attempts to
verify it by a nested induction; for one of the cases the equation is reduced to a form that cannot be
proved, and the prover halts with a message that it failed for this case. Thus, only the first three instantiations
of the generic accumulator are semantically correct. Interestingly, the fact that equation (3) holds
when step is associative and init is its left identity was proved automatically by our prover too.
The reversal of a stack discussed in Example 2 can be parallelized by a divide-and-conquer
technique similar to that discussed for accumulation. This program is an instance of a more
complex case of accumulation, in which the type of the result of the accumulation differs from the
type of the elements of the accumulated sequence. We may exploit parallelism if we assume that
a stack is dynamically allocated in "chunks." Each chunk consists of a fixed-size array-like group of
contiguous memory locations, which are addressed by an index. Chunks are allocated on demand
and do not necessarily occupy contiguous locations of memory, rather are threaded together by
pointers as in a linked list. We reverse a stack in parallel only when the stack consists of several
chunks. In this case, we split the stack into two non-empty portions, say x and y, consisting of whole
chunks linked together. We assume that the bottommost chunk of x points to the topmost chunk of y. We
reverse this link and recursively reverse x and y. When a portion of a stack consists of a single
chunk, we only have to swap the content of the chunk's memory locations to achieve reversal. The
correctness of this parallel implementation of a stack reversal relies on the equation

    reverse(concat(x, y)) = concat(reverse(y), reverse(x))

where reverse and concat were defined in Example 2. Operationally, concat stands for the operation
linking together two portions of a stack represented by its arguments. Reverse is overloaded: on
multi-chunk stacks it partitions and recurs whereas on single-chunk stacks it swaps. The proof of the
equation entails only the mutual relationships between these symbols, not their implementations.
Thus, the representation "in chunks" of the type stack is not an issue of the proof. Obviously, in a
comprehensive proof of correctness the differences between the two computations associated to the
reverse must be accounted for, e.g., as discussed in Section 2.2.
Our prover proves the above equation without human help. Interestingly, during the proof,
which is by induction on x, the prover automatically generates and proves an independent lemma
for each case of the induction. One lemma states that newstack is a right identity of concat, the
other that concat is associative.
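The equation and the two lemmas are easy to test on the list model of stacks used in the earlier
sketches; the following Python lines (ours) check them on random samples.

    import random

    reverse = lambda s: s[::-1]
    concat = lambda x, y: x + y          # as before: the elements of x end up on top of y

    for _ in range(100):
        x, y, z = ([random.randint(0, 9) for _ in range(random.randint(0, 6))] for _ in range(3))
        assert reverse(concat(x, y)) == concat(reverse(y), reverse(x))
        assert concat(x, concat(y, z)) == concat(concat(x, y), z)   # concat is associative
        assert concat(x, []) == x                                   # newstack is a right identity
    print("parallel-reversal equation and its lemmas hold on the samples")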
In Section 2.4, we presented annotations involving the C++ predefined type double. We did not
axiomatize this type by means of an algebraic inductive specification, and consequently we could
not use our prover for theorems relying on intrinsic properties of this type. However, all the proofs
in this section, except left(S) ≤ right(S) and bottom(S) ≤ top(S), were easily proved when we
provided some simple unproved lemmas, such as the commutativity of "+" for double.
5.1 An Informal Comparison
The need to supply guidance to an automatic prover is not a peculiarity of our implementation.
For example, the Larch Theorem Prover (LP) [13] is designed to be a proof checker as well as an
automated prover. We consider this approach very sensible.
The guidance required by our prover is in the form of lemmas. Lemmas simplify proofs and
improve understanding by "removing context." In our case they also overcome the lack of certain
proof tactics and of a friendly interface in our implementation.
We briefly compare how LP and our prover prove two sample theorems proposed in [13]. The
proofs involve the types linear container and priority queue, whose axioms are shown below, a total
order, and the type boolean with standard operators.
member : container × element → boolean
next : container → element
next(new) = ?
next(insert(Q, C)) = if isempty(Q) then C
                     else if next(Q) < C then next(Q)
                     else C
The "?" symbol in the axioms of next corresponds to the Larch declaration "exempt" for next(new).
LP was used to prove two theorems: the first states that no element is a member of an empty
container, and the second that no member of a container is smaller than next of that container.
The first theorem was proved after an inductive variable (C) was explicitly picked. The second
theorem required more intervention after rewriting produced a partially reduced formula.
LP was given a series of commands to divide the proof into cases and apply critical-pairs comple-
tions. The case isempty(Q) required a critical-pairs completion. The case ¬isempty(Q) required
case analysis on the truth of V < next(Q). Each case was further subdivided, and for the case
¬(V < next(Q)) a critical-pairs completion was requested.
Our prover also verified both these theorems, the first automatically and the second after we
added a few lemmas to the axioms. Since our prover's only knowledge derives from axioms, we do
not treat conditional expressions or implications in any special manner. They are represented by
means of user-defined operations. For example, implication is defined by the following axioms.
true ⇒ x = x
false ⇒ x = true
Thus we often need lemmas to manipulate these functions.
Since our prover handles only equations, we formulate the second theorem as
The prover automatically chooses A0 as the inductive variable. The base case, new, is
trivially proved by rewriting. The inductive case,
true as an inductive hypothesis and reduces the left side to
where δ is a large nested conditional expression. We use two "standard" lemmas to simplify δ. One
distributes "<" with respect to conditionals, i.e., we replace instances of x < if b then y else z
with if b then x < y else x < z. This transformation allows the use of properties of the ordering
relation "<" such as irreflexivity and the "implied equation" [13, Fig. 9]. The other lemma splits
implications whose antecedent is a disjunction, i.e., we replace instances of (x ∨ y) ⇒ z with
(x ⇒ z) ∧ (y ⇒ z). This transformation allows the use of each antecedent independently. By applications
of these lemmas, the left side of the equation to be proved becomes
Now we enable the use of the antecedent of each conjunct of the formula by means of two "specific"
lemmas. This approach is directly inspired by [5]. One lemma replaces instances of
with the other lemma replaces instances Q
holds. The specific instance of the latter is member(A2; A1) =? i.e., the
contrapositive form of the first theorem proved for this problem.
After several inference steps, the left side is reduced to:
Now, we continue by cases on next(A2) ! A3 because this expression and its negation appear in
the formula. Although, our prover lacks a "proof-by-cases" tactic, we can simulate it by a lemma.
If P is the formula being proved and x is a boolean subexpression of P, we use a lemma to rewrite
P to (x ⇒ P) ∧ (¬x ⇒ P). In x ⇒ P, we can replace the subexpression x of P by any y that is
known to be implied by x, and likewise for the other conjunct; that is, reasoning by cases allows us
to cross-fertilize P [5]. Using this lemma triggers additional rewriting activity to transform the left
side to:
The conjunct with antecedent next(A2) < A3 is rewritten,
fertilized with the inductive hypothesis, and reduced to true.
We continue by cases on member(A2; A1) because the truth of the theorem depends on standard
properties of ordering relations.
The first conjunct is rewritten to a formula whose consequent holds by the transitivity of "≤". By
successively reducing its consequents to true, the second conjunct is eventually reduced to true.
Thus the proof terminates successfully.
The complexities of these proofs are comparable to those obtained with LP. All inductive variables
are chosen automatically, less case analysis is required, and there is no need to invoke the
Knuth-Bendix completion. Although this technique is occasionally useful, we find the proofs it
generates difficult to understand, and thus we prefer this tactic for situations that do not allow
data type induction, e.g., non constructor-based specifications.
We had to provide more lemmas to our prover. These lemmas are either trivial or instances
of trivial lemma schemas, although suggesting them requires "understanding the proof." This
is a mixed blessing. The effort to understand why a proof does or does not go through helps
discovering the relevant properties of a specification. This may lead to better specifications and
even code enhancements.
The user interface of our prover is very primitive. As a consequence we iterated our proof
attempt several times before completion and we had to create manually the instances of the lemma
schemas we supplied to the prover.
6 Concluding Remarks
We have discussed formal verification techniques for problems characterized by both differences
in sizes and addressed properties. Loops are the critical components of small programs. The
problems to be solved in this domain are correctness and termination, lack of expressiveness of
the axiomatizations used for annotations, and the inherent difficulty of reasoning about repeated
modifications of a program state.
Data type implementations are representative of medium size programs. The crucial problem to
be solved is the mutual internal consistency of a group of related subroutines bound by the choice
of the representation of abstract concepts by means of the structures offered by some hardware
architecture and/or some programming language. Proofs of correctness in this domain entail not
only the code, but also the representation mapping which has no physical presence in the software.
Module interconnection is the significant feature of large programs. Syntactic and semantic
commitments of one component may not match the expectations of another. This problem is exacerbated
by languages that allow the customization of a software unit by means of other units.
Proving correctness does not involve code directly, but annotations generated by proofs of correctness
for previous problems.
These tasks can all be addressed with a common formalism (equational specifications) and proof
techniques (rewriting and induction). In particular, their formalizations are based on specifications
that from a qualitative (and to some extent quantitative) point of view are independent of the tasks
and of their sizes. Furthermore, we have discussed conceptual and practical tools for designing and
using these specifications.
A crucial requirement of any specification is its adequacy. The assumption that a specification
is "good" is often mistaken unless considerable care is devoted to its design. Several properties,
unfortunately undecidable, are generally used to address the quality of a specification. We have
shown that, by restricting the expressive power of our specification language, the most common
and fundamental of these properties can be guaranteed. It is hard to say whether our restrictions
are too severe, but it is encouraging to discover that typical verification problems proposed in the
literature do not pose severe problems and that the proposed specifications are easy to use for both
humans and an automated tool.
Rewriting is the fundamental idea behind our approach. The design strategies we have presented
for designing rewrite rules ensure properties of the smaller units of specifications, the defined
operations. Our strategies also allow us to build specifications incrementally in a way that preserves
the properties of smaller units. In building large specifications from smaller ones, we glossed over the
problems of modularization and parameterization of specifications. Our approach is compatible
with various techniques proposed for these features. In this respect, the properties we are able to
guarantee ease the composition of specifications.
The hardest task of nearly any verification problem is proving theorems. Informal proofs are
easier to understand than formal ones, but are less reliable. Formal proofs, except for the simplest
problems, are too complicated for humans without automated tools. Our proofs contain a few
hundreds inferences, the majority of which are simple rewriting steps. Our prover becomes more
effective with occasional hints. The lemmas we supply are "macro-steps" that the prover, for lack
of knowledge and experience, would not carry to completion in certain contexts.
Finding the appropriate lemmas is not always easy. However, some lemmas such as those
discussed in our comparison with LP are relatively standard. Others are suggested by the prover
itself through generalization. From generalizations we sometimes find more elegant lemmas. Finally,
by inspecting proof attempts, we are able to detect repeated patterns or formulas of increasing
complexities which generally lead to proof failures. When these conditions arise, we look for lemmas
that overcome the problems.
We believe that our specification approach is adequate for a large number of cases. However,
our prover still fails to solve most non-trivial problems autonomously. It manages the bookkeeping
of inductions and it provides hints for necessary lemmas. It completely removes the tedium of
rewriting and the clerical mistakes associated with this activity. It prints readable proofs, although
sometimes it makes inferences that are not necessary because the prover's rewrite strategy is in-
nermost. An outermost rewriting strategy would produce shorter and more readable proofs. It
shows a remarkable skill in finding inductive variables. Despite the considerable limitations in its
user interface and proof tactics, the prover increases the quality of our specifications and enhances
considerably the human ability to produce formal proofs for software problems.
--R
A parallel object-oriented language with inheritance and subtyping
Automatically provable specifications.
Design strategies for rewrite rules.
On development of iterative programs from functional specifications.
A Computational Logic.
Proving properties of programs by structural induction.
Complexity analysis of term-rewriting systems
Mathematical Theory of Program Correctness.
Termination of rewriting.
Rewrite systems.
A Discipline of Programming.
Fundamentals of Algebraic Specifications 1: Equations and Initial Semantics.
Debugging Larch shared language specifications.
IEEE Computer
Operational semantics of order-sorted algebras
Introducing OBJ3.
Notes on type abstraction.
The algebraic specification of abstract data types.
Abstract data types and software validation.
An axiomatic basis for computer programming.
Proof of correctness of data representations.
Confluent reductions: Abstract properties and applications to term-rewriting sys- tems
Proofs by induction in equational theories with constructors.
The expressive theory of stacks.
On sufficient-completeness and related properties of term rewriting systems
rewriting systems.
Simple word problems in universal algebras
Modular specification and verification of object-oriented programs
Family values: A semantic notion of subtyping.
A new incompleteness result for Hoare's system.
An introduction to the construction and verification of Alphard programs.
Verification of programs by predicate transformation.
--TR
Reusing and interconnecting software components
The expressive theory of stacks
Termination of rewriting
Complexity analysis of term-rewriting systems
A parallel object-oriented language with inheritance and subtyping
Debugging Larch Shared Language Specifications
Rewrite systems
rewriting systems
A New Incompleteness Result for Hoare''s System
Confluent Reductions: Abstract Properties and Applications to Term Rewriting Systems
Abstract data types and software validation
An axiomatic basis for computer programming
Mathematical Theory of Program Correctness
A Discipline of Programming
Fundamentals of Algebraic Specification I
Modular Specification and Verification of Object-Oriented Programs
Operational Semantics for Order-Sorted Algebra
Design Strategies for Rewrite Rules
--CTR
Olivier Ponsini , Carine Fdle , Emmanuel Kounalis, Rewriting of imperative programs into logical equations, Science of Computer Programming, v.56 n.3, p.363-401, May/June 2005
Antoy , Dick Hamlet, Automatically Checking an Implementation against Its Formal Specification, IEEE Transactions on Software Engineering, v.26 n.1, p.55-69, January 2000 | term rewriting;software tools;abstract data types;algebraic axioms;generic program units;representation functions;program verification;convergence;sufficient completeness;structural induction;boyer-moore prover;rewriting systems;while statements;verification tasks;abstract base classes;mechanical assistance;theorem proving |
631132 | Certification of Software Components. | Reuse is becoming one of the key areas in dealing with the cost and quality of software systems. An important issue is the reliability of the components, hence making certification of software components a critical area. The objective of this article is to try to describe methods that can be used to certify and measure the ability of software components to fulfil the reliability requirements placed on them. A usage modeling technique is presented, which can be used to formulate usage models for components. This technique will make it possible not only to certify the components, but also to certify the system containing the components. The usage model describes the usage from a structural point of view, which is complemented with a profile describing the expected usage in figures. The failure statistics from the usage test form the input of a hypothesis certification model, which makes it possible to certify a specific reliability level with a given degree of confidence. The certification model is the basis for deciding whether the component can be accepted, either for storage as a reusable component or for reuse. It is concluded that the proposed method makes it possible to certify software components, both when developing for and with reuse. | Introduction
Object-oriented techniques make it possible to develop components in
general, and to develop reusable components in particular. These
components must be certified regarding their properties, for example
their reliability.
A component developed for reuse must have reliability measures
attached to it, based on one or several usage profiles. The objective of
the certification method discussed below is to provide a basis for
obtaining a reliability measure of components. The reliability measure
may either be the actual reliability or an indirect measure of reliability
such as MTBF (Mean Time Between Failures).
During development for reuse, a usage model must be constructed
in parallel with the development of the component. The usage model
is a structural model of the external view of the component. The probabilities
of different events are added to the model, creating a usage
profile, which describes the actual probabilities of these events. The
objective is that the components developed will be certified before
being put into the repository. The component is stored together with
its characteristics, usage model and usage profile. The reliability measure
stored should be connected to the usage profile, since another pro-
file will probably give a different perceived reliability of the
component altogether.
Development with reuse involves retrieving components from the
repository, and at the retrieval stage it is necessary to examine the reliability
of the components being reused. The components have been
certified using the specific profiles stored, and if they are to be reused
in a different environment with another usage profile, they must then
be certified with this new usage profile.
A method of certification can be described in the following steps:
1) Modelling of software usage, 2) Derivation of usage profile, 3) Generation
of test cases, 4) Execution of test cases and collection of failure data, and 5)
Certification of reliability and prediction of future reliability. The method
can be applied to certification of components as well as to system cer-
tification.
2.2 Component certification
The components must be certified from an external view, i.e. the actual
implementation of the component must not influence the certification
process. The estimation of usage probabilities must be as accurate as
possible. It may, in many cases, be impossible to exactly determine the
usage profile for a component. This will be problematic, especially
when the individual component is indirectly influenced by the external
users of the system. It must, however, be emphasized that the most
important issue is to find probabilities that are reasonable relative to
each other, instead of aiming at the true probabilities, i.e. as they
will be during operation.
The reuse of components also means that the usage model of the
component can be reused, since the usage model describes the possible
usage of a component (without probability estimates of events
being assigned). This implies that the structural description of the
usage can be reused, even if the actual usage profile can not. The problems
of component reuse and model reuse for different cases are further
discussed in Section 3.5. Component certification is also
discussed by Poore et al. [Poore93].
3. Usage modelling
3.1 Introduction
This section discusses usage models and usage profiles for software
systems as a whole, as well as for individual components, and an
illustration is given in Chapter 5 by means of a simple telecommunication
example. A system may be seen as consisting of a number of
components. A component is an arbitrary element which handles
coherent functionality.
Usage models are intended to model the external view of the usage
of the component. The user behaviour should be described and not
the component behaviour. The users may be humans or other compo-
nents. Modelling usage of software components includes problems
which do not arise when modelling the usage for systems as a whole.
The primary users of a component are those in the immediate vicinity,
for example other components. But in most cases there are other users
involved, for example end-users, which indirectly affect the use of
actual components. Therefore, the usage of a component may have to
be derived from an external user of the system, even if the user does
not communicate directly with the component.
It is assumed that the usage models are created in accordance with
the system structure to support reuse. The view is still external, but
the objective is to create usage model parts which conform to the
structure of the system. The reuse of components also means that it
will be possible to reuse the usage model describing the external
usage of that particular component. The usage models of the components
may combine in a way similar to the components' combination
within a system, providing services to an external user.
Different usage profiles may be attached to one usage model.
3.2 Usage model
Markov chains as a means of modelling usage are discussed by Whittaker
and Poore [Whittaker93]. The use of Markov chains has several
advantages, including well-known theories. A main disadvantage is,
however, that the chain grows very large when applying it to large
multi-user software systems, [Runeson92]. The objective of the usage
model is to determine the next event, based on the probabilities in the
Markov chain. The chain is used to generate the next event without
taking the time between events into consideration, which means that
the times between events are handled separately and an arbitrary time
distribution can be used.
A hierarchical Markov model is introduced, the state hierarchy
model (SHY), to cope with this disadvantage, known as the state
explosion problem [Runeson92]. The SHY model can describe different
types of users, i.e. human users, other systems, system parts, and
multiple combinations and instances of the user types. During the
development of the usage model the user types are handled and constructed
separately, and they are composed into a usage model for the
system as a whole. The model, being modular, is therefore suitable for
reuse, since the objective is to ensure a conformity between the usage
model and the system structure, see Section 3.1.
System configurations, for example, in different markets may differ
in terms of user types and services available. Therefore, usage models
for different system configurations, may be constructed, by combining
the SHY models of the reusable components, and SHY models of the
configuration-specific parts, hence obtaining SHY models for different
system configurations. In particular, different services are one of the
component types to be reused. This implies that the certification is
often related to the services potentially provided by the system.
The general principles behind the SHY model are shown in Figure 1.
Figure 1. The state hierarchy model (levels: usage level, user type level, user level, service level, behaviour level; links connect behaviour-level chains).
The usage is divided into a hierarchy, where each part represents
an aspect of the usage.
1. The usage level is a state which represents the complete usage.
2. The user type level contains user types or categories.
3. The user level represents the individual users.
4. The service level represents the usage of the services which are
available to the user. The service usage description is instantiated
for different users.
5. The behaviour level describes the detailed usage of a single
service as a normal Markov chain.
The interaction between different services is modelled as links,
meaning a transition in one Markov chain on the behaviour level,
causing a transition in another chain on the behaviour level. An exam-
ple, A dials B, is shown in Figure 2, where the transition from idle to
dial for user A leads to a transition from idle to ring for user B.
Figure 2. Link example, A dials B (user A: idle to dial; user B: idle to ring).
The model is discussed in more detail by Wohlin and Runeson
[Wohlin93, Runeson92] and it is also used in the example in Chapter 5,
see in particular Section 5.2.
3.3 Usage profile
The usage model is complemented with a usage profile, which assigns
probabilities to every branch in the hierarchy, together with every
transition on the behaviour level. The probabilities must be derived
based on experience from earlier systems and expected changes concerning
the actual system or expected usage of the system as it is marketed.
The probabilities in the hierarchy can be assigned static values, as
in the example in Chapter 5, or be dynamically changed depending on
the states of the users in the model. The latter approach is needed to
be able to model the fact that some events are more probable under
special conditions. It is, for example, more probable that a user who
has recently lifted a receiver will dial a digit, than that a specific user
will lift the receiver. The use of dynamic probabilities in the hierarchy
are further discussed by Runeson and Wohlin [Runeson92].
Test cases are selected by running through the SHY model. First, a
user type is selected by random controlled selection, then a specific
user is chosen, after which a service, available to the selected user, is
drawn. Finally, a transition in the Markov chain on the behaviour level
for the actual service is selected. This transition corresponds to a spe-
cific stimulus which is appended to the test script, and the model is
run through again, beginning from the usage level, see Figure 1 and
Section 5.4. The generation of a specific stimulus also means generating
the data being put into the system as parameters to the stimulus,
hence data is taken into account.
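The selection procedure just described can be sketched as a small program. The sketch below is only an illustration: the user types, users, services, behaviour-level chains and all probabilities are hypothetical placeholders, the service is drawn uniformly for brevity, and links between services are not modelled.

```python
import random

# Hypothetical SHY-like structure; probabilities are illustrative placeholders.
usage = {
    "UT1": {"prob": 0.7, "users": {"U11": 0.5, "U12": 0.5}, "services": ["S1"]},
    "UT2": {"prob": 0.3, "users": {"U21": 1.0}, "services": ["S2", "S3"]},
}

# Behaviour-level Markov chains: state -> list of (next_state, probability, stimulus).
behaviour = {
    "S1": {"idle": [("busy", 0.6, "lift"), ("idle", 0.4, "wait")],
           "busy": [("idle", 1.0, "hang_up")]},
    "S2": {"idle": [("idle", 1.0, "noop")]},
    "S3": {"idle": [("idle", 1.0, "noop")]},
}

def pick(choices):
    """Select a key from a dict of {key: probability}."""
    r, acc = random.random(), 0.0
    for key, p in choices.items():
        acc += p
        if r < acc:
            return key
    return key

def generate_test_case(length, state):
    """Run through the hierarchy once per stimulus and append it to the script."""
    script = []
    for _ in range(length):
        ut = pick({k: v["prob"] for k, v in usage.items()})    # user type level
        user = pick(usage[ut]["users"])                        # user level
        service = random.choice(usage[ut]["services"])         # service level (uniform here)
        cur = state[(user, service)]
        arcs = behaviour[service][cur]
        nxt = random.choices(arcs, weights=[p for _, p, _ in arcs])[0]
        state[(user, service)] = nxt[0]                        # behaviour level transition
        script.append((user, service, nxt[2]))                 # record the stimulus
    return script

initial = {("U11", "S1"): "idle", ("U12", "S1"): "idle",
           ("U21", "S2"): "idle", ("U21", "S3"): "idle"}
print(generate_test_case(5, dict(initial)))
```

In a full implementation the data accompanying each stimulus would also be generated, as noted above.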
3.4 Usage profile and reuse
A component developed for reuse is certified with a particular usage
profile for its initial usage and is stored in a repository for future
reuse. The component is stored together with its characteristics, usage
model and usage profile.
The reliability measure is attached to the actual usage profile used
during certification, and since it is based on this particular profile it is
not valid for arbitrary usage. The parts of the components, most frequently
used in operation, are those tested most frequently, which is
the key objective of usage testing. These parts are less erroneous, since
failures found during the certification are assumed to be corrected.
Another usage profile relating to other parts of the component, will
probably give a lower reliability measure.
When developing with reuse of software components, it is necessary
to compare the certified usage profiles with the environment in
which the component is to be reused. If a similar profile is found, the
next step is to assess whether the reliability measure stored with the
component is good enough for the system being developed. If the
component has not been certified for the usage profile of the new sys-
tem, a new certification must be performed. The usage model stored
with the component is used for the certification. After certification the
new profile and the new certified reliability are stored in the repository
with the component.
Objective measures of reliability for an arbitrary usage profile
would be of interest in development with reuse. It is, however, impossible
to record such measures, since the definition of reliability is: the
probability of a device performing its purpose adequately for the
period of time intended, under the operating conditions encountered.
The component has therefore to be re-certified if it is reused under
other operational conditions than initially profiled.
3.5 Reuse of the usage model
The proposed usage model itself can easily be reused. The extent of
model reuse and how the model is reused depends on how the system
or components of the system are reused. Some different reuse scenarios
are presented for component certification, together with those in a
system context.
Component certification
1. Reuse with the same usage model and usage profile
The usage model and usage profile for a component can be
reused without modification, if the component has been certi-
fied before being stored in the repository.
2. Reuse of the usage model with an adjusted usage profile
The usage model for a component can be reused, if the component
is to be certified individually with a new usage profile. The
certification can be obtained with the same model by applying
the new usage profile. This gives a new reliability measure of the
component, based upon the new expected usage.
3. Reuse with adjustments in both the usage model and the usage
profile
Adjustments in the structural usage of a component are made if
the component is changed in order to be reused. The usage
model must therefore be updated accordingly and a new certifi-
cation must be made.
Reuse of components in a system context
1. Reuse of a component without modification
The objective is to derive the usage model of the system, from
the usage models of the components, when a system is composed
of a set of components. This can, in particular, be
achieved when the structural usage of the component is un-
changed, but the probabilities in the usage profile are changed.
It should be further investigated whether it is possible to derive
system reliability measures from the reliability measures of the
components, when the usage profiles for the components are
unchanged. The main problem is the probable interdependence
between components, which has not been assessed during component
certification. This is an area for further research.
2. Reuse of a component with modifications
The change in the usage model is a result of the change or adaptation
of the component. Therefore, the usage model of the individual
component must be changed, thereby changing the usage
model of the system. The system has either to be certified with
the expected usage profile of the system, or the reliability of the
system must be derived from the components. This problem is
further addressed in [Poore93].
3. Changes to the existing system
If an element of the system is changed, for example a component
is replaced with another which has different functionality, the
usage of the affected component is changed in the existing usage
model. A new certification must then be obtained based on the
new usage model.
Two factors concerning the SHY usage model make it suitable for
reuse. First, the distinction between the usage model and the usage
profile is important, since it facilitates the use of the same model, with
different profiles, without changes. Secondly, the modularity of the
usage model, and the traceability between the constituents in the
usage model and components in the system, are essential from the
reuse point of view.
3.6 Evaluation of the SHY model
Important aspects of the SHY model:
1. Intuitive: The conformity between model parts and system constituents
makes it natural to develop the model. This is particularly
the case when a system constituent provides a specific
service for the external user.
2. Size: The model size increases only linearly with the increasing
numbers of users, [Runeson92].
3. Degree of detail: The model supports different levels of detail.
The actual degree of detail is determined based on the system,
the size of the components and the application domain.
4. Dependencies: Functional dependencies are included in the
model through the link concept.
5. Reuse support: The model supports reuse, as stated in
Section 3.5, through its modularity and conformity to system
constituents.
6. Assignment of probabilities: The structure of the model helps to
partition the problem into smaller parts, hence making it easier
to derive transition probabilities.
7. Calculations: It is theoretically possible to make calculations on
the hierarchical model, by transforming it into a normal Markov
model. However, this is practically impossible due to the size of
the normal model. Work is in progress to allow calculations to
be performed directly on the SHY model.
8. Generation of test cases: Test cases can be generated automatically
from the model.
4. Certification of components
4.1 Theoretical basis
In the book by Musa et al. [Musa87] a model for reliability demonstration
testing is described. The model is a form of hypothesis certifica-
tion, which determines if a specific MTBF requirement is met with a
given degree of confidence or not. A hypothesis is proposed and the
testing is aimed at providing a basis for acceptance or rejection of the
hypothesis. The procedure is based on the use of a correct operational
profile during testing and faults not being corrected. The primary
objective is, however, to certify that the MTBF requirement is fulfilled
at the end of the certification, hence corrections during the certifica-
tion process ought to be allowed. A relaxation of the assumption in the
model of no correction after failure is discussed below. The hypothesis
certification model is based on an adaptation of a sampling technique
used for acceptance or rejection of products in general.
The hypothesis is that the MTBF is greater than a predetermined
requirement. The hypothesis is rejected if the objective is not met with
the required confidence and accepted if it is. If the hypothesis is neither
accepted nor rejected, the testing must continue until the required
confidence in the decision is achieved.
The hypothesis certification is performed by plotting the failure
data in a control chart. Figure 3 shows failure number (r) against normalized
failure time (tnorm). The failure time is normalized by dividing
the failure time by the required MTBF.
The testing continues whilst the measured points fall in the continue
region. The testing is terminated when the measure points fall in
the rejection or the acceptance region, and the software is then rejected
or accepted accordingly.
Figure 3. Control chart for hypothesis certification of the reliability (failure number r plotted against normalized failure time tnorm).
The control chart is constructed by drawing the acceptance and
rejection lines. They are based on the accepted risks taken for acceptance
of a bad product and rejection of a good product. The calculations
are described by Musa et al. [Musa87].
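Musa's chart is constructed from a sequential probability ratio test over the normalized failure times. The sketch below computes candidate acceptance and rejection boundaries under assumed values for the discrimination ratio and the two risks; these parameter names and defaults are illustrative only, and the exact construction should be taken from [Musa87].

```python
from math import log

def boundaries(r, gamma=2.0, alpha=0.1, beta=0.1):
    """Accept/reject boundaries (in normalized time units) at failure number r,
    derived from a sequential probability ratio test as in reliability
    demonstration charts.  gamma, alpha and beta are assumed example values."""
    accept_at = (r * log(gamma) + log((1 - alpha) / beta)) / (gamma - 1)
    reject_at = (r * log(gamma) - log((1 - beta) / alpha)) / (gamma - 1)
    return reject_at, accept_at

def classify(times_between_failures, mtbf_objective, **kw):
    """Walk through the failure data and report the first decision reached.
    For simplicity the check is made only at failure instants; in practice
    acceptance may also be declared between failures, as soon as enough
    failure-free time has accumulated for the current failure count."""
    t_norm = 0.0
    for r, dt in enumerate(times_between_failures, start=1):
        t_norm += dt / mtbf_objective
        reject_at, accept_at = boundaries(r, **kw)
        if t_norm >= accept_at:
            return ("accept", r, t_norm)
        if t_norm <= reject_at:
            return ("reject", r, t_norm)
    return ("continue", len(times_between_failures), t_norm)
```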
Correction of software faults can be introduced by resetting the
control chart at the correction times. It is not practical to reset the control
chart after every failure, so the chart is reset after a number of failures
has occurred. The reason for resetting the chart is mainly that
after the correction, the software can be viewed as a new product.
It can be concluded that the hypothesis certification model is easy
to understand and use. The hypothesis certification model provides
support for decisions of acceptance or rejection of software products
at specified levels of confidence.
If different failure types are monitored, for example with different
severities, the failure data for each type can be plotted in a diagram
and related to a required MTBF for specific types. The overall criterion
for acceptance should be that the software is accepted after being
accepted for all failure types.
The hypothesis certification model does not give any predictions of
the future reliability growth. The certification can, however, be complemented
with a software reliability growth model for that purpose.
4.2 Practical application
The hypothesis certification model is very suitable for use for certifica-
tion of both newly developed and reusable software components. A
major advantage with the model is that it does not require a certain
number of failures to occur before obtaining results from the model. It
works even if no failure occurs at all, since a certain failure-free execution
time makes it possible to state that the MTBF, with a given degree
of confidence, is greater than a predetermined value. Therefore, the
MTBF is a realistic measure, even if a particular software component
may be fault free. The model does not assume any particular failure
distribution. Most available software reliability growth models
require the occurrence of many failures before predictions can be
made about component reliability. A normal figure would be in the
region of 20-40 failures. Hopefully, this is not a realistic failure expectation
figure for a software component.
It is also anticipated that as the popularity of reuse develops, the
quality of software systems will improve, since the reusable components
will have been tested more thoroughly and certified to a specific
reliability level for different usage profiles. This implies that,
when performing systems development with reuse, the number of
faults in a component will be exceptionally low. The proposed
hypothesis certification model can still be applied.
5. A simple example
5.1 General description
The objective in this section is to summarize and explain the models
presented in the sections above in a thorough but simple manner
using an example from the telecommunication domain. The example
follows the steps outlined in Section 2.1 and the basic concept of the
usage model presented in Figure 1.
A module M is to be certified. The module is composed of two
components, C1 and C2. The module itself can be seen as a compo-
nent. C1 is a reused component found in a repository and C2 is reused
with extensive adjustments. C1 offers service S1, to the user, whereas
C2 originally offered S2. In the reused component version, S2 is
changed to S2', and a further service is added, S3. The module is
shown in Figure 4.
Figure 4. Module to be certified.
The reused component, C1, has been certified before but with a
usage profile other than the profile expected when using C1 in module
M. It must therefore be re-certified.
5.2 Modelling of software usage
The SHY usage model for module M is developed according to the
structure in Figure 1. Two different user types use the module, namely
UT1 and UT2. There are two users of the first type, i.e. U11 and U12,
and one user of the second type, U21. The three upper levels of the
SHY model - usage level, user type level and user level - are illustrated
in Figure 5.
Figure 5. Upper levels of the usage model for the module.
Three services are available for the users of the module. The first
service, S1 in C1, is reused without adjustment. Therefore, the behaviour
level usage model for the service can also be reused without
adjustment. The usage profile for S1 has, however, changed and a new
profile must be derived. Service S2 in C2 has changed to S2', which
results in the addition of two new states to the original usage model of
S2. For service S3 in C2, a new behaviour level usage model must be
developed, since the service is new. The behaviour level usage models
for the services are illustrated in Figure 6. The shaded areas are
changed or new.
Figure 6. Usage models for S1 (reused), S2' (modified) and S3 (new).
Now the entire usage model can be composed using the structure
in Figure 5, together with the usage models for each service in
Figure 6. The users of type 1 have access to service S1. For each user of
user type 1, an instance of the usage model of service S1 is connected.
The user of type 2 has access to services S2' and S3, and hence one
instance of each usage model is connected to the user of type 2. If the
services are interdependent, links between services would be added,
hence modelling the dependence between services used by the external
users.
The entire usage model for the module is shown in Figure 7.
Figure 7. Usage model for the module.
5.3 Derivation of usage profile
The usage profile is derived, i.e. probabilities are assigned to the different
transitions in the usage model. Static probabilities are used in
this example, see Section 3.3. First the probabilities for the different
user types are determined, followed by the assignment of probabilities
for the users of each type. The probabilities for selection of the
services available for each user are determined. Finally, the transition
probabilities within the behaviour level model for the services are
assigned.
The usage profile can be derived bottom-up, if it is more suitable
for the application in question. The strength of the modelling concept
is that only one level need be dealt with at a time.
A possible usage profile for the usage model is shown in Figure 8,
though without probabilities for S2 and S3.
Unfortunately, it is impossible to go through the example in detail
in this article, and in particular to discuss the links in more depth. For
a more detailed presentation see [Wohlin93, Runeson92].
5.4 Generation of test cases
Initially, the usage model starts in a well-defined initial state, i.e. each
of the behaviour level models are in their defined initial states.
Figure 8. Usage profile for the module.
The selection starts in the usage state, where a random number is drawn,
for example 0.534 selecting UT1, since 0.534 < 0.7, see Figure 8.
Another random number is drawn, for example 0.107 selecting U11,
and for selection of its service no random number is needed. The
actual state of the service usage model is denoted X. A transition is
then selected and performed. The stimulus connected to that transition
is appended to the test script and the SHY model is run through
again beginning from the usage state.
Several test cases or one long test case can be generated, depending
on the application.
5.5 Execution of test cases and collection of failure
data
The test cases generated are executed as in other types of tests and the
failure times are recorded. Failure times can be measured in terms of
execution time or calendar time, depending on the application. An
example of failure data is shown in Table 1. Times between failures are
given in an undefined time unit. The failure data are constructed to
illustrate the example.
Table 1. Failure data.
Failure number          1    2    3    4    5    6    7    8    9    10
Time between failures   320  241  847  732  138  475  851  923  1664 1160
5.6 Certification of reliability
The reliability is certified by applying the hypothesis certification
model to the collected failure data. In fact, an MTBF requirement is
certified. The MTBF objective in this example is 800 time units, which
means that the first normalized failure time (tnorm) is equal to 320/800,
see Table 1 and Figure 9. The data points are plotted in a control chart,
with the module under certification being accepted when the points
fall in the acceptance region. The module is accepted between the
eighth and the ninth failure and the time for acceptance is about 5 400
time units.
Figure 9. Control chart for certification (failure number r vs. normalized failure time tnorm; regions: reject, continue, accept).
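The normalized failure times plotted in Figure 9 can be reproduced directly from Table 1; the short sketch below merely accumulates the failure data and divides by the MTBF objective of 800 time units (the acceptance boundary itself is not recomputed here).

```python
times_between_failures = [320, 241, 847, 732, 138, 475, 851, 923, 1664, 1160]
mtbf_objective = 800

cumulative = 0
for r, dt in enumerate(times_between_failures, start=1):
    cumulative += dt
    print(f"failure {r:2d}: t = {cumulative:5d}, tnorm = {cumulative / mtbf_objective:.2f}")

# The first point is 320/800 = 0.40; acceptance at about 5 400 time units
# corresponds to tnorm of roughly 6.75, between the eighth and ninth failure.
```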
6. Conclusions
Reuse will be one of the key developmental issues in software systems
in the future, in particular for reliable systems. Therefore, the reused components
must be reliable. Component reliability can be achieved by
applying sound development methods, implementing fault tolerance
and finally, by adapting methods to ensure and certify the reliability
and other quality attributes of the components both when developing
for and with reuse.
Usage testing or operational profile testing has already shown its
superiority over traditional testing techniques from a reliability per-
spective. A higher perceived reliability during operation can be
obtained by usage testing, than with coverage testing, with less effort
and cost. The gains in applying usage testing have been presented by,
for example, Musa [Musa93].
This article has concentrated on the certification of software com-
ponents. The other issues (development of reliable software and fault
tolerance) are equally important, although not discussed here.
The proposed method of usage modelling, i.e. the state hierarchy
(SHY) model, is shown to be a valuable abstraction of the fundamental
problem concerning components and their reuse. The model in
itself is divided into levels and the services are modelled as independently
as possible, therefore supporting the reuse objective. The area of
certification of software components is quite new. Some ideas and
capabilities have been presented, but more research and, of course,
application are needed.
The reliability certification model discussed is well established in
other disciplines and it can also be adapted and used in the software
community. The model is simple to understand and apply.
It has been emphasized that the usage models developed and the
reliability measures with a given usage profile can be reused together
with the components. The division into a structural usage model and
different usage profiles makes it possible to reuse the usage model,
and to apply new profiles, as a new environment has a behaviour different
to that considered earlier.
The models and methods have been applied to a simple example to
illustrate the opportunities and the benefits of the proposed scheme. It
can therefore be concluded that the proposed method, or one similar
to it, should be applied in a reuse environment, to obtain the necessary
reliability in the components both when developing for and with
reuse.
Acknowledgement
We wish to thank Johan Brantestam, Q-Labs for valuable technical
comments, Helen Sheppard, Word by Word and Graeme Richardson
for helping us with the English, as well as the whole personnel at Q-
Labs. We also acknowledge suggestions made by anonymous IEEE
Transactions on Software Engineering referees.
--R
Software Reuse: Emerging Technology
--TR
Software reliability: measurement, prediction, application
Software reuse: emerging technology
Markov analysis of software specifications
Planning and Certifying Software System Reliability
Operational Profiles in Software-Reliability Engineering
--CTR
Hai Zhuge, An inexact model matching approach and its applications, Journal of Systems and Software, v.67 n.3, p.201-212, 15 September
Lutz Prechelt , Walter F. Tichy, A Controlled Experiment to Assess the Benefits of Procedure Argument Type Checking, IEEE Transactions on Software Engineering, v.24 n.4, p.302-312, April 1998
S. Castano , V. De Antonellis , B. Pernici, Building reusable components in the public administration domain, ACM SIGSOFT Software Engineering Notes, v.20 n.SI, p.81-87, Aug. 1995
Victor R. Basili , Steven E. Condon , Khaled El Emam , Robert B. Hendrick , Walcelio Melo, Characterizing and modeling the cost of rework in a library of reusable software components, Proceedings of the 19th international conference on Software engineering, p.282-291, May 17-23, 1997, Boston, Massachusetts, United States
Sathit Nakkrasae , Peraphon Sophatsathit, A formal approach for specification and classification of software components, Proceedings of the 14th international conference on Software engineering and knowledge engineering, July 15-19, 2002, Ischia, Italy
John C. Knight , Michael F. Dunn, Software quality through domain-driven certification, Annals of Software Engineering, 5, p.293-315, 1998
Osman Balci, A methodology for certification of modeling and simulation applications, ACM Transactions on Modeling and Computer Simulation (TOMACS), v.11 n.4, p.352-377, October 2001
Osman Balci , Richard E. Nance , James D. Arthur , William F. Ormsby, Improving the model development process: expanding our horizons in verification, validation, and accreditation research and practice, Proceedings of the 34th conference on Winter simulation: exploring new frontiers, December 08-11, 2002, San Diego, California | software reliability;software cost;software reuse;software quality;failure statistics;usage testing;usage modelling technique;hypothesis certification model;usage profile;usage modelling;software reusability;software cost estimation;software component certification |
631135 | A Characterization of the Stochastic Process Underlying a Stochastic Petri Net. | Stochastic Petri nets (SPN's) with generally distributed firing times can model a large class of systems, but simulation is the only feasible approach for their solution. We explore a hierarchy of SPN classes where modeling power is reduced in exchange for an increasingly efficient solution. Generalized stochastic Petri nets (GSPN's), deterministic and stochastic Petri nets (DSPN's), semi-Markovian stochastic Petri nets (SM-SPN's), timed Petri nets (TPN's), and generalized timed Petri nets (GTPN's) are particular entries in our hierarchy. Additional classes of SPN's for which we show how to compute an analytical solution are obtained by the method of the embedded Markov chain (DSPN's are just one example in this class) and state discretization, which we apply not only to the continuous-time case (PH-type distributions), but also to the discrete case. | Introduction
About one decade ago, Molloy [1], Natkin [2], and
Symons [3] independently proposed associating exponentially
distributed firing delays to the transitions of a
Petri net. Generalized stochastic Petri nets (GSPNs), introduced
by Ajmone Marsan, Balbo, and Conte in [4], relax
this condition by allowing "immediate" transitions, with a
constant zero firing time. In GSPNs, firings of immediate
transitions have priority over firings of timed transitions.
Each immediate transition has associated a weight which
determines the firing probability in case of conflicting immediate
transitions. Stochastic activity networks (SANs)
[5] and stochastic reward nets (SRNs) [6] are two other
classes of Petri nets where transition firing is either exponentially
distributed or constant zero. GSPNs, SANs, and
SRNs can be automatically transformed into continuous-time
Markov chains (CTMCs) or Markov reward processes.
The need for non-exponentially distributed transition
firing times in SPNs has been observed by several au-
thors. Bechta, Geist, Nicola, and Trivedi defined extended
stochastic Petri nets (ESPNs) [7], where the firing delay
of timed transitions may have arbitrary distribution. The
numerical solution for ESPNs is applicable when the underlying
stochastic behavior is a semi-Markov process. Ciardo
proposed several extensions to ESPNs and called this modeling
formalism semi-Markov SPNs (SM-SPNs) [8]. Deterministic
and stochastic Petri nets (DSPNs), introduced by
Ajmone Marsan and Chiola [9] as an extension to GSPNs,
include exponentially distributed and constant timing. If
at most one deterministic transition is enabled in a mark-
ing, the steady-state solution can be computed using an
embedded Markov chain. Timed Petri nets (see [10] for a
recent survey paper) and generalized timed Petri nets [11]
employ a discrete time-scale for their underlying Markov
process. Timed transitions in TPNs and GTPNs fire in
three phases and the next transition to fire is preselected
according to a probability distribution.
Recently, the class of extended DSPNs has been introduced
[12]. In extended DSPNs, transitions with arbitrarily
distributed firing times are allowed under the restriction
that at most one transition with non-exponentially distributed
firing time is enabled in each marking. General
formulas for the steady-state solution of extended DSPNs
were derived using the method of supplementary variables.
In case the non-exponential distributions are piecewise
specified by polynomials, an efficient numerical solution is
possible. Furthermore, Choi, Kulkarni, and Trivedi introduced
the class of Markov regenerative SPNs (MR-SPN)
in [13], [14] which is equivalent to the class of extended
DSPNs. The authors observed that the stochastic process
underlying a MR-SPN is a Markov regenerative process and
derived general formulas for the transient and steady-state
solution. The transient solution method employs inversion
of matrices containing expressions of Laplace-Stieltjes
transforms and the inversion of Laplace transforms.
More general classes of SPNs have been considered by
Haas and Shedler. They introduced regenerative SPNs and
showed how this class of SPN can be analyzed by means of
regenerative simulation [15]. They showed that, for each
generalized semi-Markov process (GSMP) [16], there exists
an equivalent SPN with generally distributed firing times
[17] and proposed to analyze them by discrete-event simulation.
In this paper, we explore various subclasses of SPNs,
obtained by imposing restrictions on the combinations of
firing distributions types allowed or on the effect of a transition
firing on the other enabled transitions. This leads to
a hierarchy of SPN classes where modeling power is reduced
in exchange for an increasingly efficient solution. GSPNs,
DSPNs, SM-SPNs, TPNs, and GTPNs are particular entries
in our hierarchy. Furthermore, we show that state
discretization can be applied to both the continuous-time
and the discrete-time case. The class of Discrete-time SPNs
introduced by Molloy [18] is then extended by allowing arbitrary
discrete firing time distributions rather than only
the geometric distribution.
We introduce the semi-regenerative SPNs (SR-SPNs) for
which we show how to compute the steady state solution
by embedding a Markov chain at appropriately defined re-generation
points. The evolution of a SR-SPN between
regeneration points is not restricted to be a CTMC, as for
MR-SPNs and extended DSPNs. Thus, we relax the restriction
that in any marking at most one timed transition
with non-exponentially distributed firing delay is enabled.
In Section IV, we present a SR-SPN of a transmission
line, where two deterministic transitions are concurrently
enabled. In particular, we consider SR-SPNs where all
transition firing distributions can be piecewise defined by
polynomials multiplied by exponential expressions. This
class of probability distribution is referred to as expolyno-
mial distributions and includes the exponential as well as
the constant distribution as special cases.
The paper is organized as follows. Section II defines
SPNs and describes their behavior. A hierarchical classification
of SPNs according to the underlying stochastic process
is presented in Section III and the feasibility of their
numerical solution is discussed. To illustrate the numerical
solution method of SR-SPNs, a SR-SPN of a simple transmission
line is analyzed in Section IV. Finally, concluding
remarks are given.
II. Stochastic Petri nets
A Petri net is a directed bipartite graph in which the first
set of vertices corresponds to places (drawn as circles) and
the other set of vertices corresponds to transitions (drawn
as bars). Places contain tokens which are drawn as dots.
The set of arcs is divided into input, output and inhibitor
arcs (drawn with an arrowhead on their destination, inhibitor
arcs have a small circle); each arc has an associated
multiplicity. A marking of a Petri net is given by
a vector which contains as entries the number of tokens in
each place. A transition is said to be enabled in a marking
if all of its input places contain at least as many tokens
as the multiplicity of the corresponding input arc and all
of its inhibitor places contain fewer tokens than the multiplicity
of the corresponding inhibitor arc. A transition
fires by removing tokens from the input places and adding
tokens to the output places according to the arc multiplic-
ities. The reachability set is defined to be the set of all
markings reachable by firings of transitions from the initial
marking.
Throughout this paper, we adopt the common formalism
introduced for Petri nets in which transition firings is
augmented with time [6]. We consider stochastic Petri nets
(SPN) where the firing of a transition is an atomic operation
and two types of transitions exist: immediate tran-
sitions, which fire without delay, and timed transitions,
which fire after a random firing delay. The firing of immediate
transitions has priority over the firing of timed
transitions. Each immediate transition has associated a
weight which determines its firing probability in case this
transition is conflicting with some other immediate transi-
tion. The firing delay of each timed transition is specified
by a probability distribution function. As a consequence,
the reachability set of a SPN can be divided into vanishing
and tangible markings depending on whether an immediate
transition is enabled or not. The tangible markings of
a SPN correspond to the states of an underlying stochastic
process, the marking process. Firing weights of immediate
transitions, average firing delays of timed transitions, and
arc multiplicities may be marking-dependent. Each quantity
is evaluated in the marking before determining which
transitions are enabled and what effect they have when
they fire.
A timed transition is denoted by t, the set of all timed
transitions by T. A tangible marking is denoted by μ, the
tangible reachability set by S. E(μ) is the set of transitions
enabled in marking μ, and S_t = {μ ∈ S : t ∈ E(μ)} is the
set of markings where t ∈ T is enabled. F_t(μ, ·) is the
probability distribution function for the firing time of t in
μ. If this distribution is not marking dependent, we write F_t(·).
Particularly important for continuous-time SPNs is the
case where the firing time distributions may depend on the
marking only through a "scaling factor" [19]. Define the
average firing time of transition t in marking μ as the mean
of F_t(μ, ·), denoted m_t(μ). Then, we require that
F_t(μ, x) = F̄_t(x / m_t(μ)), where
F̄_t(·) is the "normalized firing time distribution" of
t, which has average one and is not a marking-dependent
quantity.
To specify the influence of the firing of a transition on
the firing process of other transitions enabled in the current
marking, execution policies have been introduced in
[19]. We allow different execution policies for timed transitions
in a SPN which may also depend on the marking
[20]. Define e_{t,s}(μ) ∈ {R, C} to be the execution policy to
be used for transition s when transition t fires in marking μ.
If e_{t,s}(μ) = R, s "Restarts" (it samples a new random
delay from the associated distribution); if e_{t,s}(μ) = C,
it "Continues".
A. Stochastic behavior
The tangible marking μ of the SPN as a function of the
time θ is described by a continuous-time stochastic process,
the marking process {μ(θ), θ ≥ 0}, or by a discrete-time bivariate
stochastic process [6], {(θ^[n], μ^[n]), n ∈ ℕ}, where
θ^[n] is the instant of time when a timed transition fires and
μ^[n] is the marking reached after this firing. θ^[0] is zero and μ^[0]
is the initial marking.
Consider now the remaining firing time (RFT) of each
timed transition after a change of the marking. The RFT
of transition t enabled in marking μ^[n], denoted ρ_t^[n], specifies the
time to be spent in markings enabling t before transition
t can fire. The transition t with the minimum RFT enabled
in μ^[n] fires at time θ^[n+1] = θ^[n] + ρ_t^[n]. If the firing
time distributions of two or more transitions enabled in
a marking have jumps at the same instants of time, the
probability of them having the same RFT is positive. We
do not consider this case, although weights can be used to
define a probability mass function over these transitions.
At time θ^[0], the RFT of each timed transition enabled
in the initial marking is given by a random sample
drawn from F_t(μ^[0], ·), the firing time distribution associated
with this transition (all other RFTs are undefined).
If transition t ∈ E(μ^[n]) has the minimum RFT at time
θ^[n], the RFT ρ_s^[n+1] of any other transition s ∈ E(μ^[n+1])
at time θ^[n+1] is:
ρ_s^[n+1] = ρ_s^[n] − ρ_t^[n]  if e_{t,s}(μ^[n]) = C, or
ρ_s^[n+1] = a new sample from F_s(μ^[n+1], ·)  if e_{t,s}(μ^[n]) = R.
Using the terminology of [19], this behavior corresponds to
a "race policy", since the minimum RFT determines the
next transition to fire. After the firing of a timed transi-
tion, the next tangible marking is reached either directly or
after the firing of immediate transitions. The probability of
branching to marking μ′ after the firing of timed transition
t in marking μ is denoted Δ^t_{μ,μ′}.
Furthermore, the definition of e allows us to choose between
"age memory", "enabling memory", and "resampling" [19]
in a marking-dependent way: after the firing of transition
t, a transition s might either restart, e_{t,s}(μ) = R, and sample a new
firing time, or continue, e_{t,s}(μ) = C, and keep its remaining firing time.
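The race between remaining firing times and the marking-dependent Restart/Continue policies can be summarized in a small simulation step. The sketch below is a simplified illustration: markings, enabling tests, delay sampling and the policy function are caller-supplied placeholders, immediate transitions are ignored, and ties are not handled.

```python
def fire_next(marking, rfts, enabled, sample_delay, policy, next_marking):
    """One step of the race: the enabled transition with the minimum RFT fires;
    the RFTs of the other transitions in the new marking are either decremented
    (Continue) or resampled (Restart).  All arguments are caller-supplied
    callbacks/structures; this only illustrates the semantics described above."""
    t = min(enabled(marking), key=lambda x: rfts[x])    # transition that fires
    elapsed = rfts[t]                                   # minimum RFT
    new_marking = next_marking(marking, t)
    new_rfts = {}
    for s in enabled(new_marking):
        if s in rfts and s != t and policy(marking, t, s) == "C":
            new_rfts[s] = rfts[s] - elapsed             # Continue: keep remaining time
        else:
            new_rfts[s] = sample_delay(new_marking, s)  # Restart, or newly enabled
    return new_marking, new_rfts, elapsed
```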
The following firing time distributions are important in
practical applications:
• geometric: X ∼ Geom(p, σ), where σ is the length of
the unit step. The constant distribution is a special
case: Const(c) is equivalent to Geom(1, c).
• discrete: X ∼ Discr, the distribution function of X
is obtained as a weighted sum of a (finite or countably
infinite) number of constant distributions. The geometric
distribution is a special case. It is possible to
approximate any distribution arbitrarily well by using
a sufficiently large number of elements in the weighted
sum.
• exponential: X ∼ Expo(λ). This distribution approaches Const(0) as λ
increases.
• uniform: X ∼ Unif(a, b), with a ≤ b. This distribution approaches Const(b) as a
approaches b.
• polynomial: X ∼ Poly, the distribution function of
X is piecewise defined by polynomials in θ (expressions
of the form Σ_j a_j θ^j, a_j ∈ ℝ) and has finite support.
The finite discrete and uniform distributions
are special cases. It is possible to approximate
any distribution arbitrarily well by using either a sufficiently
large number of polynomials of small degree
(e.g., constants, as for the discrete distributions) or by
using a single polynomial of sufficiently large degree.
• expolynomial: X ∼ Expoly, the distribution function
of X is piecewise defined by expolynomials in θ
(expressions of the form Σ_j a_j θ^{k_j} exp(−λ_j θ)). The polynomial and exponential
distributions are special cases.
III. SPNs with efficient solution
In this section, we describe several types of behavior
which might render the solution analytically tractable.
This leads to a hierarchy of SPN classes where modeling
power is reduced in exchange for an increasingly efficient
solution. The classes are defined by the underlying stochastic
process.
A. Markov SPNs
The main obstacle to an analytical solution is the presence
of the RFT in the state description. If the firing time
distribution of t is memoryless, the RFT of t in μ has the
same distribution as the entire firing time, F_t(μ, ·), hence,
there is no need to include it in the state description. Ac-
cordingly, two classes of SPNs were defined: we call them
"CTMC-SPNs", where all distributions are exponential [1],
[2], and "DTMC-SPNs", where all distributions are geometric
[18], since the marking process {μ(θ), θ ≥ 0} is a
continuous-time Markov chain (CTMC) or a discrete-time
Markov chain (DTMC), respectively.
As the geometric distribution is memoryless only at discrete
time instants that are multiples of the "time step" σ, exponential
and geometric distributions cannot be freely mixed. A
special case of memoryless distribution is the constant zero,
the distribution of the immediate transitions.
GSPNs [4] are a special case of CTMC-SPNs where the
mass at zero is either zero or one. By using state-expansion,
any phase-type distribution with any mass at zero still results
in an underlying CTMC [19].
The "discrete-time SPNs" defined in [18] allow only geometric
distributions with the same step σ, possibly with
parameter one, that is, the constant σ, since Const(σ) is
equivalent to Geom(1, σ). A discretization analogous to
the one used to expand a phase-type distribution can be
applied to the discrete-time case [8]. First, a geometric distribution
with unit step iσ can be discretized as shown in
Figure 1, where t_1 has step 3 and t_2 has step 1. Weights are
needed to decide whether transition t_1 or t_2 will fire, given
that both attempt to fire, an event which has probability pq
when the underlying DTMC is in state (100c). This allows
more generality than in [18], since the timing of a transition
(i.e., F_{t_1}) and its ability to fire when competing
with other transitions (i.e., w_{t_1}) are described
by different quantities. Then, Const(iσ) is equivalent to
Geom(1, iσ), hence TPNs are also reducible to a DTMC
with unit step σ, if all constant firing times are a multiple
of σ.
Fig. 1. Discretizing two Geom distributions.
The process described by a TPN is probabilistic even
if its firing times are not random variables since, whenever
two transitions have the minimum RFT, the conflict must
be resolved probabilistically using the weight information.
Finally, since any discrete distribution can be obtained as
a weighted combination of constants, any SPN whose firing
distributions have as support a subset of {iσ : i ∈ ℕ} can
be reduced to a DTMC with unit step σ.
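The resolution of simultaneous firing attempts by weights, as in the discretization of Figure 1, can be illustrated with a small sketch; the transitions, steps, success probabilities and weights below are hypothetical and do not correspond to a particular net.

```python
import random

# Hypothetical discretized transitions: each attempts to fire only at steps
# that are multiples of its own step, with the given success probability.
transitions = {
    "t1": {"step": 3, "p": 0.4, "weight": 2.0},
    "t2": {"step": 1, "p": 0.6, "weight": 1.0},
}

def dtmc_step(k):
    """Return the transition that fires at DTMC step k (or None).  When several
    transitions attempt to fire simultaneously, the weights decide."""
    attempts = [name for name, t in transitions.items()
                if k % t["step"] == 0 and random.random() < t["p"]]
    if not attempts:
        return None
    weights = [transitions[name]["weight"] for name in attempts]
    return random.choices(attempts, weights=weights)[0]

for k in range(1, 7):
    print(k, dtmc_step(k))
```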
For both CTMC-SPNs and DTMC-SPNs, steady-state
and transient analysis can be performed using standard
numerical techniques [6]. Assume that the state space of
the process is S and the initial probability distribution is
π(0). The steady-state probability vector π of a CTMC
described by the infinitesimal generator Q, or of a DTMC
described by the one-step transition probability matrix P,
is the solution of πQ = 0 or π(P − I) = 0, respectively,
subject to Σ_{i∈S} π_i = 1,
assuming that S contains only one recurrent class of states
(if this is not the case, S can be partitioned into a transient
class and two or more recurrent classes, which can be
solved independently). Sparsity-preserving iterative methods
such as Gauss-Seidel or Successive Over-Relaxation can
be effectively used for the solution. For transient analysis
of the continuous case, the transient probability vector at
time θ is the solution of dπ(θ)/dθ = π(θ)Q,
and can be computed using Jensen's method, also called
Uniformization or Randomization [21]. For the discrete
case, the Power method can be used:
π(iσ) = π((i−1)σ) P, starting from π(0)
(the iterations halt when iσ ≥ θ).
for transient analysis.
B. Semi-Markov SPNs
If the firing of a transition t in marking μ^[n] causes transition
s to restart its firing in μ^[n+1], e_{t,s}(μ^[n]) = R, the RFT
of s must be resampled in μ^[n+1]. If all transition pairs
behave this way, the marking process is a semi-Markov
process (SMP), that is, it enjoys absence of memory immediately
after every state change [19], [7]. If s has an
exponential distribution, the choice between Restart and
Continue is irrelevant and we assume e_{t,s} = R in this case.
The time instants θ^[n] are called regeneration points
[22].
For transient and steady-state analysis of a SMP, the evolution
of the process between the regeneration points must be
studied. Equation (1) describes the kernel
of a SMP. Since the future of the marking process after a
regeneration point becomes a probabilistic replica of the
future of the process after time zero, if started in the same
state, the kernel is also given by Equation (2).
Equation (3) describes the vector of holding
time distributions in the states of the SMP, which can be
reduced to Equation (4).
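For reference, the kernel and holding-time quantities discussed here (and referenced in the text as Equations (1)-(4)) can be written in standard Markov renewal notation as follows; the symbols K and h are the usual ones, and this is a reconstruction consistent with the prose rather than a verbatim copy of the original equations.

```latex
% Kernel of the SMP (the quantity referenced as Equations (1) and (2)):
K_{ij}(\theta) = \Pr\{\mu^{[n+1]} = j,\; \theta^{[n+1]} - \theta^{[n]} \le \theta \mid \mu^{[n]} = i\}
               = \Pr\{\mu^{[1]} = j,\; \theta^{[1]} \le \theta \mid \mu^{[0]} = i\}.

% Holding-time distributions (the quantity referenced as Equations (3) and (4)):
h_i(\theta) = \sum_{j \in S} K_{ij}(\theta)
            = \Pr\{\theta^{[1]} \le \theta \mid \mu^{[0]} = i\}.
```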
The matrix Π(θ) of transient solutions of a SMP is given
by the following system of integral equations:
Π(θ) = I − diag(h(θ)) + ∫_0^θ dK(x) Π(θ − x),
where diag(h(θ)) represents a square matrix having the
elements of h(θ) on the main diagonal and zeros elsewhere.
For steady-state analysis, an embedded Markov chain
(EMC) can be defined. The one-step transition probability
matrix P of the EMC is computed by studying the
evolution of the SMP between regeneration points. Having
obtained the matrix P, the steady-state solution of the
EMC can be derived by solving the linear system of global
balance equations. Subsequently, the vector c of conversion
factors is computed. The entries of c represent the
expected holding times in the states of the SMP between
two regeneration points. The solution vector of the EMC
is multiplied by the vector c and normalized to obtain the
steady-state probability vector of the SMP.
The one-step transition probability matrix P of the EMC
is derived from the kernel, P = lim_{θ→∞} K(θ), and the vector
c of conversion factors is given by c_i = ∫_0^∞ (1 − h_i(θ)) dθ.
The steady-state solution γ of the EMC can be obtained
by solving γ = γ P,
subject to Σ_{i∈S} γ_i = 1,
and the steady-state solution of the SMP is given by
π_i = γ_i c_i / Σ_{j∈S} γ_j c_j.
For a SM-SPN, the entries of the kernel and the vector of
holding times are given by (for simplicity, we assume that
simultaneous firings have zero probability):
K_{μν}(θ) = Σ_{t∈E(μ)} Δ^t_{μ,ν} ∫_0^θ ∏_{s∈E(μ), s≠t} (1 − F_s(μ, x)) dF_t(μ, x),
h_μ(θ) = 1 − ∏_{t∈E(μ)} (1 − F_t(μ, θ)).
Equation (5) can be solved directly or by employing
Laplace-Stieltjes transforms as recently proposed in [13],
[14]. This solution method may cause numerical difficulties
and its computational cost is significant for large models.
However, when all firing time distributions are expolyno-
mial distributions, the entries of P and c can be obtained
by symbolic integration, and the steady-state solution can
then be obtained by solving (6) and (7).
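A compact sketch of this embedded-chain computation is given below; the one-step matrix P and the conversion factors c are assumed to be already available (for expolynomial distributions they would be obtained by symbolic integration), and the three-state numbers are hypothetical.

```python
import numpy as np

def smp_steady_state(P, c):
    """Steady state of an SMP from its EMC: solve gamma = gamma P with
    sum(gamma) = 1, then weight by the conversion factors c and normalize."""
    n = P.shape[0]
    # Stack the balance equations and the normalization condition.
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1)
    b[-1] = 1.0
    gamma, *_ = np.linalg.lstsq(A, b, rcond=None)
    pi = gamma * c
    return pi / pi.sum()

# Hypothetical 3-state embedded chain and expected holding times.
P = np.array([[0.0, 0.7, 0.3],
              [0.5, 0.0, 0.5],
              [1.0, 0.0, 0.0]])
c = np.array([2.0, 1.0, 0.5])
print(smp_steady_state(P, c))
```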
C. Semi-regenerative SPNs
If there is a marking μ and two transitions t, s ∈ E(μ)
such that e_{t,s}(μ) = C, t fires before s, and s does not
have an exponential distribution, the marking process is
not semi-Markov. However, under certain conditions, it
might be possible to find regeneration points, at which the
process enjoys absence of memory. This process is called
a semi regenerative process (SRP) in [22], so we call semi
regenerative SPN (SR-SPN) a SPN whose marking process
is a SRP. For the transient and steady-state analysis, the
evolution of the process between the regeneration points
must be studied. Since this can be a GSMP with arbitrarily
distributed holding time in each state, the marking process
of a SR-SPN is more general than for MR-SPNs [13] or
an extended DSPN [12]. The set of regeneration points of
a SR-SPN can be expressed as a subset {θ^[n_k], k ∈ ℕ} of the firing instants {θ^[n], n ∈ ℕ},
where each regeneration point θ^[n_k] must satisfy the condition
that, when the SPN enters marking μ^[n_k] at time θ^[n_k] by
firing transition t, any transition enabled in μ^[n_k]
restarts its firing process.
In the following, we concentrate on steady-state analysis.
As for SMP, an EMC is defined at regeneration points of
a SRP. For the one-step transition probability matrix P of
the EMC, the evolution of the SRP between the regeneration
points must be studied. The steady-state solution of
the EMC can be computed as in the case of SMPs, but the
conversion factors constitute a matrix rather than a vector.
The steady-state probability vector of the SRP is derived
by multiplying the steady-state probability vector of the
EMC by the matrix of conversion factors and normalizing
according to Equations (6) and (8).
The one-step transition probability matrix P of the EMC and the matrix C of conversion factors are defined by:
P_{ij} = Pr{next regeneration marking is j | marking i at the current regeneration point},   (9)
C_{ij} = E{time spent in marking j during [θ^[n_k], θ^[n_{k+1}]) | marking i at θ^[n_k]}.   (10)
The steady-state solution of the EMC can still be obtained by solving the linear system of equations (6). The solution of the EMC, γ, is converted to that of the SRP, π, by multiplying by the conversion factors and normalizing: π = γC / (γC 1^T).
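The corresponding computation can be sketched as follows; this is an illustration only, assuming P and the conversion-factor matrix C have already been obtained. The only change relative to the SMP case is that the conversion factors now form a matrix.

```python
import numpy as np

def srp_steady_state(P, C):
    """Steady state of a semi-regenerative process: solve the EMC, then
    multiply by the matrix C of conversion factors and normalize."""
    n = P.shape[0]
    A = np.vstack([(P - np.eye(n)).T, np.ones(n)])
    b = np.zeros(n + 1); b[-1] = 1.0
    gamma, *_ = np.linalg.lstsq(A, b, rcond=None)
    pi = gamma @ C                        # conversion factors are matrix-valued
    return pi / pi.sum()
```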
For a constructive definition of SR-SPNs, the sets S_E, T_G, and T_R are introduced [23]. S_E is the set of all markings in which only exponential transitions are enabled. T_G is the set of all general (non-exponential) transitions of the SPN. T_R ⊆ T_G is a set which contains regenerative transitions. A transition t ∈ T_G is called regenerative if all other
transitions of the SPN restart when t becomes enabled,
fires, or becomes disabled. Note that TR is not required to
contain all general transitions.
A SPN is a SR-SPN if a set T_R of regenerative transitions can be found such that S_E and the sets of markings in which the transitions of T_R are enabled constitute a partition of S.
The definition of the regeneration points of a SR-SPN depends on whether a regenerative transition is enabled or not. For states μ^[n_k] ∈ S_E, the next regeneration point θ^[n_{k+1}] is chosen to be the instant of time after the transition with the minimum firing delay has fired. For states in which a regenerative transition t ∈ T_R is enabled, the next regeneration point is chosen to be the instant of time after t has fired or has become disabled.
The possible evolution of the SR-SPN during the enabling
period of a regenerative transition t is described
by the subordinated (stochastic) process of t 2 TR . The
matrix of transient state probabilities for this process is
state i at time 0g:
Based on Π^t(·), P and C can be defined row-wise. For all regenerative transitions t ∈ T_R, the rows corresponding to states i ∈ S^t are defined by Equations (11) and (12), where u_i is the i-th row unity vector, Δ^t is the matrix of branching probabilities after a firing of t, and Ω^t and Ψ^t are the transient probabilities and the expected holding times of the states of the subordinated process:
Ω^t = ∫_0^∞ Π^t(x) dF^t(x),   Ψ^t = ∫_0^∞ ( ∫_0^x Π^t(u) du ) dF^t(x),
where F^t is the firing time distribution of t.
The entries of P and C for rows corresponding to states i ∈ S_E are given by
P_{ij} = λ_{ij} / λ_i,   C_{ij} = δ_{ij} / λ_i,
where λ_{ij} is the rate leading from state i to state j and λ_i is the sum of all outgoing rates for state i.
If a regenerative transition t is never enabled together with other transitions, the steady-state solution of a SR-SPN is insensitive to the distribution of t, since Equations (11) and (12) then depend only on the mean firing time of t.
An efficient numerical solution of Equations (11) and
(12) is the critical step for the practical application of SR-
SPNs. The next section presents a SR-SPN whose subordinated
process is a SMP.
The numerical solution of SR-SPNs with a large state space can be performed efficiently if the subordinated processes are CTMCs. In this case, at most one regenerative transition may be enabled in each marking. SR-SPNs with this restriction are equivalent to the class of extended DSPNs defined in [12] and Markov regenerative SPNs defined in [13]. In the case of a subordinated CTMC, the matrix of transient state probabilities for the subordinated process is given by the matrix exponential of the generator matrix Q^t for the subordinated CTMC of transition t:
Π^t(x) = exp(Q^t x).
In
Appendix
A we show how to generalize Jensen's method
for an efficient calculation of the rows of Equations (11)
and (12) in case of expolynomial regenerative transitions.
This technique has been implemented in TimeNET (Timed
Net Evaluation Tool), which can solve SR-SPNs, provided
that at most one expolynomial transition is enabled in
each marking, plus any number of exponential transitions
(TimeNET offers simulation capabilities as well, for SPNs
not satisfying this requirement).
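The matrix exponential above can be evaluated by uniformization (Jensen's method). The following Python sketch is a generic illustration of that computation only, not TimeNET code; the generator matrix in the example is made up, and stiff problems would additionally require left truncation and scaling of the Poisson weights.

```python
import numpy as np

def ctmc_transient(Q, t, eps=1e-12, max_terms=100_000):
    """Transient probabilities exp(Q t) of a CTMC with generator Q,
    computed by uniformization (Jensen's method)."""
    n = Q.shape[0]
    q = max(1e-12, -Q.diagonal().min())   # uniformization rate q >= max |Q_ii|
    P = np.eye(n) + Q / q                 # DTMC of the uniformized chain
    term = np.eye(n)                      # holds P^k
    weight = np.exp(-q * t)               # Poisson weight beta(0; q*t)
    result = weight * term
    acc, k = weight, 1
    while acc < 1.0 - eps and k < max_terms:
        term = term @ P
        weight *= q * t / k               # beta(k; q*t) from beta(k-1; q*t)
        result += weight * term
        acc += weight
        k += 1
    return result

# Tiny illustrative generator (made-up rates):
Q = np.array([[-2.0, 2.0],
              [1.0, -1.0]])
print(ctmc_transient(Q, 0.5))
```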
The analysis can be generalized to the case where the firing time distribution of a regenerative transition depends on the marking through a scaling factor. This can be done by scaling the generator matrix by the corresponding scaling factors. In this case the matrices Ω^t and Ψ^t are given by integrals of the matrix exponential of the scaled generator matrix over the firing time distribution of t. Equations (16) and (17) generalize the results presented in [24] (see Appendix B).
Fig. 2. SPN hierarchy: DTMC-SPNs (underlying process: DTMC), CTMC-SPNs (underlying process: CTMC), SMP-SPNs (underlying process: SMP), SR-SPNs (underlying process: SRP, with general subordinated processes), and general SPNs (underlying process: general).
D. Generalized semi-Markov SPNs
If there is a marking i and two concurrently enabled transitions t, s ∈ E(i) such that neither t nor s has an exponentially distributed firing delay and neither can serve as a regenerative transition, the underlying process is too difficult to study as a SRP, or it might even not be a SRP. The underlying stochastic process is a GSMP. It is
also possible to derive the state equations for the transient
or steady-state case by means of supplementary variables
[12]. The resulting equations constitute a system of partial
differential equations which can be analyzed numerically
by replacing the differential quotients by finite difference
quotients. In practice, however, simulation is the method
of choice for the study of large SPNs with generally distributed
firing times [15], [17].
Figure
2 summarizes the classes of SPNs and relates
them to the underlying stochastic processes.
E. Practical limitations
We conclude this section with an informal discussion of
the models that can be solved in a reasonable amount of
time on a modern workstation. For analytical solutions,
the main obstacle is often the memory required to store the
reachability graph, or even just its tangible portion. As a
rule of thumb, 10^5 to 10^6 markings and 10^6 to 10^7 marking-to-marking transitions can be considered the limit. A large
amount of virtual memory is usually required, but it is
not in itself sufficient, since the algorithms to build the
reachability graph exhibit little locality, causing an excessive
number of page faults if not enough main memory is
available.
In this respect, the memory requirements for a Markovian
SPN (CTMC-SPN or DTMC-SPN) are the most
straightforward to understand. For SMP-SPNs, the requirements
are similar, since the embedded process is a
DTMC. The study of a SR-SPN requires instead to consider
multiple stochastic processes. If the regenerative
transitions have expolynomial firing distributions and the
subordinated processes are CTMCs, it is appropriate to
attempt an analytical solution. For the subordinated pro-
cesses, the memory locality can be higher and the maximum
main memory requirements smaller, since the solution
of each process is performed in isolation and each of them
is usually smaller than the entire chain for a similar Markovian
SPN. However, the transition probability matrix describing
the embedded process, while possibly being of a
smaller dimension than in the Markovian case, often contains
many additional entries, corresponding to marking-
to-marking paths, rather than single transitions. Hence, for
SR-SPNs, the overall memory requirements might be better
or worse than for a similar Markovian model, depending
on the particular model.
Another issue to consider is the execution time required
for a steady-state or transient solution. For a Markovian
SPN, assuming enough memory is available, the steady-state
solution becomes a problem only if the convergence
of the linear system is excessively slow. For transient anal-
ysis, a large number of iterations is required if the system
is stiff, that is, if there is a mixture of slow and fast events
(entries differing by many orders of magnitude) and the
time at which the solution is required is sufficiently large.
For the steady-state study of a SR-SPN, analogous considerations
apply to each individual process, since a transient
analysis of the subordinated processes and a steady-state
analysis of the embedded process is required. Hence, the increased
generality of the underlying stochastic process does
not necessarily have a negative effect on the solvability.
In all cases where a numerical solution is impossible or
impractical, such as the analysis of Markovian SPNs with
excessively large reachability sets, transient analysis of a
non-trivial SMP-SPN or SR-SPN, and analysis of a general
SPN or a SR-SPN with non-Markovian subordinated
processes, simulation is an effective approach.
IV. An example
In this section, a SPN model of a simple transmission line
is considered, to illustrate the steady-state analysis of a SR-
SPN. It is assumed that, after the generation of a message,
transmission begins and a timeout clock is started. Conflict
for the medium can delay the start of transmission. If the
timeout elapses before the transmission of the message is
completed, the transmission will be repeated.
Figure
3 shows a SPN model of the system. The generation
of a message is modeled by transition t_0, which has an arbitrary firing time distribution with average firing time τ_0. The timeout and the transmission of the message are represented by transitions t_1 and t_3 with constant firing times of τ_1 and τ_3, respectively. The delay to acquire the medium is modeled by transition t_2, which has an exponentially distributed firing time with rate λ. The
multiplicity of some input arcs is marking-dependent to ensure that places are empty after the firing of t_1 or t_3. If the timeout is not larger than the transmission time, no successful transmission can ever take place, hence we assume τ_1 > τ_3.
Fig. 3. SR-SPN of a transmission line.
Fig. 4. The reachability graph for the model (markings 0: 1000, 1: 0110, 2: 0101).
In this SPN, t 1 and t 3 have constant firing times, are concurrently
enabled, and start their firing process at different
instants of time. Nevertheless, the analysis is possible using
firings of the transitions t_0 and t_1 as regeneration points: T_R = {t_0, t_1}. The reachability set of the SR-SPN consists of three markings and is shown in Figure 4.
The sets S^{t_0} = {0} and S^{t_1} = {1, 2} constitute a partition of the reachability set S. Transition t_0 is exclusively enabled, hence the steady-state solution of the SPN is insensitive to the distribution of t_0; it only depends on the average firing time τ_0. The process subordinated to transition
t_1 is a SMP and is shown in Figure 5. Since t_1 can become enabled only upon entering marking 1, the first and last rows of Π^{t_1}, Ω^{t_1}, and Ψ^{t_1} are not needed. Since neither t_0 nor t_1 can become enabled upon entering marking 2, the last row of C is not needed either. In addition, after t_1 fires, the SPN cannot be in marking 2, so the last column and row of P are not needed either. In other words, P only needs to describe the probability of transitions between markings 0 and 1. Finally, the entry of Ψ^{t_1} corresponding to marking 0 does not need to be computed because transition t_1 is not enabled in marking 0. For simplicity, we denote unneeded entries with the symbol "–".
The matrix Π^{t_1}(τ) of the transient state probabilities for the SMP subordinated to t_1 can be obtained symbolically; its only needed row (the one for marking 1) is
Π^{t_1}_{10}(τ) = (1 - e^{-λ(τ - τ_3)}) u(τ - τ_3),   Π^{t_1}_{11}(τ) = e^{-λτ},   Π^{t_1}_{12}(τ) = 1 - Π^{t_1}_{10}(τ) - Π^{t_1}_{11}(τ).
Fig. 5. The SMP subordinated to t_1 (markings 0: 1000, 1: 0110, 2: 0101).
The matrix of state probabilities of the SMP at the instant of firing of t_1, Ω^{t_1} = Π^{t_1}(τ_1), and the matrix of expected holding times in each marking up to the firing of t_1, Ψ^{t_1} = ∫_0^{τ_1} Π^{t_1}(τ') dτ', are obtained from the expression above, where u(·) is the unit step function. Employing Equations (9) and (10) yields the one-step transition probability matrix P of the EMC and the matrix of conversion factors C: the needed entries of P are P_{01} = 1 and P_{10} = 1, while C_{0·} = (τ_0, 0, 0) and C_{1·} is the corresponding row of Ψ^{t_1}.
All changes of marking caused by firings of transition t 1
lead to marking 0. The steady-state probability vector of the EMC is computed by solving the linear system of equations (6). In this particular case, the solution is γ_0 = γ_1 = 1/2.
The steady-state marking probability vector π of the SR-SPN is obtained as the product of the steady-state probability vector γ of the EMC by the matrix C of conversion factors, γC, and by subsequently normalizing γC according to Equation (8).
For a numerical example, assume specific values for τ_0, τ_1, τ_3, and λ. The average cycle time for the token is given by the sum of the average enabling times of t_0 and t_1, that is, τ_0 + τ_1, and its throughput is 1/(τ_0 + τ_1). However, only a portion ω^{t_1}_{10} of this throughput corresponds to successful completions of the transmission, hence the real transmission throughput is ω^{t_1}_{10} / (τ_0 + τ_1). Finally, when a timeout occurs, the probability that the transmission has not yet started can be read from the corresponding entry of Ω^{t_1}; the complementary probability corresponds to the undesirable event of having a timeout while the transmission is underway.
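As a cross-check of the analytical treatment, the example can also be studied by simulation. The sketch below assumes, following the analysis above, that each cycle consists of a message generation period followed by the full timeout interval τ_1, and that a transmission is successful when the exponentially distributed medium-acquisition delay plus the constant transmission time τ_3 fits within τ_1. All numeric parameter values are illustrative, not those of the original example.

```python
import random

def simulate(tau0=1.0, tau1=1.0, tau3=0.5, lam=5.0, cycles=200_000, seed=1):
    """Monte Carlo estimate of the success fraction and effective throughput
    of the transmission-line model (made-up parameter values)."""
    rng = random.Random(seed)
    successes, total_time = 0, 0.0
    for _ in range(cycles):
        gen = rng.expovariate(1.0 / tau0)   # message generation, mean tau0
        acquire = rng.expovariate(lam)      # delay to acquire the medium
        if acquire + tau3 <= tau1:          # finished before the timeout
            successes += 1
        total_time += gen + tau1            # the timeout t1 always runs for tau1
    return successes / cycles, successes / total_time

frac, throughput = simulate()
print(f"P(successful transmission per cycle) ~ {frac:.3f}")
print(f"effective transmission throughput    ~ {throughput:.3f} msgs/unit time")
```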
V. Conclusion
In this paper, we have classified the SPNs into various
classes, according to the nature of their underlying stochastic
process. The most complex class we identified, SR-
SPNs, corresponds to semi-regenerative processes and effectively
extends the class of SPNs for which an analytical
solution is known.
We illustrate the type of behavior that can be modeled by
the SR-SPNs using a simple system transmitting messages
as an example. In it, two deterministic transitions can
become enabled concurrently, but not necessarily at the
same time. A similar model was given as an example of
a DSPN which cannot be solved with previously known
methods [9].
Appendix
I. Extending Jensen's method to expolynomial
distributions
In this appendix we derive an efficient numerical computation
of Equations (11) and (12) for a transition t with an
expolynomial distributed firing time and a subordinated
CTMC [23]. The presented formulas generalize Jensen's
method and the formulas presented in [12] for polynomial
distributions.
For an expolynomial firing time distribution, the solution of (11) and (12) is a weighted sum of matrices of the form E^(τ) = exp(Qτ) and L^(τ; m; λ). E^(τ) can be calculated with Jensen's method: define
A = (1/q) Q + I,   q ≥ max_i |Q_ii|.
The rows of E^(τ) are obtained as
u_i E^(τ) = Σ_{k=L}^{R} β(k; qτ) u_i A^k,
where u_i A^k is calculated by iterative vector-matrix multiplications, u_i is the i-th row unity vector, and β(k; qτ) is the k-th Poisson probability in qτ, which can be calculated iteratively by
β(0; qτ) = e^{-qτ},   β(k; qτ) = β(k-1; qτ) qτ / k.
The truncation points L and R of the summation can be estimated for a given error tolerance [6].
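A minimal sketch of the iterative computation of the Poisson weights β(k; qτ), together with simple left/right truncation points, is given below. The error handling here is simplified relative to [6], and very large values of qτ would require a scaled (Fox-Glynn style) computation to avoid underflow.

```python
import math

def poisson_weights(q_tau, eps=1e-10):
    """Iteratively computed Poisson probabilities beta(k; q*tau) and
    left/right truncation points covering all but ~eps of the mass."""
    beta = [math.exp(-q_tau)]
    acc, k = beta[0], 0
    while acc < 1.0 - eps:                     # right truncation point R
        k += 1
        beta.append(beta[-1] * q_tau / k)
        acc += beta[-1]
    R = k
    acc, L = 0.0, 0
    while L < R and acc + beta[L] < eps / 2:   # left truncation point L
        acc += beta[L]
        L += 1
    return beta, L, R

weights, L, R = poisson_weights(q_tau=25.0)
print(L, R, sum(weights[L:R + 1]))             # mass kept is close to 1
```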
For the calculation of
L^(τ; m; λ) = ∫_0^τ y^m e^{-λy} e^{Qy} dy,
the power series of the matrix exponential can be substituted and the integration carried out term by term. Applying right truncation leads to a finite weighted sum of the matrix powers A^k, whose scalar weights can be calculated iteratively. A criterion to bound the truncation error can be derived: the entries of each row of the matrix exponential sum to one, so the truncation error of each entry of L^(τ; m; λ) is bounded by ε if the right truncation point R is chosen so that the accumulated weights differ by less than ε from s, where s = ∫_0^τ y^m e^{-λy} dy is the (exact) sum of each row of L^(τ; m; λ) and can be evaluated in closed form.
The computational effort for the calculation of one row
of L (';m;-) depends mainly on the number of vector-matrix
multiplications. If a weighted sum of matrices E (') and
L (';m;-) is required, it is possible to factor out the matrix
powers, thus avoiding repeated vector-matrix multiplica-
tions. The asymptotical complexity for the numerical solution
in case of an expolynomial distribution is therefore of
the same order as in the case of a deterministic firing time.
II. Formulas for marking-dependence through a
scaling factor
In this appendix we derive Equations (16) and (17), presented for SR-SPNs with a transition t whose generally distributed firing time depends on the marking through a scaling factor and whose subordinated process is a CTMC. Define the normalized state probabilities of the subordinated CTMC with respect to the normalized elapsed firing time of t (θ is the absolute and θ̄ the normalized elapsed firing time of t). The matrix of normalized state probabilities is given by the matrix exponential of the scaled generator matrix (this can be shown by the underlying system of differential equations [24]). Define ω^t_{ij} as the probability that the subordinated CTMC is in state j after t fires, given it was in state i initially, and ψ^t_{ij} as the expected holding time in state j up to the firing of t. Integrating by substitution over the firing time distribution of t and substituting the entries of the matrix exponential leads to Equations (16) and (17).
--R
"Performance analysis using stochastic Petri nets"
"Reseaux de Petri stochastiques"
Modeling and Analysis of Communication Protocols Using Numerical Petri Nets
"A class of Generalized Stochastic Petri Nets for the performance evaluation of multiprocessor systems"
"Stochas- tic activity networks: structure, behavior, and application"
"Automated generation and analysis of Markov reward models using Stochastic Reward Nets"
"Extended Stochastic Petri Nets: applications and analysis"
Analysis of large stochastic Petri net models
"On Petri Nets with deterministic and exponentially distributed firing times"
"Timed Petri nets definitions, properties, and applications"
"A Generalized Timed Petri Net model for performance analysis"
"Analysis of Stochastic Petri Nets by the Method of Supplementary Vari- ables"
"Markov regenerative stochastic Petri nets"
"Transient analysis of deterministic and stochastic Petri nets"
"Regenerative stochastic Petri nets"
"Continuity of generalized semi-Markov processes"
"Stochastic Petri net representation of discrete event simulations"
"Discrete Time Stochastic Petri Nets"
"The effect of execution policies on the semantics and analyis of Stochastic Petri Nets"
"Analysis of deterministic and stochastic Petri nets"
"The randomization technique as a modeling tool and solution procedure for transient Markov pro- cesses"
Erhan Çinlar
Analysis of Stochastic Petri Nets with Non- Exponentially Distributed Firing Times
"Modeling discrete event systems with state-dependent deterministic service times"
--TR
A class of generalized stochastic Petri nets for the performance evaluation of multiprocessor systems
nets
Regenerative stochastic Petri nets
A generalized timed petri net model for performance analysis
Stochastic Petri Net Representation of Discrete Event Simulations
The Effect of Execution Policies on the Semantics and Analysis of Stochastic Petri Nets
Analysis of large stochastic petri net models
Analysis of stochastic Petri nets by the method of supplementary variables
Markov regenerative stochastic Petri nets
DSPNexpress
Extended Stochastic Petri Nets
On Petri nets with deterministic and exponentially distributed firing times
Transient Analysis of Deterministic and Stochastic Petri Nets
Stochastic Activity Networks
| PH-type distributions;embedded Markov chain;stochastic Petri nets;stochastic Petri net;semiMarkovian stochastic Petri nets;stochastic process;distributed firing times;simulation;deterministic Petri nets;SPN classes;modeling power;stochastic processes;markov processes;timed Petri nets;state discretization;continuous-time case;generalized timed Petri nets;petri nets;generalized stochastic Petri nets |
631150 | Warm Standby in Hierarchically Structured Process-Control Programs. | We classify standby redundancy design space in process-control programs into the following three categories: cold standby, warm standby, and hot standby. Design parameters of warm standby are identified and the reliability of a system using warm standby is evaluated and compared with that of hot standby. Our analysis indicates that the warm standby scheme is particularly suitable for long-lived unmaintainable systems, especially those operating in harsh environments where burst hardware failures are possible. The feasibility of warm standby is demonstrated with a simulated chemical batch reactor system. | Introduction
Process-control programs, such as those for controlling
manufacturing systems, can often be organized in a multi-level
hierarchical control structure where higher level processes
formulate long-term control strategies, e.g., optimizing
resource management, whereas lower level processes
perform real-time control functions [1,13]. The long-term
nature of the decisions made by non real-time upper level
processes means that the system may be able to tolerate
temporary loss of such processes, e.g., by using suboptimal
strategies. However, the loss of critical real-time processes
can disrupt the whole system. This suggests that different
fault tolerance techniques should be adopted for upper and
lower level processes since their reliability requirements are
quite different.
The standby replacement approach [2,6,7,12] is an economical
and efficient way of achieving fault tolerance at
reasonable cost for processes in the control hierarchy. In
one scheme, termed "cold standby," only one copy of each
process is active at a time, and each copy is allocated to
a processor that is designed to be fail-stop [11], i.e., able
to detect if it contains a fault during the normal course
of operation by using redundant hardware such as a self-checking
circuit [8,10]. When a processor is faulty, processes
which reside on the failed processor are assigned to
a spare processor or other functional processors if no spare
processors are available. The main disadvantage of this
"cold standby" scheme is its long recovery time to load
and restart a backup copy. This problem is overcome by
This work was supported in part by the National Science Foundation
under Grant CCR-9110816 and the US Nuclear Regulatory Commission
under award NRC-04-92-090. The opinions, findings, conclusions, and
recommendations expressed herein are the authors' and do not necessarily
reflect the views of the NRC.
I.R. Chen is with Department of Computer and Information Science,
University of Mississippi, Weir 302, University, MS 38677.
F.B. Bastani is with Department of Computer Science, University of
Houston, Houston,
using the "hot standby" scheme where two or more copies
are allowed to run at the same time on different fail-stop
processors, with one copy serving as the primary and the
others serving as active backups. When the primary copy
fails (due to the failure of the processor on which it resides),
a backup copy running on another processor can take over
instantaneously without any recovery time delay. Howev-
er, this "hot standby" approach requires up to date copies
of a process and may not be cost-effective for upper level
processes which do not require instantaneous recovery.
This paper develops a warm standby scheme in which
the copies of a process may be partial copies instead of
full copies as in the "hot standby" scheme. In the design
space of replication, we envision that (a) for cold standby,
there is only a single active copy of the process and, hence,
there are no other active copies; (b) for hot standby, there
are multiple active, full copies of a process; (c) for warm
standby, there are also multiple active copies of a process,
some of which are partial copies. Warm standby is suitable
for upper level processes because it incurs medium cost
and moderate recovery time delay as compared with other
standby schemes, although it potentially can also be used
for lower level processes that are less time-critical.
The rest of the paper is organized as follows. Section II
defines the meaning of a warm standby copy in a hierarchically
structured system as opposed to a hot standby copy,
and identifies the design parameters of the warm standby
scheme. Section III presents a reliability analysis of warm
standby and a simulation evaluation using a simulated
chemical batch reactor. Finally, Section IV concludes the
paper and outlines some future research areas.
II. Definition of Warm Standby Copies
We first define our fault model. We assume that if a processor
fails then its failure is detected by redundant hardware
and it ceases operation. In no cases does a machine
behave unexpectedly. This assumption can be satisfied using
techniques based on fail-stop processors [11].
As an example of warm standby copies, consider a part
of a process-control system where a temperature profile is
controlled by a control process according to a prescribed
optimal time-temperature curve. This control process monitors
the temperature sensor input and calculates the actuator
output for effecting temperature changes (with a goal
of minimizing the mean square error between the actual
temperature profile and the optimal temperature profile).
To tolerate possible failure of the control process, a stand-by
copy is created in another computer. The standby copy
Figure
1. Different Detailed View of A Temperature Profile
Figure
2. An Abstract Hierarchy.
can be implemented in three ways: (a) it has the same view
of the control information as that of the primary copy and
the frequency of receiving the temperature sensor input is
the same as that of the primary copy, (b) it only has a
partial view of the control information and, hence, the frequency
of receiving the sensor input is less than that of the
primary copy, and (c) it does not have any view of the control
information and, hence, the frequency of receiving the
sensor input is zero. These three implementations correspond
to the hot standby, warm standby, and cold standby
schemes, respectively. Figure 1 illustrates the sensor input
temperature profile as perceived by the standby copy
using hot, warm, and cold standby schemes, respectively.
In effect, the sensor input temperature profile is viewed at
different levels of detail, ranging from the most detailed
one corresponding to the use of maximum sensor sampling
frequency, to the least detailed one corresponding to the
use of minimum sensor sampling frequency.
There are two implications in this example which must
be pointed out. First, a warm standby copy, although possessing
only a partial view of the sensor input temperature
profile, still has a useful and summarized view of the temperature
profile, e.g., it may still know what the maximum
temperature is, when it was attained, etc. This allows a
warm standby to immediately take charge using its summarized
information without having to start from scratch
as in the cold standby scheme. Second, the amount of
processing power to create and maintain a standby copy
is proportional to the sampling frequency and, hence, a
warm standby copy will not consume as much processing
power as a hot standby copy since the sampling rate is low-
er. This means that for the same hardware cost a higher
degree of replication may be achieved by using warm standby
instead of hot standby copies. This higher degree
of replication for the same hardware cost can provide the
system with a better reliability, particularly for long-lived
unmaintainable systems, and those operating in harsh environments
where burst hardware failures are possible, because
now a process has more copies to tolerate multiple
processor failures. A reliability analysis will be performed
later in Section 3 to illustrate this point.
In the following, we give a more formal definition of a
primary or standby copy of a control process, and its interaction
with other control processes in a hierarchically
structured system. The definitions are illustrated using
the system shown in Figure 2. It consists of 4 application
processes, a; b; c, and d.
2.1 Seniority Function
A copy of a process may impose only a partial load on
a processor depending on a design parameter called the
seniority of that copy. Formally, let A denote the set of
processes in the hierarchical control system (e.g., a, b, c,
and d in Figure 2) and let P denote the set of available
processors. Then, let
π: A → A ∪ {∅} be the parent function (e.g., π(b) = a in Figure 2) for the hierarchical structure;
l: A → R+ be the load function (e.g., instr/sec) of processes in A;
c: P → R+ be the capacity (e.g., instr/sec) of processors in P.
The seniority function σ: A × P → [0, 1] allocates copies of processes to processors. Thus, if a copy of a process a's seniority σ(a, p) is 1, then it means that the copy is either a primary copy or a hot standby copy and it runs at its full load, l(a), on processor p, while σ(a, p) = 0 indicates that processor p does not execute a at all. A value between 0 and 1 means that this copy of process a is a warm standby copy and imposes σ(a, p)·l(a) load on processor p. The allocation must satisfy the following constraint: for every p ∈ P, Σ_{a∈A} σ(a, p)·l(a) ≤ c(p).
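This capacity constraint can be checked mechanically. The following sketch assumes a seniority assignment given as a dictionary; the process and processor names and the particular allocation are illustrative only.

```python
def feasible(sigma, load, capacity):
    """True iff, on every processor p, sum over a of sigma[a, p] * load[a]
    does not exceed capacity[p]."""
    used = {p: 0.0 for p in capacity}
    for (a, p), s in sigma.items():
        used[p] += s * load[a]
    return all(used[p] <= capacity[p] + 1e-9 for p in capacity)

# Illustrative allocation: a and b each with one full and two half copies.
load = {"a": 1.0, "b": 1.0}
capacity = {"p1": 1.0, "p2": 1.0, "p3": 1.0, "p4": 1.0}
sigma = {("a", "p1"): 1.0, ("a", "p2"): 0.5, ("a", "p3"): 0.5,
         ("b", "p2"): 0.5, ("b", "p3"): 0.5, ("b", "p4"): 1.0}
print(feasible(sigma, load, capacity))   # -> True
```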
2.2 Logical Communication Link
In a conventional hierarchical structure, for each par-
ent/child process-pair, we have one parent-to-child logical
communication link (for sending control instructions) and
one child-to-parent logical communication link (for transmitting
status information). In the hierarchical structure
with warm standby copies, similar logical communication
links are used. However, the logical communication links
need not be of the same capacity. Formally, let allocation(a) = {p ∈ P : σ(a, p) > 0} and primary(a) = {p ∈ P : σ(a, p) = max_{q∈P} σ(a, q)}.
In other words, allocation(a) is the set of processors having
at least one copy of a and primary(a) is the set of processors
having the most senior copies of a. There are two sets of
active logical communication links in the hierarchy:
• A parent-to-child link from x to y exists iff there is an a ∈ A such that x ∈ primary(π(a)) and y ∈ allocation(a). The capacity of the link is proportional to σ(a, y).
• A child-to-parent link from x to y exists iff there is an a ∈ A such that x ∈ primary(a) and y ∈ allocation(π(a)). The capacity of the link is proportional to σ(π(a), y).
Notice that the load on the communication subsystem is
the same for both the warm standby and hot standby
schemes since the source of all information is the set of
primary nodes while the destination is the set of allocated
nodes. If the receiver is a full copy (one that is allocated
to a primary node) then it gets complete information,
otherwise it receives only partial information.
III. Evaluation
We first show that the use of warm standby copies instead
of just hot standby copies can enhance the system
reliability. Design conditions under which the above statement
is true are investigated. Then, we present a simulation
evaluation of the warm standby scheme using a case
study.
3.1 Reliability of Partial Replication
As pointed out in Section II, since a warm standby (i.e.,
a partial) copy requires less processing power than a full
copy, more standby copies for the same hardware cost can
be used to tolerate hardware failures, resulting in a system
that is less vulnerable to hardware failure. A direct consequence
of this effect is enhanced reliability. At the same
time, there is no increase in software complexity since all
copies of a process run the same program. Nevertheless,
a design tradeoff associated with the use of warm standby
copies over just hot standby copies is that there exists a
possibility that a warm standby copy may not be able to
deal with a control situation when it takes control. The
period that is required for a partial copy to advance its seniority
to become a full copy when it initially takes charge
is called a vulnerable period, which depends on the copy's
seniority. In the following, we present an analysis that illustrates
conditions under which the warm standby scheme
may be favored over the hot standby scheme.
Consider the case of allocating 2 processes, a and b, to 4 processors, p_1, p_2, p_3, and p_4 (a is the parent of b and thus both are important system functions and cannot fail at any time), where each processor has the processing capability of loading up to one full copy, either a or b. We assume that a processor functions for an exponentially distributed time with rate λ; once it fails it stays down because there is no repair capability in the system. Now, consider the following
two ways of achieving fault tolerance by means of standby
redundancy:
1. Using only hot standby copies, e.g., σ(a, p_1) = σ(a, p_2) = 1 and σ(b, p_3) = σ(b, p_4) = 1. It consists of a series structure of two subsystems, one consisting of p_1 and p_2, each containing a full copy of a, in a parallel structure, and the other consisting of p_3 and p_4, each containing a full copy of b, also in a parallel structure. The reliability of the system is given by r(t) = [1 - (1 - e^{-λt})^2]^2.
2. Using warm standby copies, e.g., one full copy and two partial copies with seniority 0.5 for each of a and b (for instance, σ(a, p_1) = 1, σ(a, p_2) = σ(a, p_3) = 0.5, σ(b, p_2) = σ(b, p_3) = 0.5, σ(b, p_4) = 1). A seniority of 0.5 means that a copy runs at only one-half of its load on the processor it is allocated to. The reliability of this system is bounded from above by the reliability of a 2-out-of-4 system. Let x_i be 1 if p_i is alive and let it be 0 if p_i has failed, for 1 ≤ i ≤ 4, and let x̄_i denote the complement of x_i. The structure function for the system [3] is then that of a 2-out-of-4 system, and from this the upper bound on the reliability of the warm standby system is given by:
r(t)|_{upper bound} = 1 - (1 - e^{-λt})^4 - 4 e^{-λt} (1 - e^{-λt})^3,
which is better than the reliability of the hot standby
system. A corresponding lower bound on the reliability, r(t)|_{lower bound}, can also be derived.
The reason the reliability is less than the upper bound
is because of the probability of a faulty control decision
while the warm standby is in the process of gathering
sufficient information to become the primary controller
after the primary fails. We have developed a
detailed reliability model [5] that assumes that when
a partial copy of process a or process b takes over,
it takes an exponentially distributed time (this is the vulnerable period) with rate μ_a or μ_b, respectively, to become a primary copy. Moreover, it models the fact that during this vulnerable period, there is a software failure rate, φ, representing the rate at which a partial copy fails to deal with a control task when it takes over.
Detailed calculations [5] show that the reliability of the system using warm standby copies is better than that using just hot standby copies as φ (the software failure rate of a partial copy) decreases and as μ (the recovery rate of a partial copy, with μ_a = μ_b = μ) increases. One observation is that when φ is small relative to the other rates, the reliability of the warm standby system is always better than that of the hot standby system.
Figure 3 compares the reliability of these two systems with all parameters varying proportionately. When φ is comparable in magnitude to λ, the warm standby system can provide a better reliability than the hot standby system as the underlying hardware becomes more unreliable. An explanation of this is that state transitions that could lead to system failure in the warm standby system are mostly due to φ rather than λ, and the probability that a state transition leads to system failure in the warm standby system is less than that of the hot standby system, since there are more states in the warm standby system. Consequently, the reliability of the warm standby system declines to a lesser extent than that of the hot standby system as λ increases, since increasing λ only increases φ by the same order of magnitude. Conversely, when φ is an order of magnitude higher than λ (e.g., φ = 10λ), the warm standby system will suffer more from increasing λ, since this increases φ by an extra order of magnitude (10 times) and the probability of state transitions that can lead to system failure for the warm standby system is greatly increased. In summary, we conclude that the warm standby scheme is most favorable when φ is of the same order of magnitude as λ, and this favorable situation is most likely when the underlying hardware is unreliable and/or the recovery rate (μ) is high.
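The two closed-form expressions above (the hot standby reliability and the 2-out-of-4 upper bound for warm standby) can be compared numerically with a few lines of code; the λt values below are arbitrary, and the exact warm standby reliability lies between the bounds, depending on the vulnerable-period parameters φ and μ.

```python
import math

def r_hot(t, lam=1.0):
    """Hot standby: two parallel pairs in series, components Exp(lam)."""
    pair = 1.0 - (1.0 - math.exp(-lam * t)) ** 2
    return pair ** 2

def r_warm_upper(t, lam=1.0):
    """Warm standby upper bound: a 2-out-of-4 system of Exp(lam) components."""
    p = math.exp(-lam * t)
    q = 1.0 - p
    return 1.0 - q ** 4 - 4.0 * p * q ** 3

for t in (0.1, 0.5, 1.0, 2.0):           # arbitrary values of lambda*t
    print(f"lambda*t = {t:4.1f}:  hot = {r_hot(t):.4f}  "
          f"warm (upper bound) = {r_warm_upper(t):.4f}")
```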
3.2 Simulation Evaluation
Figure
3. Reliabilities of Hot Standby and Warm Standby
with Lower and Upper Bounds.
Figure
4. A Batch Reactor.
In this section, we first develop a process-control program
for a simulated experimental chemical batch reactor
system to illustrate the warm standby technique in prac-
tice. Then, we present the simulation results and analyze
the effect of various parameters on the reliability of the
recovery procedure. In this case study, the physical environment
of the batch reactor in which the control processes
are embedded is simulated; however, the control processes
are completely implemented, instead of being simulated,
and operate in real-time. The environment simulator sends
sensor data in every 4t interval to the control processes;
when it receives control actions in response to a sensor
event (e.g., opening a fraction of a steam valve) from the
control processes, the simulator updates the state of the
environment (e.g., temperature and pressure) to that at
t +4t based on the state at t, and advances its simulation
clock to t +4t.
3.2.1 A Chemical Batch Reactor System
Consider the batch reactor sketched in Figure 4 where
first-order consecutive reactions take place in the reactor
as time proceeds. Reactant A (with a corresponding solution
concentration CA ) is initially charged into the vessel.
Steam is fed into the jacket to bring the reactor up to a temperature
at which the consecutive reactions begin. Cooling
water is later added to the jacket to remove the exothermic
heat of reactions. The product that is desired is component
(with a solution concentration CB ). If the control
process lets the reaction go on for too long, too much B will
react to form compound C (with a solution concentration
CC ) and consequently the yield of B will be low. On the
other hand, if the control process stops the reaction too
early, too little A will have reacted and the conversion and
yield of B will again be low. Therefore, the control process
needs to control the batch reaction to follow a specific temperature
profile (i.e., time vs temperature profile) in order
to optimize the yield. The actual temperature is adjusted
by a controller which controls two split-range valves, a
steam valve and a water valve. The fraction of the steam
valve which is open, X s , and the fraction of the water valve
which is open, Xw , are determined by an output signal, P c ,
produced by the temperature actuator. The steam valve is wide open when P_c is at its upper limit and is closed when P_c ≤ 9, while the water valve is closed when P_c ≥ 9 and wide open when P_c = 3. Hence, the control process needs to communicate
closely with the temperature controller to properly adjust
the temperature in the reactor.
Figure
5 shows an instance of the optimum temperature
and concentration profiles with Tmax representing the maximum
temperature and C j representing the concentration
of A, B, or C.
Figure 5. Batch Profiles.
Figure 6. A Control Hierarchy for the Batch Reactor System.
If the reaction runs longer than t_opt, the yield of B decreases. A complete set of equations that describe
the kinetics of the first-order consecutive reactions can be
found in [9].
3.2.2 A Hierarchically Structured Control Program
The control process described above can be implemented as a 4-level control hierarchy (Figure 6):
Level 4: This level controls the inventory of chemicals,
the scheduling of batch reactions, the maintenance of
production level, etc.
Level 3: This level governs the kinetics of different batch
reactions (e.g. formulating optimal temperature pro-
files).
Level 2: This level consists of processes (i.e., master control
processes) each of which controls a batch reaction
using an optimal temperature profile provided by a
level 3 process. The responsibilities of a master control
process include (a) minimizing the mean square error
(MSE) between the actual and the desired reactor
temperature profiles such that (C_{B,desired} - C_{B,actual}) / C_{B,desired} ≤ 3% for the final concentration of product B, and (b)
dynamically formulating desired jacket temperature
profiles one segment at a time to be followed by a
level 1 process. The MSE is defined as
MSE = (1 / t_batch) Σ_{t=1}^{t_batch} |T_desired(t) - T_actual(t)|^2,
where t_batch is the total batch reaction time in minutes (a small computational sketch of this definition follows this list).
Level 1: This level consists of specialized processes (i.e.,
jacket temperature control processes) each of which is
responsible for regulating jacket temperature changes
(by controlling the steam and cooling water valves and
flow rates) such that the prescribed jacket temperature
profile formulated by a level 2 process is followed.
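A small sketch of the MSE computation referred to in the Level 2 description is given below, assuming one temperature sample per minute over the batch; the sample profiles are made up.

```python
def mean_square_error(desired, actual):
    """MSE between the desired and actual reactor temperature profiles,
    assuming one sample per minute over the whole batch (t_batch samples)."""
    assert len(desired) == len(actual) and len(desired) > 0
    return sum((d - a) ** 2 for d, a in zip(desired, actual)) / len(desired)

# Made-up 5-minute profiles (degrees C):
desired = [25.0, 40.0, 55.0, 70.0, 80.0]
actual = [25.0, 38.5, 54.0, 71.0, 79.0]
print(mean_square_error(desired, actual))   # small value -> close tracking
```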
We focus our attention on a level 2 process (the master
control process) and a level 1 process (the jacket temperature
control process) of the control hierarchy. (In a hierarchical
control structure as such, levels 1 and 2 are normally
made fault tolerant because on-line repair is not practical
for lower level real-time controllers.) We assume that a
level 3 process which formulates the desired temperature
profile to be followed by the master control process is allocated
to some processor in the system and there are four processors, p_1, p_2, p_3, and p_4, to which the master and the
jacket processes can be allocated for controlling the batch
reaction. Also, we assume that a level 2 or a level 1 process
will consume a fraction of the processing power of a
processor in a ratio that is equivalent to its seniority. For
example, if a copy's seniority is 1, then it will consume the
full processing power of a processor that it is allocated to.
Figure 7. The Data Structure for Describing A Temperature Profile.
Seniority Function. A level 2 master control process, m, is replicated on three processors, p_1, p_2, and p_3, with seniorities 1.0, 0.6, and 0.2. Only the copy with seniority 1.0 (the primary copy) provides direct control to the batch reactor, with the others serving as partial copies. On the other hand, a level 1 jacket temperature control process, j, is replicated on three processors, p_2, p_3, and p_4, also with seniorities 0.2, 0.6, and 1.0; only the copy with σ(j, p_4) = 1.0 (the primary copy) provides direct control to
the jacket temperature. Recall that a copy consumes a fraction of the processing power of a processor in a ratio that is equivalent to its seniority, so the combined load of the copies of m and j residing on the same processor does not exceed that processor's capacity. Furthermore, when a junior copy becomes a primary copy, other partial copies residing in the same processor will be deprived of their processing power. For example, if the junior copy of m on p_3 advances its seniority due to detection of a failure of the primary copy of m, the junior copy of j on p_3 will be deprived of its processing power from processor p_3. In our implementation, this is achieved by scheduling it to die at the same time when the failure of the primary copy of m occurs.
Knowledge Representation. We use the same knowledge
representation to describe the desired and actual reactor
(or jacket) temperature profiles for all copies of m (or
j).
Figure
7 shows the data structure used to describe a
temperature profile. There are 60/t_s slots which could be filled per minute, where t_s represents the sensor sampling
interval in seconds. The degree to which these slots are
filled is proportional to a copy's seniority. For example,
every nth slot is filled for a copy of a control process with a seniority equal to 1/n. This data structure allows a temperature
profile to be described at different levels of detail,
i.e., as the number of filled slots increases, the temperature
profile is known to a greater detail. Consequently, a copy
with a low seniority will probably have more up-to-date information
about less frequently updated information such
as phase, rate of change of temperature (slope), etc., and
have less up-to-date information about temperature slots.
Note that with this implementation, a primary copy of m
does not send different information to different copies of j
and, hence, broadcast protocols with sampling by copies of j (with a sampling rate proportional to their seniorities) could be used.
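The slot-based profile of Figure 7 can be illustrated as follows; the sketch assumes that a copy with seniority 1/n simply records every n-th sensor sample, and the temperature values are made up.

```python
def record_profile(samples, seniority):
    """Fill a fraction `seniority` of the profile slots: a copy with
    seniority 1/n keeps every n-th sensor sample and leaves the rest empty."""
    n = max(1, round(1.0 / seniority))
    return [temp if i % n == 0 else None for i, temp in enumerate(samples)]

full_view = [20.0 + i for i in range(20)]            # primary copy's samples
partial_view = record_profile(full_view, seniority=0.2)
filled = sum(v is not None for v in partial_view)
print(f"{filled} of {len(full_view)} slots filled")  # -> 4 of 20
```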
Simulating the Batch Reactor Environment. The
control environment in which the master and jacket processes
are embedded is simulated by the following three
processes running on other processors:
1. an environment process which simulates the physical
environment of the batch reactor and the sensor sub-system
2. a channel process which simulates the underlying communication
subsystem, with varying degrees of channel capacities simulated by changing its input queue lengths (this parameter is called C_channel); and
Table 1. Parameters of Batch Reactor Control Program.
Table 2. MSE and CB for Single, Double and Triple Processor Failures.
3. a supervisor control process which simulates a level 3
process.
These three simulated processes, together with the three
copies of the master control process and the three copies of
the jacket temperature control process, communicate with
one another through the channel process.
Fault Recovery Procedure. Process failure detection is
implemented by "are you alive" and "I am alive" messages.
Specifically, if a process does not respond to an "are you
alive" message for more than N broadcast (a program param-
eter) consecutive broadcasting intervals (a broadcasting interval equals a sensor sampling interval in our implementation), then the process is considered dead. The recovery action taken by a junior copy of m upon detection of a failure of its senior counterpart is (a) advancing its seniority from either 0.2 to 0.6, or 0.6 to 1.0, and (b) acquiring more detailed
control information from the supervisor control process
(for the desired temperature profile) and the environment
process (for the sensor data) in a frequency that is proportional
to its new seniority. If the new seniority is 1.0,
the junior copy takes charge immediately while gradually
acquiring information from the environment. The recovery
action taken by a junior copy of j is the same except that
the parent process from which it acquires the desired jacket
temperature profile is the primary copy of m.
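The N_broadcast rule can be captured by a small bookkeeping class; this is an illustration of the detection logic only, not the actual message-passing implementation, and the peer names and threshold are made up.

```python
class FailureDetector:
    """Counts consecutive unanswered 'are you alive' probes; a peer missing
    more than n_broadcast consecutive replies is declared dead."""
    def __init__(self, peers, n_broadcast):
        self.n = n_broadcast
        self.missed = {p: 0 for p in peers}

    def on_broadcast(self, replied):
        dead = []
        for p in self.missed:
            self.missed[p] = 0 if p in replied else self.missed[p] + 1
            if self.missed[p] > self.n:
                dead.append(p)
        return dead

fd = FailureDetector(["m@p1", "m@p2", "j@p3"], n_broadcast=3)
for step in range(5):                                  # m@p2 never replies
    print(step, fd.on_broadcast(replied={"m@p1", "j@p3"}))
```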
3.2.3 Simulation Results
The parameters of our control program are shown in Table
1.
Table
2 compares the cases of single, double, and
triple processor failures in terms of the MSE (mean square
error) between the actual and optimal temperature profiles
and the final yield of CB . The time of processor failure is
chosen to be at the most critical moment of the batch reac-
tion, namely, at the time when a phase change occurs (at
the 14th minute mark).
From Table 2, we see that when all the copies of m or of j fail (for example, all copies of the j process fail when the processors to which j is allocated fail), the batch reaction goes on unattended and the final yield of C_B becomes quite low (in fact it is equal to zero) because too much B is consumed. On the other hand, for other cases (even for the case of a triple failure in which a single partial copy still survives), the yield of C_B is quite good. This
is partly due to the fact that the batch reactor is intrinsically
a set-point based reactor system and thus a temporary
out-of-sync between the control and environment processes
can be tolerated in a short period of time without causing
catastrophic damages. However, it also points out the importance
of using partial copies to provide fault tolerance
- even a partial copy with seniority equal to 0.2 can make
a significant difference. Our other experimental results [4]
show that in all cases when at least one partial copy survives
failures (for both the level 1 and level 2 processes),
the yield of C_B is good and the MSE remains small.
Comparison of the simulation results with the case of
using full copies is obvious. In the latter case, p 1 and p 2
may be allocated for the master control process, and p_3 and p_4 may be allocated for the jacket control process. Hence,
when a critical double failure occurs, namely, (p 1 ,p 2 ) or
(p 3 ,p 4 ), the batch reaction will go on unattended. This is
in contrast with the results for warm standby where we observe
that when a double failure occurs, the batch reaction
is always under proper control. Of particular interest is
the simulation result as compared to the case of using cold
standby processes. In the former case, there is no disruption
of continuity of control when a warm standby process
takes over whereas in the latter case a cold standby process
would require a loading and restart period (e.g., to
load the temperature history logged in some stable storage
before restart) and during that period the system is left
uncontrolled.
IV. Summary
In this paper, we have developed a fault-tolerant technique
that can be used in a variety of process-control sys-
tems. This technique provides good reliability in a cost-effective
way by incorporating the concept of warm stand-
by. Our case study shows that a surviving warm standby
copy with a seniority as low as 0.2 can make a significant
difference in providing continuity of control when failures
occur. Our comparative study of the reliability of partial
and hot standby techniques suggests that warm standby
appears to have its greatest advantage when it is applied
to systems whose underlying hardware is unreliable.
There are several research areas which include (1) developing
a decentralized management methodology for hierarchically
structured process-control programs with warm
standby and analyzing the effects of local and global factors
which influence the distribution of oe per process (influenced
by local factors, e.g., importance of a process) and per processor
(affected by global factors, e.g., load balancing re-
quirements), (2) using a frame-structure-based knowledge
representation technique to facilitate self-learning capability
of control processes, (e.g., via peer-to-peer communications
which can exist among copies of a process), and
extending it to cases where the knowledge base represented
by the frame structure is large, and (3) comparing the
performance of warm standby using unreliable communication
protocols (e.g., datagram services) with hot standby
using reliable protocols based on timeout and retransmission
Acknowledgments
The authors wish to thank the five anonymous reviewers
for their detailed comments which have significantly
improved the quality of this paper.
--R
"Theory and practice of hierarchical control,"
"The STAR (self-testing-and repairing) com- puter: an investigation into the theory and practice of fault tolerant computing,"
Statistical Theory of Reliability and Life Testing
"Reliability of fully and partially replicated systems,"
"Making processing fail-safe,"
"Fault tolerant computer system for the A129 helicopter,"
Design and Analysis of Fault Tolerant Systems
"Evaluation of a self-checking version of the M- C68000 microprocessor,"
"Fail-stop processors: An approach to designing fault tolerant computing systems,"
"Fault-tolerant design of local ESS processors,"
"The development of reliability in industrial control systems"
--TR
Fail-stop processors
Process Modeling, Simulation, and Control for Chemical Engineers
An ai-based architecture of self-stabilizing fault-tolerant distributed process control programs and its analysis | software reliability;process computer control;hierarchically structured process-control programs;software maintenance;burst hardware failures;redundancy;warm standby;process-control programs;simulated chemical batch reactor system;system reliability;long-lived unmaintainable systems;hot standby;cold standby;computer integrated manufacturing;standby redundancy design space;fault tolerant computing |
631158 | Measures of the Potential for Load Sharing in Distributed Computing Systems. | In this paper we are concerned with the problem of determining the potential for load balancing in a distributed computing system. We define a precise measure, called the Number of Sharable Jobs, of this potential in terms of the number of jobs that can usefully be transferred across sites in the system. Properties of this measure are derived, including a general formula for its probability distribution, independent of any particular queuing discipline. A normalized version of the number of sharable jobs, called the Job Sharing Coefficient, is defined. From the general formula, the probability distribution of the number of sharable jobs is computed for three important queuing models and exact expressions are derived in two cases. For queuing models in which an exact expression for the probability distribution of the number of sharable jobs is difficult to obtain, two methods are presented for numerical computation of this distribution. The job sharing coefficient is plotted against traffic intensity for various values of system parameters. Both of these measures are shown to be useful analytic tools for understanding the characteristics of load sharing in distributed systems and can aid in the design of such systsems. | Introduction
In a distributed system, statistical fluctuations in arrival and service patterns across the various
sites cause imbalances in load where one or more sites may be operating much below capacity
and others may simultaneously be overloaded. Livny and Melman [4] showed that in a
distributed system consisting of homogeneous sites, there is a high probability that a site is idle
while jobs are queued up for service at another. Thus improvement in overall system
performance can be achieved by moving jobs from overloaded to underloaded sites, a process
called load-sharing or load-balancing. Over the last several years, a number of load balancing
algorithms has been proposed. Shivaratri, Krueger, and Singhal [8] provide a survey and
taxonomy of load sharing algorithms. Recently, Rommel [6] extended the Livny-Melman
result for a generalized definition of overloaded and underloaded sites and computed the
probability that at least one site is overloaded while at least one other site is underloaded, for
several service time distributions. This probability is called the Probability of Load Balancing
Success (PLBS).
However, these results do not quantify the potential to which load distribution is possible in a
distributed system because the PLBS does not give any indication of the amount of
simultaneous overload and underload present in the system. For example, Rommel's results are
concerned only with the probability of at least one site being overloaded while at least one
other is underloaded. They do not offer insight into the average number of jobs that can be
profitably transferred by a load sharing algorithm. Such information is clearly useful to
system designers, enabling them to predict accurately the potential number of jobs that can be
transferred to improve overall system performance.
In this paper, we quantify the potential for load distribution in distributed systems in terms of
the number of jobs that can potentially be transferred to balance the load across all sites. We
do this by deriving a general formula for the probability distribution of the number of jobs
which can be usefully transferred across sites in a distributed system. The mean of this
random variable is computed as a function of system load and plotted for a number of queuing
models. It is shown that Rommel's results constitute a special case of this general distribution.
A normalized measure for the potential for load sharing is defined and shown to provide useful
insights in determining the potential for load sharing.
2. Model
We consider a homogeneous distributed computing system consisting of N independent CPUs, each with local memory. Sites, denoted S_i, 1 ≤ i ≤ N, communicate with each other by message passing over a communications network. Jobs arrive at each site from the outside world in independent streams, are processed, and sent out. Each site can be modeled as a queuing system, such as M/M/1, M/D/1, etc. For the moment, we do not assume any particular queuing model. The job arrival rate is denoted as λ and the rate at which each CPU processes jobs is denoted as μ. The ratio λ/μ is denoted ρ and is called the traffic intensity. The queue size at each node is a continuous-time stochastic process denoted as {Q_i(t), t ≥ 0}, where 1 ≤ i ≤ N. For clarity, we shall drop the time variable and refer to the instantaneous random variable in the following.
Due to statistical fluctuations in arrival and service times, Q i varies over time between 0 and
an arbitrarily large number. If Q i is large at a given time then the site S i clearly is faced with
a large backlog of work. On the other hand if Q i is small, or zero, then S i is lightly loaded.
These notions are quantified by selecting two integers L and H, 0 < L < H, and by the
following definitions. The integers L and H are system parameters set by system designers
based upon the speed of the CPUs comprising the distributed system, as well as anticipated job
arrival patterns.
2.1 Definitions
Definition 1: A site S i is underloaded , denoted as UL, if Q i < L.
Definition 2: A site S_i is overloaded, denoted as OL, if Q_i > H.
Definition 3: A site S_i is normal, denoted as NL, if L ≤ Q_i ≤ H.
Definition 4: The probability of underload, denoted P_UL, which is the same for all i, 1 ≤ i ≤ N (due to the
assumption of homogeneity of sites), is the probability that a site is underloaded: P_UL = P{Q_i < L}.
Definition 5: The probability of overload, denoted P_OL, which is the same for all i, 1 ≤ i ≤ N (due to the
assumption of homogeneity of sites), is the probability that a site is overloaded: P_OL = P{Q_i > H}.
Definition 6: For 1 ≤ i ≤ N, the amount of underload at a site is the random variable defined
as A_i = L - Q_i if Q_i < L, and A_i = 0 otherwise.
Definition 7: For 1 ≤ i ≤ N, the amount of overload at a site is the random variable defined as B_i = Q_i - H if Q_i > H, and B_i = 0 otherwise.
A_i and B_i can be interpreted as the extent of underload and overload, respectively, at the site
S_i. A_i is the number of jobs that S_i can receive before becoming normal, i.e., ineligible for
receiving transferred jobs. Similarly, B_i represents the number of jobs that must be transferred
from S_i to make it attain normality. Clearly, the sum of A_i over all i, 1 ≤ i ≤ N, is the total
amount of underload in the system, and the sum of B_i over all i, 1 ≤ i ≤ N, represents the total
amount of overload in the system. These two sums are defined below.
Definition 8: The total underload in the system is the random variable T_UL = A_1 + A_2 + ... + A_N.
Definition 9: The total overload in the system is the random variable T_OL = B_1 + B_2 + ... + B_N.
Definition 10: The number of underloaded sites in the system, denoted as N_UL, is defined as the number of sites S_i with Q_i < L.
Definition 11: The number of overloaded sites in the system, denoted as N_OL, is defined as the number of sites S_i with Q_i > H.
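As a concrete illustration of Definitions 6 through 11, the following small Python sketch (the function name and structure are ours, not the paper's) computes the per-site underload and overload amounts and the four aggregate quantities from one snapshot of queue sizes.

    def site_measures(Q, L, H):
        """Definitions 6-11 for one snapshot of queue sizes Q = [Q_1, ..., Q_N]."""
        A = [max(L - q, 0) for q in Q]      # amount of underload at each site
        B = [max(q - H, 0) for q in Q]      # amount of overload at each site
        T_UL, T_OL = sum(A), sum(B)         # total underload / total overload
        N_UL = sum(q < L for q in Q)        # number of underloaded sites
        N_OL = sum(q > H for q in Q)        # number of overloaded sites
        return T_UL, T_OL, N_UL, N_OL

For example, site_measures([0, 1, 7], L=2, H=4) returns (3, 3, 2, 1).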
2.2 A Measure of the Potential for Load Sharing
It is natural to attempt optimization of overall system performance by transferring jobs from
overloaded sites to underloaded ones. Normal and overloaded sites are not candidates to
receive transferred jobs. If site S i is overloaded, then to completely alleviate the overload, the
number of jobs that must be transferred out of S_i is B_i = Q_i - H. This is the number of jobs that
can potentially be transferred from S i to other sites in the system. However, only sites which
are currently underloaded can accept these jobs. If S k is an underloaded site, then it can accept
at most L - Q k jobs since accepting more would make it a normal (or overloaded) site. Thus, it
is quite possible that the total number of jobs that can feasibly be transferred from S i is not
the same as Q_i - H.
Extrapolating the above reasoning from a single site to the entire system we see that the total
number of jobs that can be transferred, when the system has at least one overloaded site, is a
random variable whose value depends upon the number of underloaded sites at that time, as
well as the queue sizes at these sites. In the following, we shall give a rigorous definition of
this random variable, denoted G, and derive a number of its properties. It is worth noting that
our analysis is independent of any particular load sharing algorithm and the method below can
be extended to general definitions of overload, underload, and normality. The only major
assumption in our model, apart from homogeneity, is that a site can be in only one state -
underloaded, overloaded, or normal - at any one time.
Definition 12: The number of sharable jobs in the system is the random variable G = min(T_UL, T_OL).
The expectation of the number of sharable jobs will be denoted EG.
From Definitions 6 - 12 it is clear that the Number of Sharable Jobs is a function of N, L, H, r,
and the queuing model of each of the computing sites comprising the distributed system. A
related measure of the potential for job transfer is the following.
Definition 13: Let EQ denote the expectation of the queue size random variable at each of the
sites in the distributed system. The job sharing coefficient of the system, denoted J_c, is the
constant J_c = EG / (N * EQ).
The job sharing coefficient is a constant which measures the mean number of sharable jobs as a
fraction of the mean number of jobs in the system. Since the number of sharable jobs is always
less than the total number of jobs in the system, it follows that 0 ≤ J_c ≤ 1. J_c is a function of N,
L, H, r, and the queuing model of a site in the system.
J c is computed as follows: First obtain EQ from the probability distribution of the queue size
in the queuing model at each site. Second, compute the distribution of G using methods
described in the following. Third, calculate EG from the distribution of G. Finally compute
the ratio in Definition 13. For most of the common queuing models, EQ is already available in
the literature. Thus, the hard part in calculating J c will usually be computing the distribution of
G and subsequently EG. Therefore, in the rest of this paper we study G in detail and derive its
important properties, including a general formula for its probability distribution.
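Before developing exact expressions, the four-step procedure above can be illustrated (and later cross-checked) by simulation. The Python sketch below is our own construction, not part of the paper: it draws N independent queue sizes from a given pmf, forms G = min(T_UL, T_OL) as in Definition 12, and estimates EG and J_c. Its accuracy depends on the number of trials and on how far the pmf is truncated.

    import numpy as np

    def estimate_EG_Jc(queue_pmf, N, L, H, trials=200_000, seed=0):
        """Monte Carlo estimate of EG and J_c; queue_pmf[n] is taken to be
        P{Q = n}, truncated at some large n and renormalized."""
        rng = np.random.default_rng(seed)
        p = np.asarray(queue_pmf, dtype=float)
        p = p / p.sum()
        Q = rng.choice(np.arange(len(p)), size=(trials, N), p=p)  # iid queue sizes
        T_UL = np.clip(L - Q, 0, None).sum(axis=1)   # total underload per trial
        T_OL = np.clip(Q - H, 0, None).sum(axis=1)   # total overload per trial
        G = np.minimum(T_UL, T_OL)                   # number of sharable jobs
        EG, EQ = G.mean(), Q.mean()
        return EG, EG / (N * EQ)                     # (EG, J_c) per Definition 13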
2.3 Interpretations
Both EG and J c have useful interpretations. For systems designers, J c provides a normalized
basis for comparing one system to another, or for comparing the same system operating under
different values of parameters such as N, L, etc. Using the methods described in the rest of this
paper, systems designers can compute J c for different values of these parameters as well as for
different queuing models and select the most suitable configuration with respect to their needs.
Thus J c can be used as a design tool. Large values of J c indicate a large potential for load
sharing.
Once a set of systems parameters has been decided upon, the implementers of the system can
use EG to predict the mean number of jobs in the system due to load sharing. EG can also be
used to estimate the amount of message passing and data transfer that the communications
network would be called upon to handle in order to support load sharing.
We note also that G is an upper bound on the number of jobs that can be shared among the sites
in the system. This implies an upper bound on the improvement in system performance by
load sharing at any given time. EG is the mean of the maximum number of sharable jobs. On
the average, the best improvement in system performance due to load sharing is bounded by
functions of EG depending on the performance measure in question. These bounds are
currently being researched by the authors.
3. Analysis
We begin by stating a number of observations.
Observation 1: At any given time, a site can only be in one state, normal, underloaded, or
overloaded.
This follows from 0 < L < H and Definition 1 - Definition 3.
Observation 2: The pair of random variables (N_UL, N_OL) is distributed according to a
trinomial distribution with parameters (N, P_UL, P_OL).
This is obtained from Observation 1, Definitions 9 - 12, and the independence of the
sites.
Observation 3: For any i, 1 ≤ i ≤ N, A_i > 0 => B_i = 0.
Observation 4: For any i, 1 ≤ i ≤ N, B_i > 0 => A_i = 0.
Observations 3 and 4 follow from 0 < L < H, and Definitions 6 and 7.
Observation 5: If all sites are normal or underloaded then G = 0.
Observation 6: If all sites are normal or overloaded then G = 0.
If all sites are normal, then for all i, 1 ≤ i ≤ N, A_i = B_i = 0. This means that T_UL = T_OL = 0
by Definitions 8 and 9. Hence, from Definition 12 it follows that G = 0.
If some sites are normal and the others are underloaded then T_OL = 0, and hence G = 0.
Similarly, if some sites are normal and all the others are overloaded,
then T_UL = 0 and again G = 0.
Observation 7: G > 0 if and only if there exists at least one underloaded site and one
overloaded site simultaneously, i.e., if and only if NUL > 0 and N OL > 0.
Observation 7 follows from Definition 12 which says that G > 0 if and only if both T UL
> 0 and TOL > 0. This can happen if and only if NUL > 0 and N OL > 0.
Observation 8: 0 ≤ G ≤ (N-1)L.
To see this we note that from Definition 6, 0 ≤ A_i ≤ L for all i, 1 ≤ i ≤ N. From
Definition 8, T_UL ≤ N*L; the event {T_UL = N*L} would occur if and only if Q_i = 0 for all i, 1 ≤ i ≤
N. But from Observation 7 we must have Q_i > H for at least one i to have G > 0.
Hence G attains its maximum when all sites except one have queue sizes equal to 0 and the
queue size at the single overloaded site is at least (N-1)L + H.
Observation 9: The event {G = (N-1)L} is the same as the event {G ≥ (N-1)L}.
This is clear from Observation 8 above.
From these observations, we see that if there are no underloaded sites then all sites are either
overloaded or normal and it is useless to attempt job transfer. In this case G = 0. On the other
hand, if there are no overloaded sites then all sites are either normal or underloaded so job
transfer is unnecessary. In this case also, we have G = 0. G represents the maximum
number of jobs that can be usefully transferred in the system. There is no point in trying to
transfer a greater number of jobs than G because there will not be any underloaded sites
available to accept these excess jobs.
In [6] Rommel computes the probability that G > 0, and calls it the Probability of Load
Balancing Success(PLBS). Here we shall derive a general expression for the distribution of
G. This expression gives much better insight into the potential for load balancing.
Theorem 1
The probability distribution of G is given by the following expressions.
a) P{G = 0} = (1 - P_UL)^N + (1 - P_OL)^N - (1 - P_UL - P_OL)^N.
b) For 1 ≤ k ≤ (N-1)L,
P{G ≥ k} = sum over I = 1..N-1 and J = 1..N-I of
  [ N! / (I! J! (N-I-J)!) ] * P_UL^I * P_OL^J * (1 - P_UL - P_OL)^(N-I-J)
  * P{R_1 + ... + R_I ≥ k} * P{S_1 + ... + S_J ≥ k},
where R_i and S_j are random variables defined as follows.
For all i, 1 ≤ i ≤ N, R_i is the conditional random variable (L - Q_i) | {Q_i < L}, and
S_i is the conditional random variable (Q_i - H) | {Q_i > H}.
Proof:
From Observation 7 we see that {G = 0} = {N_UL = 0} ∪ {N_OL = 0}. It follows that
P{G = 0} = P{N_UL = 0} + P{N_OL = 0} - P{N_UL = 0 and N_OL = 0}.
From the definitions of P_UL, P_OL, N_UL, and N_OL, and the independence of the sites, we obtain
P{N_UL = 0} = (1 - P_UL)^N, P{N_OL = 0} = (1 - P_OL)^N, and P{N_UL = 0 and N_OL = 0} = (1 - P_UL - P_OL)^N, which yields the expression in part (a).
b) From Definition 12, for 1 ≤ k ≤ (N-1)L, we obtain the following expression, using the total
probability law and Observation 7:
P{G ≥ k} = P{T_UL ≥ k and T_OL ≥ k}
         = sum over I = 1..N-1 and J = 1..N-I of P{T_UL ≥ k and T_OL ≥ k and (N_UL = I, N_OL = J)}.   (1)
For fixed I and J, using Definition 6 and Definition 7, we can rewrite the summand above as
P{ (sum of the A_i over the I underloaded sites) ≥ k, (sum of the B_j over the J overloaded sites) ≥ k,
   and (I sites are UL, J sites are OL) }.
Now, there are N! / (I! J! (N - I - J)!) ways, i.e., combinations of sites, in which the event that I
sites are underloaded while J are overloaded can happen. Because the sites are homogeneous,
these are equiprobable. In each such combination we have three subsets of the N sites: the
underloaded subset, the overloaded subset, and the normal subset. By Observation 1 these
three subsets are disjoint and form a partition of all the sites. The sum of the A's in the first set is
over exactly I sites, while the sum of the B's in the second set is over exactly J sites. The
queues across sites are stochastically independent and identically distributed. Hence, using
Observation 2 we can write the above probability as
[ N! / (I! J! (N - I - J)!) ] * P{A_1 + ... + A_I ≥ k and Q_i < L for 1 ≤ i ≤ I}
  * P{B_1 + ... + B_J ≥ k and Q_j > H for 1 ≤ j ≤ J} * (1 - P_UL - P_OL)^(N-I-J).   (2)
Consider the first probability term in (2). On the event {Q_i < L} we have A_i = L - Q_i, so the
event {A_1 + ... + A_I ≥ k} is the event {Q_1 + ... + Q_I ≤ I*L - k}. Because the Q_i are independent,
the joint probability factors, and each factor P{Q_i < L} equals P_UL by definition. Conditioning on
{Q_i < L for 1 ≤ i ≤ I} therefore gives
P{A_1 + ... + A_I ≥ k and Q_i < L for 1 ≤ i ≤ I} = P_UL^I * P{R_1 + ... + R_I ≥ k},   (3)
where R_i is the conditional random variable (L - Q_i) | {Q_i < L}. For 1 ≤ i ≤ I, the R_i are
independent because the Q_i are independent. Since for 1 ≤ x ≤ L the event {R_i = x} is the event
{Q_i = L - x}, the distribution of R_i is
P{R_i = x} = P{Q_i = L - x} / P_UL, for 1 ≤ x ≤ L.
In a similar manner, we obtain
P{B_1 + ... + B_J ≥ k and Q_j > H for 1 ≤ j ≤ J} = P_OL^J * P{S_1 + ... + S_J ≥ k},   (4)
where S_j is the conditional random variable (Q_j - H) | {Q_j > H}, with probability distribution
P{S_j = x} = P{Q_j = H + x} / P_OL, for x ≥ 1.
Finally, substituting (3) and (4) back into (2) and the result back into (1), we obtain the
expression in Theorem 1(b).
The above theorem is completely general in the sense that it makes no assumptions about the
nature of the queues at each of the N sites, i.e., regarding arrival and service time distributions
and service disciplines. The distributions of the conditional random variables R i and S i are
readily obtained from that of Q_i, 1 ≤ i ≤ N. The formula in Theorem 1(b) involves the
probability distributions of sums of the form R_1 + ... + R_I and S_1 + ... + S_J. Each of these is a sum of
independent identically distributed random variables. Hence their distributions can, in
principle, be computed using a number of methods including the well known transform
methods [3]. If exact expressions prove hard to derive, numerical methods can be used to
compute and to invert these transforms.
From Theorem 1, we can obtain the expectation of the random variable G as a finite sum. This
quantity, henceforth denoted by EG, represents the mean number of jobs that can be usefully
transferred around the system. It is a measure of the potential for load sharing or
alternatively, a measure of the wasted capacity in the system. Clearly, its behavior as a
function of system parameters N, L, H, and r is of great interest.
Note that Theorem 1(b) gives the probability distribution of G in terms of P{G ≥ k}, for 1 ≤ k ≤
(N-1)L. This, of course, can easily be converted into the more standard form of P{G = k} using
the following formula: P{G = k} = P{G ≥ k} - P{G ≥ k+1}, with P{G ≥ (N-1)L + 1} = 0.
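Assuming a vector tail with tail[k] = P{G ≥ k}, k = 0..(N-1)L, the conversion to point probabilities and the computation of EG can be written as the following short Python sketch (the function name is ours):

    def pmf_and_EG(tail):
        """tail[k] = P{G >= k}, tail[0] = 1.  Returns P{G = k} and EG."""
        pmf = [tail[k] - (tail[k + 1] if k + 1 < len(tail) else 0.0)
               for k in range(len(tail))]
        EG = sum(tail[1:])          # EG = sum over k >= 1 of P{G >= k}
        return pmf, EG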
4. Computing the Distribution of G
Theorem 1 gives a general formula for computing the distribution of G in terms of expressions
derived from the probability distribution of the queue size at each site. To apply Theorem 1 to
a particular case, i.e., for a given queue size distribution such as M/D/1, M/M/1, etc., the
procedure is as follows.
Step 1: From the probability distribution of the queue size random variable, obtain the
overload and underload probabilities, P UL and P OL .
Step 2: Using P UL and P OL compute the probability distributions of the conditional overload
and conditional underload random variables, R_i and S_i for 1 ≤ i ≤ N.
Step 3: For 1 ≤ I ≤ N, compute the probability distributions of sums of I independent instances
of each of R i and S i .
Step 4: Compute the formulas of Theorem 1(a) and Theorem 1(b) using the results of Steps 1,
2, and 3.
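Under the assumption that the queue-size pmf is available in truncated form, Steps 1 through 4 can be carried out numerically by convolution, which is equivalent to taking powers of the z-transforms used in Method 2 below. The following Python sketch (names, the truncation convention, and the requirement N ≥ 2 are our own) implements Theorem 1:

    import numpy as np
    from math import factorial

    def sharable_jobs_distribution(q, N, L, H):
        """P{G >= k} for k = 0..(N-1)L from a truncated queue-size pmf q, where
        q[n] = P{Q = n}.  q must extend well past H + (N-1)*L so that the
        neglected tail mass is negligible (an assumption, as in Method 2)."""
        q = np.asarray(q, dtype=float)
        P_UL = q[:L].sum()
        P_OL = q[H + 1:].sum()
        P_NL = 1.0 - P_UL - P_OL
        kmax = (N - 1) * L

        # Conditional pmfs: R = (L - Q) | {Q < L},  S = (Q - H) | {Q > H}
        r_pmf = np.zeros(kmax + 1)
        r_pmf[1:L + 1] = q[L - 1::-1] / P_UL          # P{R = x} = P{Q = L-x}/P_UL
        s_pmf = np.zeros(kmax + 1)
        s_len = min(kmax, len(q) - H - 1)
        s_pmf[1:s_len + 1] = q[H + 1:H + 1 + s_len] / P_OL

        def tail_of_sum(pmf, copies):
            # pmf of the sum of `copies` iid variables, then P{sum >= k}
            s = np.array([1.0])
            for _ in range(copies):
                s = np.convolve(s, pmf)[:kmax + 1]    # values above kmax not needed
            cdf = np.cumsum(s)
            return np.array([1.0] + [1.0 - cdf[k - 1] for k in range(1, kmax + 1)])

        tails_R = {I: tail_of_sum(r_pmf, I) for I in range(1, N)}
        tails_S = {J: tail_of_sum(s_pmf, J) for J in range(1, N)}

        PG_ge = np.zeros(kmax + 1)                    # PG_ge[k] = P{G >= k}
        PG_ge[0] = 1.0
        for I in range(1, N):
            for J in range(1, N - I + 1):
                w = (factorial(N) / (factorial(I) * factorial(J) * factorial(N - I - J))
                     * P_UL ** I * P_OL ** J * P_NL ** (N - I - J))
                PG_ge[1:] += w * tails_R[I][1:] * tails_S[J][1:]
        return PG_ge

As a check, 1 - PG_ge[1] should agree with Theorem 1(a), and EG is PG_ge[1:].sum().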
Generally speaking, P UL and P OL can be obtained readily. Indeed, even closed form
expressions may be derivable for many of the common queuing models. Thus the expression in
Theorem 1(a) for P{G= 0} can usually be computed without great difficulty. On the other
hand, an exact expression for P{G ≥ k}, k > 0, in Theorem 1(b) may not always be easy to
derive. This is usually because the distributions of the sums of the overload and the underload
terms may turn out to be exceedingly complex. We now present two computational methods
for dealing with such a situation.
4.1 Method 1
Consider the term due to underloads in Eq (2) above. It can be written as
q(I, k) = P{A_1 + ... + A_I ≥ k and Q_i < L for 1 ≤ i ≤ I}
        = P{Q_1 + ... + Q_I ≤ I*L - k and Q_i < L for 1 ≤ i ≤ I}.
Applying the law of total probability on the random variable Q_I and using the independence of
the queues, we obtain
q(I, k) = sum over x = 0..L-1 of P{Q_I = x} * q(I-1, k - (L - x)).
Thus, we have a recursive relation for q(I, k). This recursion terminates at I = 1, where
q(1, k) = P{Q_1 ≤ L - k and Q_1 < L}; that is, q(1, k) = P{Q_1 ≤ L - k} for 1 ≤ k ≤ L,
q(1, k) = P_UL for k ≤ 0, and q(1, k) = 0 otherwise.
Likewise, we can obtain a recursive relation for the 'overload' term in Eq (2). P{G ≥ k} can
therefore be computed using a recursive technique. However, this method is computationally
expensive for large N because the above recursions branch out at each step.
4.2 Method 2
In the second method, we make use of transforms to compute the probability distributions of
the sums of the conditional random variables S_i and R_i for 1 ≤ i ≤ N. Three transform methods
are commonly available in the context of probability distributions. These are z-transform,
Laplace Transform, and Fourier Transform (also known as the characteristic function) [3].
Here, we shall describe the use of z-transforms. The other transforms can also be used
likewise.
Given a function p_n, defined on the non-negative integers, its z-transform [3] is the function f[z] = sum over n ≥ 0 of p_n z^n.
If p_n is the probability distribution of a discrete non-negative random variable X, then the z-transform
of the sum of I independent instances of X is simply the Ith power of f[z]. If the
random variable X is finite, then f[z] is a polynomial of finite degree. Thus the z-transform of
the sum is also a polynomial of finite degree. The coefficients of this polynomial comprise
the probability distribution of the sum of I independent instances of X.
From its definition (see the statement of Theorem 1) we observe that each R_i is a finite random
variable. The R_i are independent and identically distributed. Hence, the distribution of finite
sums of the R i can be obtained exactly by computing the powers of the z-transform of R i .
The conditional random variable S i is not finite. In this case, to apply the above technique, we
are forced to make it finite by truncating S_i at some integer M, such that P{S_i ≥ M} is small,
say 0.0001. The resulting probabilities for the sums of the S i are approximations, as are the
subsequent computations for P{G ≥ k}. Since queue size distributions tend to be heavy-tailed for
large values of the traffic intensity, r , the approximation tends to be less accurate for large r.
On the other hand, this method is very fast and by increasing the truncation value, M, the
accuracy can be enhanced.
Other transform methods may yield better approximations. This is a subject for further study.
The point, however, is that Theorem 1(b) provides a basis for the computation of the
distribution of G, whether by analytical or numerical methods.
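Because multiplying z-transforms corresponds to convolving pmfs, Method 2 amounts to repeated polynomial multiplication of coefficient vectors. A minimal Python sketch (ours; the cutoff plays the role of the truncation point M above):

    import numpy as np

    def sum_pmf(pmf, copies, cutoff):
        """pmf of the sum of `copies` iid variables, computed by taking the
        `copies`-th power of the z-transform (i.e., repeated convolution);
        `cutoff` truncates an infinite-support variable such as S_i."""
        p = np.asarray(pmf[:cutoff + 1], dtype=float)
        out = np.array([1.0])
        for _ in range(copies):
            out = np.convolve(out, p)[:cutoff + 1]
        return out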
4.3 Boundary Values
Two independent and easily computable 'sanity checks' on numerical computation are
available. Theorem 1(a) provides an independent formula for P{G=0} which must equal
1 - P{G ≥ 1} computed from Theorem 1(b). At the other extreme, the following lemma
provides an independent check for P{G = (N-1)L}.
Lemma 1: Let the random variable Q denote the queue size at a site. Then
P{G = (N-1)L} = N * (P{Q = 0})^(N-1) * P{Q ≥ (N-1)L + H}.
Proof:
From the definition of G, it is clear that the event {G = (N-1)L}, which from Observation 9 is
the same as {G ≥ (N-1)L}, occurs when (N-1) sites have empty queues and one site has queue
size at least (N-1)L + H. There are N ways, i.e., combinations of sites, in which this event can
happen. Due to homogeneity and independence of the sites, each such combination has
probability (P{Q = 0})^(N-1) * P{Q ≥ (N-1)L + H}. Hence the lemma follows.
We now specialize Theorem 1 to a number of important queuing models. These cases illustrate
the computation of the distribution of G, EG, and J c .
5. Special Cases
5.1 M/M/1 Queue
In this case, each site is modeled as an M/M/1 queue in which arrivals to a site form a Markov
process (i.e., a Poisson process) and service times are exponentially distributed. The M/M/1 queue is a
well-known model for queues.
Let random variable Q denote the M/M/1 queue size at a site and r denote the traffic intensity.
For clarity we shall omit subscripts (denoting sites) in the following analysis wherever there is
no confusion. We have [3]
P{Q = n} = (1 - r) r^n, for n ≥ 0.
Using this expression for the queue size distribution, we obtain the following expressions:
P_UL = P{Q < L} = 1 - r^L and P_OL = P{Q > H} = r^(H+1).
To compute the expression in Theorem 1(b), we need to compute the distributions of R, S,
and the distributions of finite independent sums of each of these.
5.1.1 Distribution of S
First consider the term in Theorem 1(b) due to overloads, i.e., the terms involving S. We have
P{S_j = x} = P{Q = H + x} / P_OL = (1 - r) r^(H+x) / r^(H+1) = (1 - r) r^(x-1), for x ≥ 1.
This means that for 1 ≤ j ≤ N, S_j has the same distribution as the random variable Q_j + 1.
Hence,
P{S_1 + ... + S_J ≥ k} = P{Q_1 + ... + Q_J ≥ k - J}.
Since the Q_j above are independent Geometrically distributed random variables each with
parameter r, the sum of J of these is distributed as a Negative Binomial random variable with
parameters (J, r) [5]. Thus, the above probability is given by the following expression:
P{S_1 + ... + S_J ≥ k} = sum over x ≥ max(k - J, 0) of C(x + J - 1, J - 1) (1 - r)^J r^x.
5.1.2 Distribution of R
Now consider the terms in Theorem 1(b) due to underloads, i.e., the terms involving R_i. We
have
P{R_i = x} = P{Q = L - x} / P_UL = (1 - r) r^(L-x) / (1 - r^L), for 1 ≤ x ≤ L.
The z-transform of each R_i can be obtained from the above expression. The z-transform of the
sum of I independent random variables, each distributed as R_i, is the Ith power of the z-transform
of R_i and is given by the following expression:
[ (1 - r) / (1 - r^L) ]^I * z^(IL) * [ 1 + (r/z) + (r/z)^2 + ... + (r/z)^(L-1) ]^I.
The second term in the above expression is the Ith power of a polynomial of degree (L-1) in (r/z). Thus, to
evaluate the above z-transform, we need to know the coefficients of the Ith power of this
polynomial. It is shown in the Appendix that these coefficients, denoted c_j, 0 ≤ j ≤ I(L-1), are
given by
c_j = sum over i = 0..floor(j/L) of (-1)^i C(I, i) C(j - iL + I - 1, I - 1).
Putting all these pieces together and performing algebraic manipulation, we obtain the
following expression in the M/M/1 case. For 1 ≤ k ≤ (N-1)L,
P{G ≥ k} = sum over I = 1..N-1 and J = 1..N-I of [ N! / (I! J! (N-I-J)!) ] * (1 - r)^I *
  [ sum over j = 0..(IL-k) of c_j r^j ] * r^(J(H+1)) * [ 1 - f_J(k - J - 1) ] * ( r^L - r^(H+1) )^(N-I-J),
with c_j = 0 for j > I(L-1) and f_J(x) = 0 for x < 0.
Here, f_J is the cumulative distribution function of the Negative Binomial (J, r) density, i.e.,
f_J(x) = sum over y = 0..x of C(y + J - 1, J - 1) (1 - r)^J r^y, for x ≥ 0.
Finally, to obtain the expectation of G, we observe that since G is a non-negative integer-valued random
variable, its expectation is given by EG = sum over k = 1..(N-1)L of P{G ≥ k}. Thus the above formula
for P{G ≥ k} can be used to compute EG.
5.1.3 Computation of the Job Sharing Coefficient
To compute the Job Sharing Coefficient, J c, we recall [3] that the expectation of the queue size
distribution of the M/M/1 queue is r/(1 - r). For various combinations of values of N, L, and
H, EG was computed as described in Section 5.1.2. J c was then calculated from Definition 13
for each of these combinations and plotted as a function of r in Figures 7, 12, 17, and 22.
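One possible way to reproduce these numbers is to feed the geometric M/M/1 pmf into the generic routine sketched in Section 4 (our hypothetical sharable_jobs_distribution); the closed-form expressions above then serve as a cross-check. A short Python sketch:

    import numpy as np

    def eg_jc_mm1(N, L, H, r, qlen=5000):
        q = (1 - r) * r ** np.arange(qlen)               # M/M/1: P{Q = n} = (1-r) r^n
        PG_ge = sharable_jobs_distribution(q, N, L, H)   # hypothetical helper from Sec. 4
        EG = PG_ge[1:].sum()
        EQ = r / (1 - r)                                 # mean M/M/1 queue size [3]
        return EG, EG / (N * EQ)                         # (EG, J_c) per Definition 13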
5.2 M/D/1 queue
When a site is modeled as an M/D/1 queue, the arrival process is Markovian while the service
time is constant across all jobs. In this case, the queue size distribution P{Q = n} is given by an
alternating-sign series [2] whose terms involve quantities of the form (kr)^(n-k)/(n-k)! and
(kr)^(n-k-1)/(n-k-1)!.
The complex form of P{Q = n} causes the computation of the expression in Theorem
1(b) to be intractable. It turns out that for large values of r and n, P{Q = n} is difficult to
compute accurately since it involves summing up a large number of very small quantities of
alternating sign. Method 2 described in Section 4.2 was used to compute approximations to EG
for various parameter values.
5.2.1 Computation of Job Sharing Coefficient
From [7], we obtain that the average waiting time in queue for the M/D/1 system is given by
W_q = r / (2µ(1 - r)), where µ is the service rate. Using Little's Law [3], we obtain EQ = r + r^2 / (2(1 - r)).
Using this expression and EG values as computed in Section 5.2, J c can be calculated from
Definition 13. In Figures 8, 13, 18, and 23, J c is plotted against r for various combinations of
values of system parameters.
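Assuming EG has been approximated via Method 2 as described above, J_c for the M/D/1 case follows directly from the mean number in system. A one-line Python sketch (names ours):

    def jc_md1(N, L, H, r, EG):
        EQ = r + r * r / (2 * (1 - r))   # mean number in system for M/D/1
        return EG / (N * EQ)             # Definition 13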
5.3 M X /M/1 Queue
The M X /M/1 queue models bulk arrivals, in which jobs arrive according to a Markovian
process, not one by one but in bulk. The number of jobs arriving at a particular time, called the
bulk-size random variable, is a discrete non-negative random variable. A common model for
bulk-size distribution is the Geometric distribution [2]. In addition to arrival rate, l, and
service rate, µ, an M X /M/1 queue requires a third parameter, a, to describe the Geometric
distribution of bulk size. The first two are usually replaced by their ratio l/µ, denoted by r,
as in the case of the M/M/1 queue. The queue size distribution for an M X /M/1 queue is given
by the following [2]: P{Q = 0} = 1 - r, and P{Q = n} = (1 - a)(1 - r) r (a + (1 - a)r)^(n-1) for n > 0.
In this case, we were able to obtain an exact expression for the distribution of G, as follows.
Let us define b = a + (1 - a)r. For n > 0, we can rewrite the distribution of Q as
P{Q = n} = r (1 - b) b^(n-1).
It follows that the overload and underload probabilities are given by the following expressions:
P_UL = P{Q < L} = 1 - r b^(L-1) and P_OL = P{Q > H} = r b^H.
From Theorem 1(a) and the above expressions, we obtain
P{G = 0} = (r b^(L-1))^N + (1 - r b^H)^N - (r b^(L-1) - r b^H)^N.
As in the M/M/1 case, we observe that the distribution of the overload term S is the same as
that of the random variable 1 + X, where X is a Geometric random variable with
parameter b. The sum of J independent instances of S is therefore distributed as J + Y,
where Y is a Negative Binomial random variable with parameters (J, b).
To obtain the distribution of the sum of I independent instances of the underload term, R, we
use the z-transform method, as in the M/M/1 case. The z-transform of this distribution is the
Ith power of the z-transform of R and can be written as
[ r (1 - b) / ( b (1 - r b^(L-1)) ) ]^I * z^(IL) * [ b(1 - r)/(r(1 - b)) + (b/z) + (b/z)^2 + ... + (b/z)^(L-1) ]^I.
The second term above is the Ith power of an (L-1) degree polynomial in (b/z) whose first term
is not 1, but b(1 - r)/(r(1 - b)). In Lemma 2 of the Appendix, we derive an expression for
the coefficient of the kth power of (b/z), denoted by d_k, in the Ith power of this polynomial.
This yields the following formula:
P{R_1 + ... + R_I ≥ k} = [ r (1 - b) / ( b (1 - r b^(L-1)) ) ]^I * sum over j = 0..(IL-k) of d_j b^j,
with d_j = 0 for j > I(L-1).
Putting all these pieces together, we obtain the following expression. For 1 ≤ k ≤ (N-1)L,
P{G ≥ k} = sum over I = 1..N-1 and J = 1..N-I of [ N! / (I! J! (N-I-J)!) ] * [ r(1 - b)/b ]^I *
  [ sum over j = 0..(IL-k) of d_j b^j ] * (r b^H)^J * [ 1 - f_J(k - J - 1) ] * ( r b^(L-1) - r b^H )^(N-I-J),
with f_J(x) = 0 for x < 0.
Here, f_J is the cumulative distribution function of the Negative Binomial (J, b) density, i.e.,
f_J(x) = sum over y = 0..x of C(y + J - 1, J - 1) (1 - b)^J b^y, for x ≥ 0.
5.3.1 Computation of Job Sharing Coefficient
In Section 5.3, we observed that for n > 0, the queue size distribution in the M X /M/1 queue is given
by P{Q = n} = r (1 - b) b^(n-1). Using this expression and the definition of b in Section 5.3, it
follows that EQ = r / (1 - b) = r / ((1 - a)(1 - r)). J_c was then calculated from Definition 13 for
various combinations of system parameters and plotted as a function of r in Figures 9, 10, 11,
14, 15, 16, 19, 20, 21, 24, 25, and 26.
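When exact evaluation is inconvenient, one pragmatic alternative (not used in the paper) is to estimate the M^X/M/1 queue-size distribution by discrete-event simulation and feed the result to the routine of Section 4. The Python sketch below assumes Poisson batch arrivals at rate lam, Geometric(a) batch sizes with P{size = k} = (1-a)a^(k-1), and exponential service at rate mu; the mapping of (lam, mu, a) onto the paper's r is left as an assumption, since it depends on whether l denotes the batch or the job arrival rate.

    import numpy as np

    def simulate_mxm1_pmf(lam, mu, a, horizon=200_000.0, qmax=2000, seed=1):
        """Time-average queue-size pmf for an M^X/M/1 queue (requires stability,
        i.e., lam/((1-a)*mu) < 1).  States above qmax are lumped into qmax."""
        rng = np.random.default_rng(seed)
        t, q = 0.0, 0
        time_in_state = np.zeros(qmax + 1)
        next_arrival = rng.exponential(1 / lam)
        next_departure = np.inf
        while t < horizon:
            t_next = min(next_arrival, next_departure)
            time_in_state[min(q, qmax)] += t_next - t   # credit the current state
            t = t_next
            if t == next_arrival:
                q += rng.geometric(1 - a)               # batch of >= 1 jobs arrives
                next_arrival = t + rng.exponential(1 / lam)
                if next_departure == np.inf:            # server was idle
                    next_departure = t + rng.exponential(1 / mu)
            else:
                q -= 1                                  # one job completes service
                next_departure = (t + rng.exponential(1 / mu)) if q > 0 else np.inf
        return time_in_state / time_in_state.sum()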
6. Numerical Results and Discussion
Using the methods described above, EG and J c were computed as functions of the traffic
intensity for three queuing models: M/M/1, M/D/1, and M X /M/1. Two values, 5 and 20, were
chosen for N, the number of sites in the distributed system. For each value of N, L and H were
varied in two ways. First, L was varied from 1 to 3 while H was set at L + 2. Second, L was
kept fixed at 2, while H was varied from 4 to 6.
Figure 1 through Figure 6 are plots of EG vs. r with various combinations of L
and H for the three queuing models mentioned above. Figure 7 through Figure 26 are plots of
the normalized form of EG, i.e., the Job Sharing Coefficient, J c , vs. r.
6.1 Remarks on plots of EG vs. r
The following are observations based upon inspection of Figure 1 through Figure 3 .
For low values of the traffic intensity r, the mean of G is small or zero because the job arrival
rate is much lower relative to the job processing rate. The queue size at each site is therefore
small and each site is either normal or underloaded so that there is seldom a need for load
sharing. When r is large, (i.e., close to 1), the job arrival rate approaches the job processing
rate and most sites are overloaded. Thus, there tend to be fewer underloaded sites to accept
transferred jobs, so EG is again small or zero. However, in the middle range, where r is
neither too small nor too large, job sharing is possible due to the simultaneous occurrence of
underloaded and overloaded sites. Therefore, as can be seen in the plots, EG is significantly
large in the middle range of values of the traffic intensity.
When the difference between H and L is a constant, the EG vs. r plots shift to the right as L
increases. In particular, EG attains its maximum at larger values of r. This is because H
increases with increasing L. Therefore, queue size has to be correspondingly greater for sites
to be overloaded. Hence, the traffic intensity must be higher to enable occurrence of
overloaded sites.
The value of the maximum of EG also increases when L is increased while keeping (H-L)
constant. This is because the total underload in the system (see Definition 8) increases as L
increases. The maximum number of sharable jobs is also an increasing function of L, as noted
in Observation 8.
Figure 4 through Figure 6 are plots of EG vs. r, for the three queuing disciplines mentioned above, in
which the value of the lower threshold, L, is fixed at 2, while the upper threshold, H, is varied
from 4 to 6. In all cases the maximum of EG decreases as H increases. This is because for a
fixed value of r, the probability of overload is a decreasing function of H. Hence, as H
increases the need for load sharing tends to decrease. At the same time, since L is fixed, the
total underload in the system remains constant. Therefore, the mean number of sharable jobs
decreases as H increases while keeping L fixed.
EG tends to be smaller in M/D/1 case than in M/M/1 case. The reason is that there is a much
greater variation in service time in M/M/1 case as compared to M/D/1 case; so there is a
greater probability of the simultaneous existence of overloaded and underloaded sites in the
former case than the latter. On the other hand, in M X /M/1 queuing model with a = 0.75 , EG
vs. r plots are shifted to the left as compared to M/M/1 case. This is because in M X /M/1
case, there is a higher probability of bulk arrivals, which in turn increases the probability that a
site will become overloaded. Hence the need for load sharing will occur at smaller values of
the traffic intensity in M X /M/1 case than in M/M/1 case.
6.2 Remarks on plots of J c vs. r
Since the Job Sharing Coefficient is a normalized version of EG (see Definition 13), plots of J c
vs. r are similar to those of EG vs. r. However, due to the nonlinear relationship of both EG
and EQ with respect to the traffic intensity, some noteworthy observations can be made based
on Figure 7 through Figure 26. These are described next.
Despite the fact that the number of sites, N, appears in the denominator of the definition of J c
we see that it tends to increase with increasing N for all queuing disciplines. For example, in
the M/M/1 case, J_c reaches a peak of 0.15 in one of the configurations considered, and 0.22 in
another. The reason is that P{G > 0} increases nonlinearly with respect to
increasing N. For the M X /M/1 queue with a = 0.75 and N = 20, the mean number of
sharable jobs is as high as 45% of the mean number of all jobs in the system. The potential
for load sharing, and hence the benefits from successful load sharing, appear to be high. This
is so especially for large N.
As can be seen in Figure 9 - Figure 11, and Figure 19 - Figure 21, M X /M/1 case shows
interesting behavior with respect to the parameter a. We note that as the parameter a
increases, the peak for J_c occurs for smaller values of r. For example, with a = 0.75
the peak for J_c occurs at a smaller value of r than the r = 0.52 at which it occurs in the M X /M/1 queue
with a = 0.5. For the same M X /M/1 case, J_c is seen to be fairly high even for small values of
the traffic intensity. The same pattern is seen for the M X /M/1 case with the other parameter settings considered.
We observe that as a increases, the curves for J c tend to shift towards the left. The reason is
that as a increases, the probability of large bulk size arrivals also increases. Hence, the
probability that all sites will be overloaded is high even for small values of r .
Another interesting observation on the M X /M/1 queue for a = 0.75 is that, for some of the parameter
settings considered, the plot for J_c is bimodal even though Figure 3 shows that the plots of EG vs. traffic
intensity for the same parameter values are unimodal. Furthermore, although the peaks move
to the right as we increase L and H, the values of the peaks actually appear to decrease slightly.
These observations illustrate that the non-linearity of both EG and EQ results in somewhat
unexpected phenomena especially when the number of sharable jobs is potentially large.
6.3 Comparison with Previous Work
It is interesting to compare our numerical results with those of Rommel [6], especially for the
larger value of N. Since Rommel plots the Probability of Load Balancing Success(PLBS)
versus traffic intensity while we plot the mean number of jobs that can usefully be shared, these
two are not directly comparable. However, in terms of the information that they give to system
designers and analysts, we can contrast them. From Figure 27 through Figure 32, we see that across
all queuing disciplines considered in this paper, for similar values of L and H, the
PLBS is essentially flat and equal to 1 for a large range of values of r. On the other hand, for
the same set of parameter values, the curves for EG and J c are much sharper and more
sensitive to r. This striking contrast becomes more visible as N increases. In this sense, our
results are much more informative than Rommel's. This is natural because Rommel's PLBS is
essentially the same as Probability{G > 0} while our formulation takes into account the entire
probability distribution of G. EG and J c both provide detailed insights into the average
number of jobs that can usefully be transferred from site to site rather than merely providing
the information that load sharing is likely to succeed. In particular, EG is the average of the
maximum number of jobs that can be shared among processing sites in the system.
Improvement in overall system performance using load sharing is limited, on the average, by
some function of EG.
7. Conclusions
In this paper, we have precisely defined the notion of the number of jobs that can usefully be
transferred across sites in homogeneous distributed computing systems. This random variable,
called the number of sharable jobs and denoted as G, provides useful information to system
designers and analysts. A number of properties of G have been derived, including a general
expression for its probability distribution, independent of the particular queuing discipline at
each site of the distributed system. The computation of the distribution of G has been
illustrated for three important queuing models. In two of these, exact expressions have been
derived using the general formula. Two methods have been described to compute the
probability distribution of G in cases where exact expressions are not obtainable.
A quantity called the job sharing coefficient, denoted as J c , which expresses the potential for
load sharing normalized with respect to the mean number of jobs in the system, has been
defined. Interpretations and applications of this measure have been discussed. J c has been
computed for various values of key system parameters and plotted against the traffic intensity.
When compared to previous work in this area, our results provide finer and more detailed
insight about the number of jobs which can usefully be shared in a distributed computing
system. In particular, our work provides an upper bound on the number of jobs that can
usefully be shared across sites in the system and hence on the overall system performance
improvement that can be obtained by load sharing. Both the measures we have described can
play important roles in the analysis and design of distributed computing systems.
Acknowledgments
We are greatly indebted to Professor H. Nagaraja, Department of Statistics, The Ohio State
University, for many helpful discussions and suggestions, especially concerning the proof of
Theorem 1. The first author would like to express his gratefulness to Professor Jack W. Smith,
Jr., Director of the Division of Medical Informatics, College of Medicine, The Ohio State
University, for many years of staunch support and encouragement. And, in particular, for
providing a supportive environment for research and study.
--R
A First Course in Combinatorial Mathematics
Fundamentals of Queuing Theory
Queuing Systems
"Load Balancing in homogeneous broadcast distributed systems"
Introduction to the Theory of Statistics
"The Probability of load balancing success in a homogeneous network"
Elements of Queuing Theory with Applications.
"Load Distributing in Locally Distributed Systems"
--TR
--CTR
M. Sriram Iyengar , Mukesh Singhal, Effect of network latency on load sharing in distributed systems, Journal of Parallel and Distributed Computing, v.66 n.6, p.839-853, June 2006 | distributed systems;load sharing;job sharing coefficient;sharable jobs |
631167 | Guaranteeing Real-Time Requirements With Resource-Based Calibration of Periodic Processes. | This paper presents a comprehensive design methodology for guaranteeing end-to-end requirements of real-time systems. Applications are structured as a set of process components connected by asynchronous channels, in which the endpoints are the systems external inputs and outputs. Timing constraints are then postulated between these inputs and outputs; they express properties such as end-to-end propagation delay, temporal input-sampling correlation, and allowable separation times between updated output values. The automated design method works as follows: First new tasks are created to correlate related inputs, and an optimization algorithm, whose objective is to minimize CPU utilization, transforms the end-to-end requirements into a set of intermediate rate constraints on the tasks. If the algorithm fails, a restructuring tool attempts to eliminate bottlenecks by transforming the application, which is then re-submitted into the assignment algorithm. The final result is a schedulable set of fully periodic tasks, which collaboratively maintain the end-to-end constraints. | Introduction
Most real-time systems possess only a small handful of inherent timing constraints which will "make
or break" their correctness. These are called end-to-end constraints, and they are established on
the systems' external inputs and outputs. Two examples are:
(1) Temperature updates rely on pressure and temperature readings correlated within 10ms.
(2) Navigation coordinates are updated at a minimum rate of 40ms, and a maximum rate of 80ms.
But while such end-to-end timing parameters may indeed be few in number, maintaining functionally
correct end-to-end values may involve a large set of interacting components. Thus, to ensure
that the end-to-end constraints are satisfied, each of these components will, in turn, be subject to
their own intermediate timing constraints. In this manner a small handful of end-to-end constraints
may - in even a modest system - yield a great many intermediate constraints.
The task of imposing timing parameters on the functional components is a complex one, and it
mandates some careful engineering. Consider example (2) above. In an avionics system, a "navigation
update" may require such inputs as "current heading," airspeed, pitch, roll, etc.; each sampled
within varying degrees of accuracy. Moreover, these attributes are used by other subsystems, each
of which imposes its own tolerance to delay, and possesses its own output rate. Further, the navigation
unit may itself have other outputs, which may have to be delivered at rates faster than
40ms, or perhaps slower than 80ms. And to top it off, subsystems may share limited computer
resources. A good engineer balances such factors, performs extensive trade-off analysis, simulations
and sensitivity analysis, and proceeds to assign the constraints.
These intermediate constraints are inevitably on the conservative side, and moreover, they are
conveyed to the programmers in terms of constant values. Thus a scenario like the following is
often played out: The design engineers mandate that functional units A, B and C execute with
periods 65ms, 22ms and 27ms, respectively. The programmers code up the system, and find that C
grossly over-utilizes its CPU; further, they discover that most of C's outputs are not being read by
the other subsystems. And so, they go back to the engineers and "negotiate" for new periods - for
example 60ms, 10ms and 32ms. This process may continue for many iterations, until the system
finally gets fabricated.
This scenario is due to a simple fact: the end-to-end requirements allow many possibilities for
the intermediate constraints, and engineers make what they consider to be a rational selection.
However, the basis for this selection can only include rough notions of software structuring and
scheduling policies - after all, many times the hardware is not even fabricated at this point!
Our Approach. In this paper we present an alternative strategy, which maintains the timing constraints
in their end-to-end form for as long as possible. Our design method iteratively instantiates
the intermediate constraints, all the while taking advantage of the leeway inherent in the end-to-end
constraints. If the assignment algorithm fails to produce a full set of intermediate constraints, potential
bottlenecks are identified. At this point an application analysis tool takes over, determines
potential solutions to the bottleneck, and if possible, restructures the application to avoid it. The
result is then re-submitted into the assignment algorithm.
Domain of Applicability. Due to the complexity of the general problem, in this paper we place
the following restrictions on the applications that we handle.
Restriction 1: We assume our applications possess three classes of timing constraints which we
call freshness, correlation and separation.
ffl A freshness constraint (sometimes called propagation delay) bounds the time it takes for data to
flow through the system. For example, assume that an external output Y is a function of some
system input X. Then a freshness relationship between X and Y might be: "If Y is delivered
at time t, then the X-value used to compute Y is sampled no earlier than t - 10ms." We use
the following notation to denote this constraint: F(Y | X) = 10ms.
ffl A correlation constraint limits the maximum time-skew between several inputs used to produce
an output. For example, if X 1 and X 2 are used to produce Y , then a correlation relationship
may be "if Y is delivered at time t, then the X 1 and X 2 values used to compute Y are sampled
within no more than 2ms of each other." We denote this constraint as C(Y | X_1, X_2) = 2ms.
ffl A separation constraint constrains the jitter between consecutive values on a single output
say Y. For example, "Y is delivered at a minimum rate of 3ms, and a maximum rate
of 13ms," denoted as l(Y) = 3ms and u(Y) = 13ms.
While this constraint classification is not complete, it is sufficiently powerful to represent many
timing properties one finds in a requirements document. (Our initial examples (1) and (2) are
correlation and separation constraints, respectively.) Note that a single output Y 1 may - either
directly or indirectly - be subject to several interdependent constraints. For example, Y 1 might
require tightly correlated inputs, but may abide with relatively lax freshness constraints. However,
perhaps Y 1 also requires data from an intermediate subsystem which is, in turn, shared with a very
high-rate output Y 2 .
Restriction 2: All subsystems execute on a single CPU. Our approach can be extended for use
in distributed systems, a topic we revisit in Section 7. For the sake of presenting the intermediate
constraint-assignment technique, in this paper we limit ourselves to uniprocessor systems.
Restriction 3: The entity-relationships within a subsystem are already specified. For example,
if a high-rate video stream passes through a monolithic, compute-intensive filter task, this situation
may easily cause a bottleneck. If our algorithm fails to find a proper intermediate timing constraint
for the filter, the restructuring tool will attempt to optimize it as much as possible. In the end,
however, it cannot redesign the system.
Finally, we stress that we are not offering a completely automatic solution. Even with a fully
periodic task model, assigning periods to the intermediate components is a complex, nonlinear
optimization problem which - at worst - can become combinatorially expensive. As for software
restructuring, the specific tactics used to remove bottlenecks will often require user interaction.
Problem and Solution Strategy. We duly note the above restrictions, and tackle the intermediate
constraint-assignment problem, as rendered by the following ingredients:
ffl A set of external inputs {X_1, X_2, ...} and outputs {Y_1, Y_2, ...}, and the end-to-end constraints
between them.
ffl A set of intermediate component tasks {τ_1, ..., τ_n}.
ffl A task graph, denoting the communication paths from the inputs, through the tasks, and to
outputs.
Solving the problem requires setting timing constraints for the intermediate components, so that
all end-to-end constraints are met. Moreover, during any interval of time utilization may never
exceed 100%.
Our solution employs the following ingredients: (1) A periodic, preemptive tasking model (where
it is our algorithm's duty to assign the rates); (2) a buffered, asynchronous communication scheme,
allowing us to keep down IPC times; (3) the period-assignment, optimization algorithm, which
forms the heart of the approach; and (4) the software-restructuring tool, which takes over when
period-assignment fails.
Related Work. This research was, in large part, inspired by the real-time transaction model
proposed by Burns et. al. in [3]. While the model was formulated to express database applications,
it can easily incorporate variants of our freshness and correlation constraints. In the analogue to
freshness, a persistent object has "absolute consistency within t" when it corresponds to real-world
samples taken within maximum drift of t. In the analogue to correlation, a set of data objects
possesses "relative consistency within t" when all of the set's elements are sampled within an
interval of time t.
We believe that in output-driven applications of the variety we address, separation constraints
are also necessary. Without postulating a minimum rate requirement, the freshness and correlation
constraints can be vacuously satisfied - by never outputting any values! Thus the separation
constraints enforce the system's progress over time.
Burns et. al. also propose a method for deriving the intermediate constraints; as in the data
model, this approach was our departure point. Here the high-level requirements are re-written as
a set of constraints on task periods and deadlines, and the transformed constraints can hopefully
be solved. There is a big drawback, however: the correlation and freshness constraints can inordinately
tighten deadlines. E.g., if a task's inputs must be correlated within a very tight degree of
accuracy - say, several nanoseconds - the task's deadline has to be tightened accordingly. Similar
problems accrue for freshness constraints. The net result may be an over-constrained system, and
a potentially unschedulable one.
Our approach is different. With respect to tightly correlated samples, we put the emphasis on
simply getting the data into the system, and then passing through in due time. However, since
this in turn causes many different samples flowing through the system at varying rates, we perform
"traffic control" via a novel use of "virtual sequence numbering." This results in significantly looser
periods, constrained mainly by the freshness and separation requirements. We also present a period
assignment problem which is optimal - though quite expensive in the worst case.
This work was also influenced by Jeffay's ``real-time producer/consumer model'' [10], which
possesses a task-graph structure similar to ours. In this model rates are chosen so that all messages
"produced" are eventually "consumed." This semantics leads to a tight coupling between the
execution of a consumer to that of its producers; thus it seems difficult to accommodate relative
constraints such as those based on freshness.
Klein et. al. surveys the current engineering practice used in developing industrial real-time
systems [11]. As is stressed, the intermediate constraints should be primarily a function of the
end-to-end constraints, but should, if possible, take into account sound real-time scheduling tech-
niques. At this point, however, the "state-of-the-art" is the practice of trial and error, as guided
by engineering experience. And this is exactly the problem we address in this paper.
The remainder of the paper is organized as follows. In Section 2 we introduce the application
model and formally define our problem. In Section 3 we show our method of transforming
the end-to-end constraints into intermediate constraints on the tasks. In Section 4 we describe
the constraint-solver in detail, and push through a small example. In Section 5 we describe the
application transformer, and in Section 6 we show how the executable application is finally built.
2 Problem Description and Overview of Solution
We re-state our problem as follows:
ffl Given a task graph with end-to-end timing constraints on its inputs and outputs,
ffl Derive periods, offsets and deadlines for every task,
ffl Such that the end-to-end requirements are met.
In this section we define these terms, and present an overview of our solution strategy.
2.1 The Asynchronous Task Graph
An application is rendered in an asynchronous task graph (ATG) format, where for a given graph
the node set is P ∪ D, with P = {τ_1, ..., τ_n}, i.e., the set of tasks, and D = {d_1, ..., d_m}, a set of
asynchronous, buffered channels. We note that the external outputs and inputs are simply
typed nodes in D.
E ⊆ (P × D) ∪ (D × P) is a set of directed edges, such that if τ_i → d and τ_j → d are both
in E, then i = j. That is, each channel has a single-writer/multi-reader restriction.
Tasks τ_i ∈ P have the following attributes: a period T_i, an offset O_i ≥ 0 (denoting the earliest
start-time from the start-of-period), a deadline D_i ≤ T_i (denoting the latest finish-time relative
to the start-of-period), and a maximum execution time e_i. The interval [O_i, D_i] constrains
the window W_i of execution, where W_i = [O_i, D_i].
Note that initially the T i 's, O i 's and D i 's are open variables, and they get instantiated by the
constraint-solver.
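One possible in-memory representation of an ATG, with the timing attributes left open for the constraint-solver to fill in, is sketched below in Python (a hypothetical encoding; the paper does not prescribe one):

    from dataclasses import dataclass
    from typing import Dict, List, Optional

    @dataclass
    class Task:                           # a node tau_i in P
        name: str
        exec_time: float                  # e_i
        period: Optional[float] = None    # T_i, assigned by the constraint solver
        offset: Optional[float] = None    # O_i
        deadline: Optional[float] = None  # D_i

    @dataclass
    class ATG:
        tasks: Dict[str, Task]            # P
        writer: Dict[str, str]            # channel d -> its single writer (task or external input)
        readers: Dict[str, List[str]]     # channel d -> consuming tasks / external outputs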
The semantics of an ATG is as follows. Whenever a task τ_i executes, it reads data from all
incoming channels d_j corresponding to the edges d_j → τ_i, and writes to all channels d_l corresponding
to the edges τ_i → d_l. The actual ordering imposed on the reads and writes is inferred by the task's code.
All reads and writes on channels are asynchronous and non-blocking. While a writer always
inserts a value onto the end of the channel, a reader can (and many times will) read data from
any location. For example, perhaps a writer runs at a period of 20ms, with two readers running at
120ms and 40ms, respectively. The first reader may use every sixth value (and neglect the others),
whereas the second reader may use every other value.
But this scheme raises a "chicken and egg" issue, one of many that we faced in this work. One
of our objectives is to support software reuse, in which functional components may be deployed
in different systems - and have their timing parameters automatically calibrated to the physical
limitations of each. But this objective would be hindered if a designer had to employ the following
tedious method: (1) to first run the constraint-solver, which would find the T i 's, and then, based
on the results; (2) to hand-patch all of the modules with specialized IPC code, ensuring that the
intermediate tasks correctly correlate their input samples.
Luckily, the ATG semantics enables us to automatically support this process. Consider the
ATG in
Figure
1(A), whose node 4 is "blown up" in Figure 1(B). As far as the programmer is
concerned the task 4 has a (yet-to-be-determined) period T 4 , and a set of asynchronous channels,
accessible via generic operations such as "Read" and "Write." Moreover, the channels can be
treated both as unbounded, and as non-blocking.
After the constraint-assignment algorithm determines the task rates, a post-processing phase
determines the actual space required for each channel. Then they are automatically implemented
as circular, slotted buffers. This is accomplished by running an "awk" script on each module, which
instantiates each "Read" and "Write" operation to select the correct input value.
This type of scheme allows us to minimize the overhead incurred when blocking communication
is used, and to concentrate exclusively on the assignment problem. In fact - as we show in the
sequel - communication can be completely unconditional, in that we do not even require short
locking for consistency. However, we pay a price for avoiding this overhead; namely, that the
period assignments must ensure that no writer can overtake a reader currently accessing its slot.
Moreover, we note that our timing constraints define a system driven by time and output re-
quirements. This is in contrast to reactive paradigms such as ESTEREL [4], which are input-driven.
Analogous to the "conceptually infinite buffering" assumptions, the rate assignment algorithm assumes
that the external inputs are always fresh and available. The derived input-sampling rates
then determine the true requirements on input-availability. And since an input X can be connected
to another ATG's output Y , these requirements would be imposed on Y 's timing constraints.
Figure 1: (A) A task graph and (B) code for τ_4.
2.2 A Small Example
As a simple illustration, consider the system whose ATG is shown in Figure 1(A). The system
is composed of six interacting tasks with three external inputs and two external outputs. The
application's characteristics are as follows:
Freshness F (Y 1
Correlation
Separation
Max Execution e
Times: e
While the system is small, it serves to illustrate several facets of the problem: (1) There may
be many possible choices of rates for each task; (2) correlation constraints may be tight compared
to the allowable end-to-end delay; (3) data streams may be shared by several outputs (in this case
that originating at X 2 ); and (4) outputs with the tightest separation constraints may incur the
highest execution-time costs (in this case Y 1 , which exclusively requires 1 ).
2.3 Problem Components
Guaranteeing the end-to-end constraints actually poses three sub-problems, which we define as
follows.
Correctness: Let C be the set of derived, intermediate constraints and E be the set of end-to-end
constraints. Then all system behaviors that satisfy C also satisfy E .
Feasibility: The task executions inferred by C never demand an interval of time during which
utilization exceeds 100%.
Schedulability: There is a scheduling algorithm which can efficiently maintain the intermediate
constraints C, and preserve feasibility.
In the problem we address, the three issues cannot be decoupled. Correctness, for example, is often
treated as verification problem using a logic such as RTL [9]. Certainly, given the ATG we could
formulate E in RTL and query whether the constraint set is satisfiable. However, a "yes" answer
would give us little insight into finding a good choice for C - which must, after all, be simple enough
to schedule. Or, in the case of methods like model-checking ([1], etc.), we could determine whether
C)E is invariant with respect to the system. But again, this would be an a posteriori solution, and
assume that we already possess C. On the other hand, a system that is feasible may still not be
schedulable under a known algorithm; i.e., one that can be efficiently managed by a realistic kernel.
In this paper we put our emphasis on the first two issues. However, we have also imposed a task
model for which the greatest number of efficient scheduling algorithms are known: simple, periodic
dispatching with offsets and deadlines. In essence, by restricting C's free variables to the T i 's, O i 's
and D i 's, we ensure that feasible solutions to C can be easily checked for schedulability.
The problem of scheduling a set of periodic real-time tasks on a single CPU has been studied
for many years. Such a task set can be dispatched by a calendar-based, non-preemptive schedule
(e.g., [16, 17, 18]), or by a preemptive, static-priority scheme (e.g., [5, 12, 13, 15]). For the most
part our results are independent of any particular scheduling strategy, and can be used in concert
with either non-preemptive or preemptive dispatching.
However, in the sequel we frequently assume an underlying static-priority architecture. This is
for two reasons. First, a straightforward priority assignment can often capture most of the ATG's
precedence relationships, which obviates the need for superfluous offset and deadline variables.
Thus the space of feasible solutions can be simplified, which in turn reduces the constraint-solver's
work. Second, priority-based scheduling has recently been shown to support all of the ATG's
inherent timing requirements: pre-period deadlines [2], precedence constrained sub-tasks [8], and
offsets [14]. A good overview to static priority scheduling may be found in [5].
2.4 Overview of the Solution
Our solution is carried out in a four-step process, as shown in Figure 2. In Step 1, the intermediate
constraints C are derived, which postulates the periods, deadlines and offsets as free variables. The
challenge here is to balance several factors - correctness, feasibility and simplicity. That is, we
Figure 2: Overview of the approach (inputs: application structure, end-to-end constraints, task libraries; stages: constraint derivation, constraint satisfaction with the restructuring tool invoked on failure, and buffer allocation, yielding a feasible and then final task set).
require that any solution to C will enforce the end-to-end constraints E , and that any solution must
also be feasible. At the same time, we want to keep C as simple as possible, and to ensure that
finding a solution is a relatively straightforward venture. This is particularly important since the
feasibility criterion - defined by CPU utilization - introduces non-linearities into the constraint
set. In balancing our goals we impose additional structure on the application; e.g., by creating new
sampler tasks to get tightly correlated inputs into the system.
In Step 2 the constraint-solver finds a solution to C, which is done in several steps. First C is
solved for the period variables, the T i 's, and then the resulting system is solved for the offsets and
deadlines. Throughout this process we use several heuristics, which exploit the ATG's structure.
If a solution to C cannot be found, the problem often lies in the original design itself. For
example, perhaps a single, stateless server handles inputs from multiple clients, all of which run at
wildly different rates. Step 3's restructuring tool helps the programmer eliminate such bottlenecks,
by automatically replicating strategic parts of the ATG.
In Step 4, the derived rates are used to reserve memory for the channels, and to instantiate
the "Read" and "Write" operations. For example, consider τ4 in Figure 1(B), which reads from d2.
Now, assume that the constraint-solver assigns τ4 and τ2 periods of 30ms and 10ms, respectively.
Then τ4's Read operation on d2 would be replaced by a macro, which would read every third data
item in the buffer - and would skip over the other two.
Harmonicity. The above scheme works only if a producer can always ensure that it is not
overtaking its consumers, and if the consumers can always determine which data item is the correct
one to read. For example, τ4's job in managing d2 is easy: since T4 = 3 · T2, it simply needs to
read every third item out of the channel.
But τ4 has another input channel, d1; moreover, temporally correlated samples from the two
channels have to be used to produce a result. What would happen if the solver assigned τ1 a period
of 30ms, but gave τ2 a period of 7ms?
If the tasks are scheduled in rate-monotonic order, then d2 is filled five times during τ4's first
frame, four times during the second frame, etc. In fact, since 30 and 7 are relatively prime, τ4's
selection logic to correlate inputs would be rather complicated. One solution would be to time-stamp
each input X 1 and X 2 , and then pass these stamps along with all intermediate results. But
this would assume access to a precise hardware timer; moreover, time-stamps for multiple inputs
would have to be composed in some manner. Worst of all, each small data value (e.g., an integer)
would carry a large amount of reference information.
The obvious solution is the one that we adopt: to ensure that every "chain" possesses a common
base clock-rate, which is exactly the rate of the task at the head of the chain. In other words, we
impose a harmonicity constraint between (producer, consumer) pairs (i.e., pairs ⟨τp, τc⟩ for which
there are edges τp → d and d → τc).
Definition 2.1 (Harmonicity) A task τ2 is harmonic with respect to a task τ1 if T2 is exactly
divisible by T1 (represented as T2 | T1).
Consider Figure 1(A), in which there are three chains imposing harmonic relationships. In this
tightly coupled system the period of every downstream task must be an exact multiple of the periods
of the tasks upstream in its chains (e.g., T4 must be divisible by the periods of the tasks that feed τ4).
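As a concrete illustration of Definition 2.1, the small Python sketch below (not from the paper; the periods are made up) checks the harmonicity of a single (producer, consumer) pair and of a whole chain:

    def is_harmonic(t_consumer, t_producer):
        """Definition 2.1: the consumer's period is an exact multiple of the producer's."""
        return t_consumer % t_producer == 0

    def chain_is_harmonic(periods):
        """A chain tau_1 -> ... -> tau_n is harmonic if every consecutive (producer, consumer) pair is."""
        return all(is_harmonic(c, p) for p, c in zip(periods, periods[1:]))

    # Illustrative periods only (head of the chain first).
    assert chain_is_harmonic([10, 30, 30])     # each period is a multiple of its predecessor's
    assert not chain_is_harmonic([30, 7])      # 30 and 7 are relatively prime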
3 Deriving the Constraints
In this section we show how the intermediate constraints are derived, and how they (conservatively)
guarantee the end-to-end requirements. We start the process by synthesizing the intermediate
correlation constraints, and then proceed to treat freshness and separation.
3.1 Synthesizing Correlation Constraints
Recall our example task graph in Figure 3(A), where the three inputs X1, X2 and X3 are sampled
by three separate tasks. If we wish to guarantee that τ1's sampling of X1 is correctly correlated
to τ2's sampling of X2, we must pick short periods for both τ1 and τ2. Indeed, in many practical
real-time systems, the correlation requirements may very well be tight, and way out of proportion
with the freshness constraints. This typically results in periods that get tightened exclusively to
accommodate correlation, which can easily lead to gross over-utilization. Engineers often call this
problem "over-sampling," which is somewhat of a misnomer, since sampling rates may be tuned
expressly for coordinating inputs. Instead, the problem arises from poor coupling of the sampling
and computational activities.
Thus our approach is to decouple these components as much as possible, and to create specialized
samplers for related inputs. For a given ATG, the sampler derivation is performed in the following
manner.
[Figure 3: (A) The original task graph, with data objects d1 and d2 feeding the output tasks; (B) the transformed task graph, in which a single sampler τs reads the correlated inputs and writes dX1, dX2 and dX3.]
foreach correlation constraint C_l(Y_k | X_l1, ..., X_lm):
    create the set T_l of correlated inputs (and their associated outputs) for C_l.
foreach pair of sets T_l, T_k:
    if there is a common input X with outputs Y_i and Y_j such that the chains
    from X to Y_i and from X to Y_j share a common task, then
        merge T_l and T_k into a single group.
foreach resulting group T_l:
    identify the set S_l of sampling tasks associated with the inputs in T_l;
    if |S_l| > 1, create a periodic sampler τ_{s_l} to take samples for all inputs in T_l.

Thus the incoming channels from the inputs in T_l to the tasks in S_l are "intercepted" by the new
sampler task τ_{s_l}.
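One plausible reading of this grouping step is sketched below in Python. It is our illustration, not the authors' tool: the chain-sharing test is approximated by simply merging groups that share an input, and the function and variable names are ours.

    from itertools import combinations

    def derive_samplers(correlated_input_sets, sampling_task_of):
        """correlated_input_sets: one set of input names per correlation constraint.
        sampling_task_of: maps each input name to the task that currently samples it.
        Returns (group, sampler_name) pairs for groups that need a dedicated sampler."""
        groups = [set(s) for s in correlated_input_sets]
        merged = True
        while merged:                      # merge groups that overlap (stand-in for the chain test)
            merged = False
            for a, b in combinations(range(len(groups)), 2):
                if groups[a] & groups[b]:
                    groups[a] |= groups.pop(b)
                    merged = True
                    break
        samplers = []
        for i, group in enumerate(groups):
            sampling_tasks = {sampling_task_of[x] for x in group}
            if len(sampling_tasks) > 1:    # |S_l| > 1: create a periodic sampler
                samplers.append((group, f"sampler_{i}"))
        return samplers

    # X2 is shared by both correlation constraints, so a single sampler is created.
    print(derive_samplers([{"X1", "X2"}, {"X2", "X3"}],
                          {"X1": "tau1", "X2": "tau2", "X3": "tau3"}))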
Returning to our original example, repeated in Figure 3(A): since both correlated input pairs
share the center stream, the result is a single group of correlated inputs {X1, X2, X3}. This, in
turn, results in the formation of the single sampler τs. We assume τs has a low execution cost of 1.
The new, transformed graph is shown in Figure 3(B).
As for the deadline-offset requirements, a sampler τ_{s_l} is constrained by the following trivial
relationship:
D_{s_l} − O_{s_l} ≤ t_cor,
where t_cor is the maximum allowable time-drift on all correlated inputs read by τ_{s_l}.
The sampler tasks ensure that correlated inputs are read into the system within their appropriate
time bounds. This allows us to solve for process rates as a function of both the freshness and
separation constraints, which vastly reduces the search space.
However we cannot ignore correlation altogether, since merely sampling the inputs at the same
time does not guarantee that they will remain correlated as they pass through the system. The
input samples may be processed by different streams (running at different rates), and thus they
may still reach their join points at different absolute times.
For example, refer back to Figure 3, in which the freshness bound F(Y2|X2) differs from the bound
on its correlated partner. This disparity is the result of an under-specified system, and may have
to be tightened. The reason is simple: if τ6's period is derived by using correlation as the dominant
metric, the resulting solution may violate the tighter freshness constraints. On the other hand, if
freshness is the dominant metric, then the correlation constraints may not be achieved.
We solve this problem by eliminating the "noise" that exists between the different sets of require-
ments. Thus, whenever a fresh output is required, we ensure that there are correlated data sets to
produce it. In our example this leads to tightening the original freshness requirement F(Y2|X2) to
match the tighter bound on its correlated partner.
We invoke this technique as a general principle: for an output Y with a correlated input set, each
associated freshness constraint F(Y|Xi) is tightened to the minimum freshness bound taken over
all inputs in that set.
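The adjustment can be written in a few lines; the sketch below is our reading of the rule (tighten every F(Y|Xi) to the minimum over Y's correlated inputs), with illustrative names and numbers rather than the paper's own.

    def tighten_freshness(freshness, correlated_sets):
        """freshness: dict mapping (output, input) -> bound t_f.
        correlated_sets: dict mapping output -> set of inputs that must stay correlated.
        Tightens each F(Y|X) to the minimum bound over Y's correlated inputs."""
        adjusted = dict(freshness)
        for y, inputs in correlated_sets.items():
            bounds = [freshness[(y, x)] for x in inputs if (y, x) in freshness]
            if bounds:
                tightest = min(bounds)
                for x in inputs:
                    if (y, x) in adjusted:
                        adjusted[(y, x)] = tightest
        return adjusted

    # Illustrative numbers only.
    print(tighten_freshness({("Y2", "X2"): 20, ("Y2", "X3"): 15},
                            {"Y2": {"X2", "X3"}}))
    # -> {('Y2', 'X2'): 15, ('Y2', 'X3'): 15}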
3.2 Synthesizing Freshness Constraints
Consider a freshness constraint F(Y|X) = t_f, and recall its definition:
For every output of Y at some time t, the value of X used to compute Y must have
been read no earlier than time t − t_f.
As data flows through a task chain from X to Y, each task τ adds two types of delay overhead to
the data's end-to-end response time. One type is execution time, i.e., the time required for τ to
process the data, produce outputs, etc. In this paper we assume that τ's maximum execution time
is fixed, and has already been optimized as much as possible by a good compiler.
The other type of delay is transmission latency, which is imposed while τ waits for its correlated
inputs to arrive for processing. Transmission time is not fixed; rather, it is largely dependent on
our derived process-based constraints. Thus minimizing transmission time is our goal in achieving
tight freshness constraints.
Fortunately, the harmonicity relationship between producers and consumers allows us to accomplish
this goal. Consider a chain τ1 → τ2 → · · · → τn, where τn is the output task and τ1 is the input
task. From the harmonicity constraints, Tn is an exact multiple of every period in the chain.
Assuming that all tasks are started at time 0, whenever there is an invocation of the output task
τn there are simultaneous invocations of every task in the freshness chain.

[Figure 4: Freshness constraints with coupled tasks - a three-task chain with offsets O1, O2, O3,
subject to (1) harmonicity (T3 | T2, T2 | T1), (2) precedence (τ1 ≺ τ2 ≺ τ3), and (3) a chain-size
bound on the window from O1 to D3.]
Consider Figure 4, in which there are three tasks τ1, τ2 and τ3 in a freshness chain. From the
harmonicity assumption we have T3 | T2 and T2 | T1.
The other constraints are derived for the entire chain, under the scenario that within each task's
minor frame, input data gets read in, processed, and output data is produced. Under these
constraints, the worst-case end-to-end delay is given by D3 − O1, and the freshness requirement is
guaranteed if the following holds:
D3 − O1 ≤ t_f.
Note that we also require a precedence between each producer/consumer task pair. As we show in
Figure 4, this can be accomplished via the offset and deadline variables - i.e., by mandating that
D_i ≤ O_{i+1} for each consecutive pair τ_i, τ_{i+1} in the chain.
But this approach has the following obvious drawback: The end-to-end freshness t f must be
divided into fixed portions of slack at each node. On a global system-wide level, this type of rigid
flow control is not the best solution. It is not clear how to distribute the slack between intermediate
tasks, without over-constraining the system. More importantly, with a rigid slack distribution, a
consumer task would not be allowed to execute before its offset, even if its input data is available. 2
Rather, we make a straightforward priority assignment for the tasks in each chain, and let
the scheduler enforce the precedence between them. In this manner, we can do away with the
intermediate deadline and offset variables. This leads to the following rule of thumb:
If the consumer task is not the head or tail of a chain, then its precedence requirement
is deferred to the scheduler. Otherwise, the precedence requirement is satisfied through
assignment of offsets.
Example. Consider the freshness constraints on Y1 for our example in Figure 3(A). The requirement
F(Y1|X1) specifies a chain window of size D4 − Os ≤ 30. Since τ1 is an intermediate task, we now
have the precedence τs ≺ τ1, which will be handled by the scheduler. However, according to our
"rule of thumb," we use the offset for τ4 to handle the precedence τ1 ≺ τ4. This leads to the
constraints D1 ≤ O4 and D4 − Os ≤ 30. Similar inequalities are derived for the remaining freshness
constraints, the result of which is shown in Table 1.

[Table 1: Constraints due to freshness requirements.]
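The rule of thumb lends itself to a short generator; the Python sketch below (ours, with hypothetical names and a simplified treatment of the head task's offset) emits only the endpoint offset/deadline constraints and leaves the interior precedences to the scheduler.

    def freshness_constraints(chain, t_f):
        """chain: task names from head (input/sampler) to tail (output task).
        t_f: end-to-end freshness bound for this chain.
        Returns symbolic constraints; interior precedences are deferred to the scheduler."""
        head, tail = chain[0], chain[-1]
        constraints = [f"D_{tail} - O_{head} <= {t_f}"]            # chain window size
        if len(chain) >= 2:
            constraints.append(f"D_{chain[-2]} <= O_{tail}")        # precedence at the tail, via offsets
        scheduler_precedences = list(zip(chain[:-2], chain[1:-1]))  # handled by priorities
        return constraints, scheduler_precedences

    print(freshness_constraints(["s", "1", "4"], 30))
    # (['D_4 - O_s <= 30', 'D_1 <= O_4'], [('s', '1')])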
3.3 Output Separation Constraints
Consider the separation constraints for an output Y, generated by some task τi. As shown in
Figure 5, the window of execution defined by Oi and Di constrains the time variability within a
period. Consider two frames of τi's execution. The widest separation between two successive Y's
occurs when the first output is produced as early as possible in its window and the second as late
as possible; conversely, the opposite situation leads to the smallest separation.
Thus, the separation constraints will be satisfied if the following holds true:
Ti − (Di − Oi) ≥ l(Y) and Ti + (Di − Oi) ≤ u(Y),
where l(Y) and u(Y) denote the minimum and maximum allowable output separation for Y.
2 Note that corresponding issues arise in real-time rate-control in high-speed networks.
[Figure 5: Separation constraints for two frames, showing the earliest and latest output times (Y_earliest, Y_latest) within each window defined by Oi and Di.]
Example. Consider the constraints that arise from the output separation requirements, which are
induced on the output tasks τ4 and τ6. The derived constraints take the form above, bounding T4
and T6 (together with their windows) from below and above.
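The frame argument reduces to two inequalities per output task; the check below is our sketch of it (the exact inequality form is our reconstruction from the two-frame argument, and all numbers are illustrative).

    def separation_ok(T, O, D, min_sep, max_sep):
        """Worst-case spacing of two successive outputs produced in windows [O, D]
        of consecutive frames of length T: widest when the first output is as early
        as possible (O) and the next as late as possible (T + D), narrowest vice versa."""
        widest = T + (D - O)
        narrowest = T - (D - O)
        return narrowest >= min_sep and widest <= max_sep

    # Illustrative values: a 28ms task with a 6ms window, separation bounds [20, 36].
    print(separation_ok(T=28, O=2, D=8, min_sep=20, max_sep=36))   # True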
3.4 Execution Constraints
Clearly, each task needs sufficient time to execute. This simple fact imposes additional constraints
that ensure that each task's maximum execution time can fit into its window. Recall that (1) we
use offset, deadline and period variables for tasks handling external input and output; and (2) we
use period variables and precedence constraints for the intermediate tasks.
We can easily preserve these restrictions when dealing with execution time. For each external
task τi with maximum execution time ei, the following inequalities ensure that the window size is
sufficiently large for the CPU demand:
Oi + ei ≤ Di and Di ≤ Ti.
On the other hand, the intermediate tasks can be handled by imposing restrictions on their constituent
chains. For a single chain, let E denote the chain's execution time from the head to the
last intermediate task (i.e., excluding the outputting task, if any). Then the chain-wise execution
constraints are:
O_h + E ≤ D_m and D_m ≤ T_m,
where O_h is the head's offset, and where D_m and T_m are the last intermediate task's deadline and
period, respectively.
Example. Revisiting the example, analogous execution-time constraints are generated for each external task and for each chain of intermediate tasks.
This completes the set of task-wise constraints C imposed on our ATG. Thus far we have shown
only one part of the problem - how C can be derived from the end-to-end constraints. The end-to-end
requirements will be maintained during runtime (1) if a solution to C is found, and (2) if the
scheduler dispatches the tasks according to the solution's periods, offsets and deadlines. Since
there are many existing schedulers that can handle problem (2), we now turn our attention to
problem (1).
4 Step 2: Constraint Solver
The constraint solver generates instantiations for the periods, deadlines and offsets. In doing so,
it addresses the notion of feasibility by using objective functions which (1) minimize the overall
system utilization; and (2) maximize the window of execution for each task. Unfortunately, the
non-linearities in the optimization criteria - as well as the harmonicity assumptions - lead to a
very complex search problem.
We present a solution which decomposes the problem into relatively tractable parts. Our
decomposition is motivated by the fact that the non-linear constraints are confined to the period
variables, and do not involve deadlines or offsets. This suggests a straightforward approach, which
is presented in Figure 6.
1. The entire constraint set C is projected onto its subspace Ĉ, constraining only the Ti's.
2. The constraint set Ĉ is optimized for minimum utilization.
3. Since we now have values for the Ti's, we can instantiate them in the original constraint set
C. This forms a new, reduced set of constraints C̃, all of whose functions are affine in the Oi's
and Di's. Hence solutions can be found via linear optimization.
The back-edge in Figure 6 refers to the case where the nonlinear optimizer finds values for the
Ti's, but no corresponding solution exists for the Oi's and Di's. Hence, a new instantiation for the
periods must be obtained - a process that continues until either a solution is found, or all possible
values for the Ti's are exhausted.
4.1 Elimination of Offset and Deadline Variables
We use an extension of Fourier variable elimination [6] to simplify our system of constraints.
Intuitively, this step may be viewed as the projection of an n-dimensional polytope (described by
the constraints) onto its lower-dimensional shadow.
[Figure 6: Top-level algorithm to obtain task characteristics - eliminate the Oi's and Di's from the non-linear constraint set, optimize the resulting period-only system with respect to minimum utilization U, then solve the remaining linear constraints on the Oi's and Di's (with a back-edge to the period step if no such solution exists).]
In our case, the n-dimensional polytope is the object described by the initial constraint set C,
and the shadow is the subspace Ĉ, in which only the Ti's are free. The shadow is derived by
eliminating one offset (or deadline) variable at a time, until only period variables remain. At each
stage the new set of constraints is checked for inconsistencies (e.g., 0 > 5). Such a situation means
that the original system was over-specified - and the method terminates with failure.
The technique can best be illustrated by a small example. Consider two inequalities on W4 and T4,
one bounding W4 from below and one bounding it from above. Each constraint defines a line; when
W4 and T4 are restricted to nonzero solutions, the result is a 2-dimensional polygon. Eliminating
the variable W4 is simple: the lower bound on W4 is combined with the upper bound, which yields
an interval of admissible values for T4. Since we are searching for integral, nonzero solutions to T4,
any integer in this interval can be considered a candidate.
When there are multiple constraints on W4 - perhaps involving many other variables - the same
process is used: every lower bound on W4 is combined with every upper bound on W4, until W4
has been eliminated. The correctness of the method follows simply from the polytope's convexity,
i.e., if the original set of constraints has a solution, then the solution is preserved in the shadow.

[Figure 7: Variable elimination for integer solutions - a deviant case.]
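A minimal Fourier-Motzkin step over rational constraints is easy to write down; the sketch below is ours (not the authors' solver) and assumes every constraint has the form Σ_j a_j·x_j ≤ b.

    from fractions import Fraction

    def eliminate(constraints, var):
        """constraints: list of (coeffs: dict var->Fraction, bound) meaning sum(coeffs)*x <= bound.
        Returns an equivalent system over the remaining variables (the 'shadow')."""
        lowers, uppers, rest = [], [], []
        for coeffs, b in constraints:
            a = coeffs.get(var, Fraction(0))
            if a > 0:
                uppers.append((coeffs, b, a))
            elif a < 0:
                lowers.append((coeffs, b, a))
            else:
                rest.append((coeffs, b))
        # Combine every lower bound on `var` with every upper bound so that `var` cancels.
        for cl, bl, al in lowers:
            for cu, bu, au in uppers:
                coeffs = {}
                for v in set(cl) | set(cu):
                    if v == var:
                        continue
                    coeffs[v] = cl.get(v, Fraction(0)) / -al + cu.get(v, Fraction(0)) / au
                rest.append((coeffs, bl / -al + bu / au))
        return rest

    # Example: 2*W - T <= 0 and -W <= -3 (i.e., W >= 3) imply T >= 6 after eliminating W.
    system = [({"W": Fraction(2), "T": Fraction(-1)}, Fraction(0)),
              ({"W": Fraction(-1)}, Fraction(-3))]
    print(eliminate(system, "W"))   # [({'T': Fraction(-1, 2)}, Fraction(-3, 1))], i.e. T >= 6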
Unfortunately, the opposite is not true; hence the requirement for the back-edge in Figure 6.
As we have stated, the refined constraint set Ĉ may possess a solution for the Ti's that does not
correspond to any integral-valued Oi's and Di's. This situation occasionally arises from our quest
for integer solutions to the Ti's - which is essential in preserving our harmonicity assumptions.
For example, consider the triangle in Figure 7. The X-axis projection of the triangle has seven
integer-solutions. On the other hand, none exist for Y , since all of the corresponding real-valued
solutions are "trapped" between 1 and 2.
If, after obtaining a full set of T i 's, we are left without integer values for the O i 's and D i 's, we
can resort to two possible alternatives:
1. Search for rational solutions to the offsets and deadlines, and reduce the clock-granularity
accordingly, or
2. Try to find new values for the T i 's, which will hopefully lead to a full integer solution.
The Example Application - From C to Ĉ. We illustrate the effect of variable elimination on
the example application presented earlier. The derived linear constraints impose lower and upper
bounds on the task periods; the original harmonicity constraints also remain. Here the constraints
on the output tasks (τ4 and τ6) stem from the separation constraints, which impose upper and
lower bounds on the periods.

[Figure 8: Finding the T's - pruning the search space via harmonic chain merging, GCD parent pruning, utilization-based pruning and LCM child pruning, followed by a search over the remaining solutions.]
4.2 From Ĉ to C̃: Deriving the Periods
Once the deadlines and offsets have been eliminated, we have a set of constraints involving only
the task periods. The objective at this point is to obtain a feasible period assignment which (1)
satisfies the derived linear equations; (2) satisfies the harmonicity assumptions; and (3) is subject
to a realizable utilization, i.e., Σ_i e_i/T_i ≤ 1.
As in the example above, the maximum separation constraints will typically mandate that the
solution-space for each T i be bounded from above. Thus we are faced with a decidable problem -
albeit a complex one. In fact there are cases which will defeat all known algorithms. In such cases
there is no alternative to traversing the entire Cartesian-space
where there are n tasks, and where each T i may range within [l
Fortunately the ATG's structure gives rise to four heuristics, each of which can aggressively
prune the search space. The strategy is pictorially rendered in Figure 8.
Let Pred(τi) (Succ(τi)) denote the set of tasks which are predecessors (successors) of task τi, i.e.,
those tasks from (to) which there is a directed path to (from) τi. Since the harmonicity relationship
is transitive, we have that if τj ∈ Succ(τi), then Tj | Ti. This simple fact leads to three of
our four heuristics.
Harmonic Chain Merging extends from the following observation: we do not have to solve for each
Ti as though it were an arbitrary variable in an arbitrary function. Rather, we can combine chains of
processes, and then solve for their base periods. This dramatically reduces the number of free
variables.
GCD Parent Pruning is used to ensure that the head of each chain forms a greatest-common-
divisor for the entire chain. All tuples which violate this property are deleted from the set of
candidate solutions.
Utilization Pruning ensures that candidate solutions maintain a CPU utilization under 100%,
a rather desirable constraint in a hard real-time system.
LCM Child Pruning takes an opposite approach to GCD Parent Pruning. It ensures that a
task's period is a multiple of its predecessors' combined LCM. Since this is the most expensive
pruning measure, it is saved for last.
Harmonic Chain Merging. The first step in the pruning process extends from a simple, but
frequently overlooked, observation: that tasks often over-sample for no discernible reason, and that
unnecessarily low T i 's can easily steal cycles from the tasks that truly need them. For our purposes,
this translates into the following rule:
If a task τi executes with period Ti, and if some τj ∈ Pred(τi) has τi as its only
successor, then τj should also execute with period Ti.
In other words, we will never run a task faster than it needs to be run. In designs where the
periods are ad-hoc artifacts, tuned to achieve the end-to-end constraints, such an approach would
be highly unsafe. Here the rate constraints are analytically derived directly from the end-to-end
requirements. We know "how fast" a task needs to be run, and it makes no sense to run it faster.
This allows us to simplify the ATG by merging nodes, and to limit the number of free variables
in the problem. The method is summed up in the following steps:
(1) If τj ∈ Succ(τi) then Tj | Ti, and consequently Ti ≤ Tj. The first pruning takes place by
propagating this information to tighten the period bounds: for each task τi, the upper bound ui is
tightened to the minimum upper bound among τi's successors, and the lower bound li is tightened
to the maximum lower bound among τi's predecessors.
(2) The second step in the algorithm is to simplify the task graph. Consider a task τi which
has an outgoing edge (τi, τj), and whose maximum period is constrained only by harmonicity
restrictions. The simplification is done by merging τi and τj whenever it is safe to set Ti = Tj, i.e.,
whenever the restricted solution space still contains the optimal solution. The following two rules
give the conditions under which it is safe to perform this simplification.
Rule 1: If a vertex τi has a single outgoing edge (τi, τj), then τi is merged with τj.
Rule 2: If Succ(τi) = Succ(τj) ∪ {τj}, then τi is merged with τj.
Consider the graph in Figure 9. The parenthesized numbers denote the costs of the corresponding
nodes. In the graph, the nodes τ3, τ5, and τ1 each have a single outgoing edge. Using Rule 1, we
merge τ3 and τ5 with τ6, and τ1 with τ4. In the simplified graph, Succ(τs) = {τ1;4, τ2, τ3;5;6} and
Succ(τ2) = {τ1;4, τ3;5;6}; thus, we can invoke Rule 2 to merge τs with τ2.

[Figure 9: Task graph for harmonicity and its simplification; the parenthesized costs include τs (1), τ6 (2), and the merged pseudo-tasks τ1;4 (8) and τ3;5;6 (8).]
This scheme manages to reduce our original seven tasks to three sets of tasks, where each set
can be represented as a pseudo-task with its own period, and an execution time equal to the sum
of its constituent tasks' execution times.
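Rule 1 can be applied mechanically as a small graph transformation. The Python sketch below is our illustration (Rule 2, which would further merge τs with τ2, is omitted); the individual task costs other than those shown in Figure 9 are placeholders chosen only so that the merged sums match.

    def merge_chains(edges, cost):
        """edges: dict task -> set of successor tasks; cost: dict task -> execution time.
        Repeatedly applies Rule 1: a node with exactly one outgoing edge is merged into its successor."""
        edges = {u: set(vs) for u, vs in edges.items()}
        changed = True
        while changed:
            changed = False
            for u in list(edges):
                succs = edges.get(u, set())
                if len(succs) == 1:
                    (v,) = succs
                    merged = f"{u};{v}"
                    cost[merged] = cost.pop(u) + cost.pop(v)
                    edges[merged] = edges.pop(v, set())   # merged node inherits v's successors
                    edges.pop(u)
                    for w, vs in edges.items():           # redirect edges into u or v
                        if u in vs or v in vs:
                            vs.discard(u); vs.discard(v); vs.add(merged)
                    changed = True
                    break
        return edges, cost

    # The example graph of Figure 9 (costs for tau_1..tau_5 are placeholders).
    edges = {"s": {"1", "2", "3"}, "1": {"4"}, "2": {"4", "5"},
             "3": {"5"}, "5": {"6"}, "4": set(), "6": set()}
    cost = {"s": 1, "1": 4, "2": 3, "3": 4, "4": 4, "5": 2, "6": 2}
    print(merge_chains(edges, cost))   # yields the pseudo-tasks 1;4 and 3;5;6 with cost 8 each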
At this point we have reduced the structure of the ATG as much as possible, and we turn
to examining the search space itself. But even here, we can still use the harmonicity restrictions
and utilization bounds as aggressively as possible, with the objective of limiting our search. Let
Φi denote the set of feasible solutions for a period Ti, whose initial solution space is denoted as
{li, li + 1, ..., ui}. The pruning takes place by successively refining and restricting Φi for
each task.
Algorithm 4.1 combines our three remaining pruning techniques - GCD Parent Pruning, Utilization
Pruning and LCM Child Pruning. In the following paragraphs, we explain these steps in
detail, and show how they are applied to our example.
GCD Parent Pruning. Consider any particular node τi in the task graph. The feasible set
of solutions for this node can be reduced by considering the harmonicity relationship with all its
successor nodes. That is, we restrict Φi to values that can provide a base clock-rate for all successor
tasks.
Algorithm 4.1 Prune the feasible search space using the harmonicity and utilization constraints.
/* U_max is the maximum allowable utilization. */
Sort the graph in reverse topological order; let the sorted list be τ1, ..., τn.
for i := 1 to n do /* traverse the list */
    restrict Φi to values that provide a base clock-rate for all of τi's successors;
    /* check the utilization condition for each remaining value in Φi */
    foreach T̂ ∈ Φi do
        U := e_i/T̂ + Σ_{k≠i} e_k / max(Φk);
        if U > U_max then remove T̂ from Φi;
    /* propagate the restricted feasible set to all successors */
    foreach τj ∈ Succ(τi) do
        restrict Φj to integer multiples of the values remaining in Φi;

[Figure 10: Pruning the feasible space for period derivation.]
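The utilization test at the heart of the algorithm fits in a few lines of Python; the sketch below is a simplification (it omits the GCD/LCM propagation), and the execution costs and candidate ranges are illustrative values chosen to be consistent with the merged costs and the final utilization reported for the example, not figures taken from the paper.

    def prune_by_utilization(candidates, exec_time, u_max=1.0):
        """candidates: dict task -> list of candidate periods (Phi_i).
        exec_time: dict task -> worst-case execution time.
        Removes any candidate period whose optimistic overall utilization exceeds u_max."""
        # Optimistic (lowest possible) utilization contribution of each task.
        u_min = {t: exec_time[t] / max(phis) for t, phis in candidates.items()}
        pruned = {}
        for t, phis in candidates.items():
            others = sum(u for s, u in u_min.items() if s != t)
            pruned[t] = [p for p in phis if exec_time[t] / p + others <= u_max]
        return pruned

    # Merged pseudo-tasks from the running example; ranges and costs are illustrative.
    candidates = {"s;2": [7, 14, 21, 28], "1;4": [14, 28], "3;5;6": [21, 42]}
    exec_time = {"s;2": 4, "1;4": 8, "3;5;6": 8}
    print(prune_by_utilization(candidates, exec_time))   # the value 7 for s;2 is pruned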
Within our example, the three merged pseudo-tasks have allowable period ranges determined by
the bounds derived above. The sampler task's period is restricted to values with integral multiples
in both Φ1;4 and Φ3;5;6; after deleting members of Φs;2 that fail to satisfy this property, we are left
with a reduced candidate set for Φs;2.
Utilization-Based Pruning. Let U_max be the upper bound on the utilization that we wish to
achieve. At any stage, a lower bound on the utilization for task τi is given by
U_i^min = e_i / max(Φi).
If the lower bound on the overall utilization, U^min = Σ_i U_i^min, exceeds U_max, then there is no
solution which satisfies the utilization bound. Now, consider a single task τℓ and a candidate value
T̂ ∈ Φℓ. Taking T̂ as the period for τℓ, and the minimum-utilization value for every other task, a
lower bound on the utilization is given by:
U = e_ℓ / T̂ + Σ_{k≠ℓ} U_k^min.
Clearly, if U > U_max, then no feasible solution can be obtained with T̂; hence it may be removed
from the feasible set.
Returning to our example: since τ1;4 and τ3;5;6 have no successors, and the utilization bounds are
satisfied for all of their candidate values, no restriction takes place for them. Now we consider τs;2,
whose period comes from our original sampler task. After testing the possible solutions for
utilization, we obtain a reduced set Φs;2 whose remaining candidates include 14.
LCM Child-Pruning. In general, there may be several chains, each of whose tasks has been
restricted by the utilization test. (In our case we only have two chains which share a common
source.) In the general case, the reduced feasible set for each task τi may be propagated to all
successors τk. This is done by restricting Tk to integer multiples of Ti.
Following this approach in our example leaves a small solution space for the three pseudo-tasks.
If our objective is to achieve optimality, then examining the remaining candidate solutions is probably
unavoidable. In this case, the optimal solution is easily found to be T_{s;2} = 14, T_{1;4} = 28 and
T_{3;5;6} = 42, giving a utilization of 0.7619.
If the remaining solution-space is large, a simple branch-and-bound heuristic can be employed
to control the search. By carefully setting the utilization bound, we can limit the search time
required, since the tighter the utilization bound, the greater is the pruning achieved. Thus, by
starting with a low utilization bound, and successively increasing it, we can reduce the amount of
search time required to achieve optimally low utilization.
However, if the objective is simply finding a solution - any solution - then any of the remaining
candidates can be selected.
4.3 Deriving Offsets and Deadlines
Once the task periods are determined, we need to revisit the constraints to find a solution for the
tasks' deadlines and offsets. This involves finding a solution which maximizes schedulability.
Variable elimination allows us to select values in the reverse order in which the variables were
eliminated. Suppose we eliminated the variables in the order x1, x2, ..., xn. When a value is to be
chosen for xi, the variables x_{i+1}, ..., x_n are already bound to values, and so the constraints in
which xi was eliminated immediately give a lower and an upper bound on xi.
We use this fact in assigning offsets and deadlines to the tasks. As the variables are assigned
values, each variable can be individually optimized. Recall that the feasibility of a task set requires
that the task set never demand a utilization greater than one in any time interval. We use a greedy
heuristic, which attempts to maximize the window of execution for each task. For tasks which
do not have an offset, this is straightforward, and can be achieved by maximizing the deadline.
For input/output tasks which have offsets, we also need to fix the position of the window on the
time-line. We do this by minimizing the offset for input tasks, and maximizing the deadline for
output tasks.
The order in which the variables are assigned is given by the following strategy: First, we assign
the windows for each input task, followed by the windows for each output task. Then, we assign
the offsets for each task, followed by the deadline for each output task. Finally, the deadlines for
the remaining tasks are assigned in reverse topological order of the task graph. For the example
application, the assignment ordering begins with the sampler's window Ws. The final parameters,
derived as a result of this ordering, include the following periods (the derived offsets and deadlines
complete the task set):

Task:    τs   τ1   τ2   τ3   τ4   τ5   τ6
Period:  14   28   14   42   28   42   42

A feasible schedule for the task set is shown in Figure 11. We note that the feasible schedule can
be generated using a fixed-priority ordering that respects the chain precedences, starting with τs.

[Figure 11: A feasible schedule for the example application.]
5 Step 3: Graph Transformation
When the constraint-solver fails, replicating part of a task graph may often prove useful in reducing
the system's utilization. This benefit is realized by eliminating some of the tight harmonicity
requirements, mainly by decoupling the tasks that possess common producers. As a result, the
constraint derivation algorithm has more freedom in choosing looser periods for those tasks.
Recall the example application from Figure 3(B), and the constraints derived in Section 4. In
the resulting system, the producer/consumer pair (τ2, τ5) has the largest period difference (14
and 42). Note that the constraint solver mandated a tight period for τ2, due to the coupled
harmonicity requirements T4 | T2 and T5 | T2. Thus, we choose to replicate the chain including τ2,
from the sampler (τs) to the data object d2. This decouples the data flow to Y1 from that to Y2.
Figure 12 shows the result of the replication.
[Figure 12: The replicated task graph.]
Running the constraint derivation algorithm again with the transformed graph in Figure 12, we
obtain a new set of periods. The transformed system has a utilization of 0.6805, which is significantly
lower than that of the original task graph (0.7619).
The subgraph replication technique begins with selecting a producer/consumer pair which requires
replication. There exist two criteria in selecting a pair, depending on the desired goal. If
the goal is reducing expected utilization, a producer/consumer pair with the maximum period difference
is chosen first. On the other hand, if the goal is achieving feasibility, then we rely on the
feedback from the constraint solver in determining the point of infeasibility.
After a producer/consumer pair is selected, the algorithm constructs a subgraph using a backward
traversal of the task graph from the consumer. In order to avoid excessive replication, the
traversal is terminated at the first confluence point. The resulting subgraph is then replicated and
attached to the original graph.
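The backward traversal might look like the following Python sketch; it is our illustration, and the confluence test is approximated as "a node fed by more than one producer," which is one plausible reading of the termination condition.

    def subgraph_to_replicate(producers_of, start):
        """producers_of: dict node -> list of nodes that feed it (reverse edges of the ATG).
        start: the data object or task at the consumer end of the selected pair.
        Walks backward from `start`, stopping at a source or at the first confluence point
        (approximated here as a node with more than one producer)."""
        subgraph = []
        node = start
        while True:
            subgraph.append(node)
            feeders = producers_of.get(node, [])
            if len(feeders) != 1:          # source node or confluence point: stop
                break
            node = feeders[0]
        return subgraph

    # Reverse edges for the running example (tasks and data objects intermixed).
    producers_of = {"d2": ["2"], "2": ["dX2"], "dX2": ["s"], "s": []}
    print(subgraph_to_replicate(producers_of, "d2"))   # ['d2', '2', 'dX2', 's']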
The producer task in a replication may, in turn, be further specialized for the output it serves.
For example, consider a task graph with two consumers τc1 and τc2 and a common producer τp.
If we replicate the producer, we have two independent producer/consumer pairs, namely (τp1, τc1)
and (τp2, τc2). Since τp2 only serves τc2, we can eliminate all of τp2's operations that only contribute
to the output for τc1. This is done by dead code elimination, a common compiler optimization. The
same specialization is done for τp1.
6 Step 4: Buffer Allocation
Buffer allocation is the final step of our approach, and hence applied to the feasible task graph
whose timing characteristics are completely derived. During this step, the compiler tool determines
the buffer space required by each data object, and replaces its associated reads and writes with
simple macros. The macros ensure that each consumer reads temporally correlated data from
several data objects - even when these objects are produced at vastly different rates. The reads
and writes are nonblocking and asynchronous, and hence we consider each buffer to have a "virtual
sequence number."
Combining a set of correlated data at a given confluence point appears to be a nontrivial
venture. After all, (1) producers and the consumers may be running at different rates; and (2) the
flow delays from a common sampler to the distinct producers may also be different. However, due
to the harmonicity assumption the solution strategy is quite simple. Given that there are sufficient
buffers for a data object, the following rule is used:
"Whenever a consumer reads from a channel, it uses the first item that was generated
within its current period."
For example, let τp be a producer of a data object d, and let τc1, ..., τcn be the consumers that
read d. Then the communication mechanism is realized by the following techniques (where
L = LCM_{1≤i≤n}(T_{ci}) is the least common multiple of the consumers' periods):
(1) The data object d is implemented with L/T_p buffers.
(2) The producer τp writes circularly into the buffers, one slot at a time.
(3) The consumer τci reads circularly from slots (0, T_{ci}/T_p, 2 · T_{ci}/T_p, ...); a small numeric
sketch of this slot arithmetic is given below.
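The slot arithmetic is easy to check in a few lines of Python; the function name is ours, but the numbers reproduce the d2 example that follows (this assumes Python 3.9+ for math.lcm).

    from math import lcm

    def buffer_plan(t_producer, consumer_periods):
        """Slot count for a data object and the slot indices each consumer reads per hyperperiod."""
        hyper = lcm(*consumer_periods)
        slots = hyper // t_producer                      # producer writes one slot per period
        reads = {T: [k * (T // t_producer) for k in range(hyper // T)]
                 for T in consumer_periods}
        return slots, reads

    # d2 in the running example: producer tau_2 at 14ms, consumers tau_4 at 28ms and tau_5 at 42ms.
    print(buffer_plan(14, [28, 42]))   # -> (6, {28: [0, 2, 4], 42: [0, 3]})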
[Figure 13: A task graph with buffers.]
Consider the three tasks τ2, τ4 and τ5 in our example, before we performed graph replication. The
two consumer tasks τ4 and τ5 run with periods 28 and 42, respectively, while the producer τ2 runs
with period 14. Thus, the data object requires a 6-place buffer (LCM(28, 42)/14 = 6); τ4 reads
from slots (0, 2, 4), and τ5 reads from slots (0, 3). Figure 13 shows the relevant part of the task
graph after the buffer allocation.
After the buffer allocation, the compiler tool expands each data object into a multiple-place
buffer, and replaces each read and write operation with macros that perform the proper pointer
updates. Figure 14 shows the results of the macro-expansion, after it is applied to τ4's code from
Figure 1(B). Note that τ1, τ2 and τ4 run at periods of 28, 14 and 28, respectively.
7 Conclusion
We have presented a four-step design methodology to help synthesize end-to-end requirements into
full-blown real-time systems. Our framework can be used as long as the following ingredients are
provided: (1) the entity-relationships, as specified by an asynchronous task graph abstraction; and
(2) end-to-end constraints imposed on freshness, input correlation and allowable output separation.
This model is sufficiently expressive to capture the temporal requirements - as well as the modular
structure - of many interesting systems from the domains of avionics, robotics, control and
multimedia computing.
However, the asynchronous, fully periodic model does have its limitations; for example, we
cannot support high-level blocking primitives such as RPCs. On the other hand this deficit yields
significant gains; e.g., handling streamed, tightly correlated data solely via the "virtual sequence
numbers" afforded by the rate-assignments.

[Figure 14: Instantiated code with copy-in/copy-out channels and memory-mapped IO - τ4's expanded code, dispatched "every 28" time units and indexing circularly into Buffer1 and Buffer2.]
There is much work to be carried out. First, the constraint derivation algorithm can be extended
to take full advantage of a wider spectrum of timing constraints, such as those encountered in
input-driven, reactive systems. Also, we can harness finer-grained compiler transformations such
as program slicing to help transform tasks into read-compute-write-compute phases, which will even
further enhance schedulability. We have used this approach in a real-time compiler tool [7], and
there is reason to believe that its use would be even more effective here.
We are also streamlining our search algorithm, by incorporating scheduling-specific decisions
into the constraint solver. We believe that when used properly, such policy-specific strategies will
help significantly in pruning the search space.
But the greatest challenge lies in extending the technique to distributed systems. Certainly a
global optimization is impractical, since the search-space is much too large. Rather, we are taking
a compositional approach - by finding approximate solutions for each node, and then refining each
node's solution-space to accommodate the system's bound on network utilization.
Acknowledgements
The authors gratefully acknowledge Bill Pugh, who was an invaluable resource, critic, and friend
throughout the development of this paper. In particular, Bill was our best reference on the topic
of nonlinear optimization. We are also grateful for the insightful comments of the TimeWare
group members: Jeff Fischer, Ladan Gharai, Tefvik Bultan and Dong-In Kang (in addition to the
authors). In particular, discussions with Ladan and Jeff were great "sounding boards" when we
formalized the problem, and they gave valuable advice while we developed a solution.
References
Hard real-time scheduling: The deadline-monotonic approach
Data consistency in hard real-time systems
ESTEREL: Towards a synchronous and semantically sound high level language for real time applications.
Preemptive priority based scheduling: An appropriate engineering approach.
Fixed Priority Scheduling of Periodic Tasks with Varying Execution Priority.
Safety analysis of timing properties in real-time systems
The real-time producer/consumer paradigm: A paradigm for the construction of efficient
Scheduling algorithm for multiprogramming in a hard real-time envi- ronment
Priority inheritance protocols: An approach to real-time synchronization
Using offset information to analyse static priority pre-emptively scheduled task sets
An extendible approach for analysing fixed priority hard real-time tasks
Scheduling processes with release times
A Decomposition Approach to Real-Time Scheduling
Scheduling Tasks with Resource requirements in a Hard Real-Time System
Dinesh Ramanathan , Ravindra Jejurikar , Rajesh K. Gupta, Timing driven co-design of networked embedded systems, Proceedings of the 2000 conference on Asia South Pacific design automation, p.117-122, January 2000, Yokohama, Japan | design methodology;non-linear optimization;end-to-end timing constraints;real-time;static priority scheduling;constraint solving |